\chapter{Summary and future perspectives}\label{chapter:ch6}
This work explored the dynamic features of brain function in young adolescents and how they are affected in individuals who were born preterm. To this end, both task-based and resting-state fMRI paradigms were analysed, and a methodological advancement was developed to bring the advantages of dynamic approaches to task-based studies as well. Here, I summarise our main findings and identify potential avenues for future work that builds upon the analyses and clinical contributions presented so far.
\section{Summary of findings}
\textbf{Resting-state brain dynamics in preterm-born young adolescents:} We investigated, for the first time, the development of blood oxygenation level dependent (BOLD) signal variability and co-activation patterns (CAPs) in young adolescents born prematurely as compared to fullterm-born controls. To address the high dimensionality of voxelwise BOLD variability maps, we employed a partial least squares correlation (PLSC) approach to identify multivariate patterns of alterations across groups and how they relate to life course measures, namely age at assessment, gestational age, and the interaction between the two. We used a similar PLSC approach to identify differences in CAP expression between the groups and their relationship with the aforementioned life course variables. Through this approach, we discovered that the development of BOLD signal variability is indeed altered in the preterm group in a broad pattern distributed across several areas of the brain, most notably the bilateral hippocampi and the salience network, including the bilateral insulae and anterior cingulate cortex (ACC). Since the ACC has recently been identified in other studies as showing altered connectivity in preterm-born individuals, we investigated this region further by performing an ACC-based CAPs analysis to uncover dynamic functional connectivity patterns that arise across age. As in the BOLD variability analysis, we found different trajectories of CAP development in the two groups. Indeed, the change in the balance between internally- and externally-oriented networks across age is more accentuated in the preterm group. Taken together, our observations suggest that the preterm-born brain triggers neurological compensation mechanisms that start during the highly dynamic age range of early adolescence but fail to reach an optimal balance.
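To make the PLSC step concrete, the sketch below outlines its core computation, namely the singular value decomposition of the cross-covariance between the two data blocks. Variable names are illustrative, and the permutation and bootstrap procedures used to assess the significance and stability of the resulting saliences are omitted.
\begin{verbatim}
# Minimal PLSC sketch (illustrative names): relate voxelwise BOLD
# variability maps (subjects x voxels) to life course variables
# (subjects x 3: age, gestational age, and their interaction).
import numpy as np

def plsc(brain, design):
    X = (brain - brain.mean(0)) / brain.std(0)    # z-score across subjects
    Y = (design - design.mean(0)) / design.std(0)
    R = Y.T @ X                                   # cross-covariance matrix
    # SVD yields paired design saliences (U) and brain saliences (Vt)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U, s, Vt
\end{verbatim}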
\textbf{Reality Filtering task processing in preterm-born young adolescents:} To test whether pre\-term-born young adolescents would be able to complete a task that relies on a particularly vulnerable region in this population (the orbitofrontal cortex, OFC), we performed a reality filtering experiment. By examining how brain activation changed depending on the type of stimulus being shown and comparing the preterm and control groups, we found that, despite performing the task with accuracy comparable to the fullterm group, the preterm group showed lower levels of OFC activation. Moreover, no other regions were significantly more activated in the preterm group than in controls. This suggests that preterm-born individuals may have developed mechanisms to optimise OFC activity such that they are still able to perform the task without depending on the same level of activation as the control group.
\textbf{Time-resolved task-driven modulations of brain connectivity:}
We thus proceeded to investigate how brain organisation changes over time as a result of task performance in the preterm population. Given their higher risk of attentional and socio-emotional deficits, we designed a task that combines a movie-watching component with an emotion regulation one. To explore this rich data set, we first developed a time-resolved method to recover task-driven co-activation patterns (PPI-CAPs) and analyse their relationship with the seed, the task, or an interaction between the two. We initially validated this framework in an adult data set. Once the methodology was stable, we applied it to the preterm data and extended the method to allow group comparisons. Here, we identified a series of dorsal ACC-based co-activation patterns that vary differently in preterm-born participants and controls according to the task. Interestingly, a new pattern including the limbic network emerged from these data that had not been found in the resting-state CAP analysis from Chapter \ref{chapter:ch3}. This is in line with existing evidence that the limbic network is involved in emotion processing. Together, these results highlight the relevance of studying brain dynamics in clinical populations. The code developed to perform the analysis described in this work has been made available at \url{https://github.com/lorenafreitas/PPI_CAPs}.
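As a schematic illustration of the idea (the repository above contains the reference implementation), the sketch below clusters fMRI frames into co-activation patterns and relates the occurrence of each pattern to the seed, the task, and their interaction; details such as frame selection, polarity handling and statistical testing are deliberately left out.
\begin{verbatim}
# Schematic PPI-CAPs sketch, not the released implementation.
import numpy as np
from sklearn.cluster import KMeans

def ppi_caps_sketch(frames, seed, task, n_caps=4):
    # frames: (T, V) fMRI volumes; seed, task: (T,) regressors
    labels = KMeans(n_clusters=n_caps, n_init=10).fit_predict(frames)
    # PPI-style design: seed, task, interaction, intercept
    design = np.column_stack([seed, task, seed * task, np.ones_like(seed)])
    # relate each CAP's occurrence time course to the design
    return {c: np.linalg.lstsq(design, (labels == c).astype(float),
                               rcond=None)[0]
            for c in range(n_caps)}
\end{verbatim}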
\section{Perspective for future research}
\subsection*{Linking dynamic brain function and clinical outcomes}
The studies presented in this thesis characterise dynamic features of brain function in preterm-born young adolescents and how they compare to fullterm-born individuals at this age. Several of these features involve areas known to be part of higher-order functional networks (see, for example, Chapter \ref{chapter:ch3}). An interesting follow-up will thus be to examine how the markers unveiled here relate to clinical outcomes such as attention levels, working memory, and executive functions. In fact, these data are available from the \textit{Building the Path to Resilience in Preterm-Born Infants} project, of which this work is a constituent part. This would thus be a natural and feasible development for the near future.
\subsection*{Mindfulness meditation as a potential intervention in preterm young adolescents}
Although interventions for premature babies have been routinely implemented with a positive effect on cognition and motor abilities \citep{Ferreira2020}, so far there is no consensus regarding procedures applied at later stages in life. Socio-emotional and executive function skills are, however, still in full development during childhood and adolescence, suggesting that this age may still lie within the intervention window.
Mindfulness meditation is a form of mind training to develop a reflective (as opposed to reflexive) way of responding to both internal and external events \citep{Bishop2004} that involves attention, attitude and intention. As \citet{Kabat-Zinn1994} describes it, it involves ``paying attention (Attention), in a particular way (Attitude), on purpose (Intention), in the present moment, and non-judgmentally''. Studies on its benefits for physical and mental health, as well as on its neurocognitive mechanisms, have gained increasing popularity in investigations involving adults, children and adolescents. In fact, even short sessions of meditation given to inexperienced participants have been deemed enough to improve attention levels \citep{Norris2018, Jankowski2020}. Moreover, regular practice has been shown to have long-term effects on attention \citep{Zanesco2018} and brain function. Benefits such as these have put meditation in the spotlight as a potential intervention in clinical practice \citep{Simkin2014,Zhang2018}.
In young populations, mindfulness meditation training has emerged as a potential tool to help manage a wide variety of symptoms, including disruptive behaviour \citep{Perry-Parrish2016} and lack of attention \citep{Zhang2018}. A study involving typically developing 11-year-old children showed that 8 weeks of mindfulness training already has the potential to improve attentional self-regulation \citep{Felver2017}. Another found that meditation programmes can enhance cognitive and social-emotional development in young populations \citep{Schonert-Reichl2015}. Taken together, these results further support a link between these cognitive domains and suggest that mindfulness meditation may be an avenue for intervention in clinical populations.
The \textit{Building the Path to Resilience in Preterm-Born Infants} project, of which this thesis is part, has acquired functional and structural MRI data from the preterm-born young adolescents studied here after 8 weeks of mindfulness training. Crucially, mindfulness has recently been found to relate to dynamic --- as opposed to static --- features of neural function and neural network interactions over time \citep{Marusak2018}. This highlights PPI-CAPs as a compelling avenue to explore the effects of mindfulness meditation as a potential intervention for young adolescents born prematurely, as its focus is precisely to uncover dynamic aspects of brain function during performance of a task.
\subsection*{PPI-CAPs as markers for neurofeedback}
fMRI neurofeedback (NF) is a technique in which real-time information about a person's own brain activity is fed back to them, giving them the chance to attempt to control it. It has been found to be a promising means to reshape neural activity, and has been used as an intervention tool in several neurological and psychiatric disorders \citep{Guntensperger2017, Misaki2019}, including in adolescent clinical populations \citep{Alegria2017}. Most relevant for this work is its potential for self-driven modulation of emotion processing domains, both in adult \citep{Koush2017,Lorenzetti2018} and in child and adolescent populations \citep{CohenKadosh2016}. In most of these studies, a seed region is selected for which information on activation levels is provided to the user, who then tries to modulate that brain area's activity.
Recently, \citet{Koush2017} showed that it is also possible to gain control over entire networks related to emotion regulation using a connectivity-neurofeedback approach. This opens a promising avenue for future research built on the basis of this thesis. In Chapter \ref{chapter:ch5}, we introduced Psychophysiological Interaction of Co-Activation Patterns (PPI-CAPs) as a seed-based method to investigate time-resolved changes in effective connectivity in a task-based setting. With this method, we have investigated differences in dynamic brain function during the performance of a task in the preterm group as compared to fullterm-born controls. The very PPI-CAPs that are less elicited by the clinical population could potentially be used as an NF target in future studies. In this paradigm, an initial run could be performed to identify target PPI-CAPs --- that is, those which were most differently expressed between groups. Then, subsequent runs would be carried out in which the subject's goal is to attempt to reproduce that pattern. It is important to note, however, that although fMRI NF has been shown to successfully modulate activation and connectivity in the brain, and to lead to behavioural changes, how this translates into clinically significant improvements remains debatable \citep{Thibault2018}.
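As a purely illustrative sketch of such a paradigm (nothing of the kind is implemented in the present work), the feedback value shown to the participant could be the spatial similarity between the current fMRI volume and the target PPI-CAP identified in the initial run:
\begin{verbatim}
# Hypothetical feedback computation for a PPI-CAP-based NF run.
import numpy as np

def feedback_signal(current_volume, target_cap):
    v = current_volume - current_volume.mean()
    t = target_cap - target_cap.mean()
    r = float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t)))
    return (r + 1) / 2   # Pearson similarity mapped to [0, 1] for display
\end{verbatim}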
\subsection*{Extensions for PPI-CAPs}
As described in more detail in Section \ref{sec:extensions_ppicaps} of Chapter \ref{chapter:ch5} (Potential extensions of the PPI-CAPs approach), there are several ways in which this methodology could be extended to capture additional information on the dynamic features of brain function. Of note, rather than approaching brain function as a series of separately elicited brain states \citep{Leonardi2014,Gonzalez-Castillo2015a, Freitas2020}, one could think of it as several patterns that may overlap with each other in dynamic ways \citep{Karahanoglu2013,Karahanoglu2015a}. The so-called \textit{innovation signals} from the work of \citet{Karahanoglu2013} reflect moments in which there are significant \textit{changes} in the activation intensity of certain brain areas, rather than pure amplitude.
One could thus apply the frame selection and clustering steps on these signals to yield \textit{innovation-driven} PPI-CAPs (or PPI-\textit{i}CAPs), which would represent spatial patterns of voxels whose signals \textit{transition} simultaneously. Backprojecting these would then recover their time courses, thus revealing moments when different combinations of those patterns may overlap.
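A rough sketch of this extension is given below, under the simplifying assumption that innovations are approximated by frame-to-frame differences (the original iCAPs framework derives them through total activation deconvolution instead); transition frames are clustered and the resulting patterns backprojected onto the innovation signals to recover their time courses.
\begin{verbatim}
# Rough PPI-iCAPs sketch under simplifying assumptions (see text).
import numpy as np
from sklearn.cluster import KMeans

def ppi_icaps_sketch(frames, n_patterns=5, threshold=1.0):
    innovations = np.diff(frames, axis=0)            # transition signals
    strong = np.abs(innovations).mean(axis=1) > threshold
    centers = KMeans(n_clusters=n_patterns,
                     n_init=10).fit(innovations[strong]).cluster_centers_
    # backprojection: express every innovation frame in the pattern basis
    time_courses, *_ = np.linalg.lstsq(centers.T, innovations.T, rcond=None)
    return centers, time_courses.T
\end{verbatim}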
Another attractive route for extension would be to consider the introduction of temporal relationships between successive time points. This has been shown to be a promising avenue both when considering sequential \citep{Eavani2013,Chen2016, Vidaurre2017} or overlapping \citep{Sourty2016, Bolton2018a} brain states. For instance, in the case of the present work, given that the results from Chapter \ref{chapter:ch5}.1 revealed default mode network (PPI-CAP\textsubscript{3}), fronto-parietal network (PPI-CAP\textsubscript{1} and PPI-CAP\textsubscript{2}) and salience network (PPI-CAP\textsubscript{2}) contributions during movie-watching, causal interplays between these networks could be assessed in the context of the so-called triple network model \citep{Menon2011}.
So far, PPI-CAPs address temporal dynamics alone, without taking into consideration how to optimally tackle the spatial dimension of the data. One extension could thus be to inject a spatial prior in deriving PPI-CAPs \citep{Zhuang2018}, or to study the spatial variability of task-related functional activity patterns in more detail \citep{Kiviniemi2011}. This could be achieved by separately considering, for each PPI-CAP, the pools of frames linked to given task contexts, and carrying out statistical comparisons at this level \citep{Amico2014}.
Finally, one could investigate measures that are more sophisticated than pure occurrences as features of interest \citep{Chen2015, Bolton2020}, or broaden the analysis of PPI-CAPs to a meta-state perspective \citep{Miller2016,Vidaurre2017}, where a meta-state would symbolise a particular combination of expression and polarity of the investigated patterns.
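As a toy illustration of such an encoding, and assuming per-frame expression values for each pattern are available (for example from backprojection), every frame could be summarised by the signed presence of all patterns:
\begin{verbatim}
# Toy meta-state encoding from per-frame pattern expression values.
import numpy as np

def meta_states(expression, threshold=0.5):
    # expression: (n_frames, n_caps); each frame becomes a tuple such as
    # (+1, 0, -1, ...), i.e. expression and polarity of every pattern
    coded = np.where(expression > threshold, 1,
                     np.where(expression < -threshold, -1, 0))
    return [tuple(row) for row in coded]
\end{verbatim}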
The ultimate goal is to apply novel tools to better understand brain function both in healthy individuals and in clinical cohorts. I believe that the future avenues presented here would help provide a more accurate picture of brain function dynamics and hold great potential for these populations.
\subsection*{Probing into structure-function relationships}
While the main goal of this thesis was to focus on the relevance of functional brain dynamics for the study of preterm birth, it is important to consider that the brain's underlying structural architecture clearly affects not only static measures of brain function \citep{Honey2009} but also dynamic ones \citep{Hansen2015}. However, for reasons that remain to be explored --- and may include non-linear neural processing in specific brain areas as well as confounding physiological artefacts --- the BOLD signal contains information that does not simply reproduce that of brain structure. Therefore, analysing both together may bring relevant, additional information that previous studies have missed. For instance, \citet{Amico2018} demonstrated how an approach combining multimodal canonical correlation analysis with joint independent component analysis can be used to investigate structural-functional alterations by recovering task-sensitive ``hybrid'' patterns of connectivity that represent subjects' connectivity fingerprints.
Graph signal processing (GSP) has recently emerged in the neuroimaging field as a novel framework for brain data analysis that integrates brain structure and function (see \citet{Huang2018} for a broad overview). In this scheme, brain structure defines a graph representation in which brain regions are the nodes and white matter tracts are the edges, while each frame of fMRI activity is a temporal sample of a signal living on this graph. More recently, this concept has seen an interesting extension in which high-quality activity time courses within the white matter are derived through the combination of a voxel-wise structural graph and grey matter activity \citep{Tarun2020}.
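To illustrate the basic GSP operation (a generic sketch, not the specific pipeline of the cited works), a structural connectome defines a graph Laplacian whose eigenmodes serve as a graph Fourier basis onto which each fMRI frame can be decomposed:
\begin{verbatim}
# Generic graph Fourier transform of one fMRI frame on a structural graph.
import numpy as np

def graph_fourier_transform(A, x):
    # A: (regions x regions) structural connectivity; x: signal on the nodes
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    eigenvalues, eigenvectors = np.linalg.eigh(L)
    return eigenvalues, eigenvectors.T @ x  # graph Fourier coefficients
\end{verbatim}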
To the best of our knowledge, graph analyses on preterm-born populations to date have solely considered either structure or function individually. Therefore, this remains a promising avenue to obtain a better-informed picture of brain function in prematurity.
\subsection*{A note on addressing the global challenge of prematurity}
An important issue in the study of the neurodevelopmental effects of preterm birth is that, although most of the global burden of preterm birth is shouldered by low- and middle-income countries (LMICs), only a tiny portion of the currently available research evidence for prevention and treatment comes from these settings \citep{Smid2016}. However, since high-income countries tend to offer more funding for research and in many cases have better facilities at researchers' disposal, investigations in these regions into potential biomarkers for targeted interventions that improve clinical outcomes in this population are also of utmost importance. Once non-invasive interventions such as the ones the \textit{Building the Path to Resilience in Preterm-Born Infants} project --- of which this thesis is a constituent part --- aims to investigate are found, the need for expensive equipment such as MRI machines will become less critical, and LMIC populations will also be able to benefit from them.
% https://www.who.int/news-room/fact-sheets/detail/preterm-birth
\clearpage
\subsubsection{Template fit method}
\label{sec:bkg_mj_we_ff}
The multijet background expectation in the $W\rightarrow \ell\nu$ ($\ell=e,\mu$) channel is divided among heavy-quark decays, conversions, and hadrons faking leptons.
Since it is very difficult to model these events in MC, a template derived from data with a modified lepton selection (control region) is used to fit the missing transverse energy ($E_T^{miss}$) distribution in the signal region.
In this method, an enriched multijet template is constructed by selecting events from data in which the lepton passes the loose likelihood identification criteria but fails the tight criteria.
% (electron from photons conversion may be isolated, but the TightLH identification criteria places a requirement on the number of expected inner detector B-Layer hits, reducing the presence of the conversion component in the signal region).
% Moreover, signal electrons must pass the isolation {\fontfamily{txtt}\selectfont FCTight} working point as defined by the {\fontfamily{txtt}\selectfont IsolationSelectionTool}~\cite{EGammaIdentificationRun2} while multijet-template electrons must fail.
% \todo{Pass isolation for signal and fail for MJ leptons.}
% \todo{Apply MET rebuild using MJ template leptons.}
% \todo{Fix trigger selection for MJ template.}
% \todo{I'd expect to see more diff for plots \ref{fig:FR2_ff} and \ref{fig:SR_ff}!}
% \todo{Do we relax both cuts for FR1/MJ?}
% The signal selection uses a loose likelihood lepton trigger and, by selecting loose leptons with this trigger, one will observe a high suppression of loose leptons.
% Because of this effect, a single lepton \todo{loose trigger is used instead}....
The template needs to be built with a discriminating variable that can separate the background from the
signal.
In this case, $E_{T}^{miss}$ is constructed using the selected lepton-like object and the missing energy in the event but, for the template, the minimum cuts of $E_{T}^{miss} > 25$~GeV and $m_{T}^{W} > 40$~GeV present in the signal selection are not applied, giving access to the low-$E_{T}^{miss}$ and low-$m_{T}^{W}$ region where the QCD background is dominant.
It is expected that some events from signal, electroweak processes and top background pass the control region selection, causing contamination of this sample.
However, this contamination can be estimated, and subsequently subtracted, by selecting events from Monte Carlo simulation of these processes that pass the control region selection.
The results of this technique are shown in Figure~\ref{fig:FR2_ff}, which displays the distributions of the events and of the contamination.
The contamination from signal and non-QCD background in the electron channel is estimated to be about 5\% of the events in the template selection region (where no transverse mass and missing energy cuts are applied).
If only events above the transverse mass cut of 40 GeV and with missing energy above 25 GeV are considered, the contamination corresponds to about 10.2\% of the number of events in the QCD background estimate.
For the muon channel, the contamination from signal and non-QCD background is estimated to be about 13.7\% of the events in the template selection region.
Considering only events above the transverse mass cut of 40 GeV and with missing energy above 25 GeV, the contamination corresponds to about 43.3\% of the number of events in the QCD background estimate.
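Schematically, the template construction amounts to a bin-by-bin subtraction of the simulated contamination from the control-region data (a minimal sketch with hypothetical histogram arrays, not the analysis code):
\begin{verbatim}
# Minimal sketch: multijet template = control-region data minus the
# simulated signal, electroweak and top contamination (hypothetical inputs).
import numpy as np

def multijet_template(data_cr, mc_cr_list):
    template = data_cr.astype(float) - np.sum(mc_cr_list, axis=0)
    return np.clip(template, 0.0, None)   # avoid negative bin contents
\end{verbatim}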
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-FR2region-el.pdf}
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-FR2region-mu.pdf}
\caption{
The W transverse mass spectrum reconstructed in the electron (left) and muon (right) channel, for events passing the control region selection.
}
\label{fig:FR2_ff}
\end{figure}
A template fit to the $E_{T}^{miss}$ distribution is then performed, before applying the $m_{T}^{W} > 50$~GeV and $E_{T}^{miss}>25$~GeV cuts.
In the fit, the group of EWK processes and the multijet component are left free to float.
% , while the other backgrounds are constrained to their normalisation to the luminosity.
The template fit is performed for all events without any lepton charge selection.
The scale factors obtained from the fit are applied to the template described previously, with the contamination subtracted; the result is shown in Fig.~\ref{fig:FR1_ff}.
Then, an estimate of the fraction of multijet contamination in the signal region (SR) is obtained
using the fit scale factors and the data and MC templates with a cut on $m_{T}^{W} > 50$~GeV and $E_{T}^{miss}>25$~GeV.
The fractions of multijet background extrapolated to the signal region are displayed in Fig.~\ref{fig:SR_ff}.
The electron channel is considerably more contaminated by the multijet background than the muon channel: the fake contributions are 10.5\% and 1.3\%, respectively.
This is one more reason why we focus on the ratio of muons rather than electrons.
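For illustration, the fit can be thought of as a binned maximum-likelihood fit in which the multijet and electroweak normalisations float freely (a minimal sketch with hypothetical inputs, not the fitting code used in the analysis):
\begin{verbatim}
# Minimal sketch of a binned template fit with two free normalisations.
import numpy as np
from scipy.optimize import minimize

def fit_scale_factors(data, mj_template, ewk_template):
    def nll(scales):
        mu = np.clip(scales[0] * mj_template + scales[1] * ewk_template,
                     1e-9, None)
        return np.sum(mu - data * np.log(mu))   # Poisson log-likelihood
    return minimize(nll, x0=np.array([1.0, 1.0]), method="Nelder-Mead").x
\end{verbatim}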
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-FR1afterFit-el.pdf}
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-FR1afterFit-mu.pdf}
\caption{
Results of the template fit to the full $m_{T}^{W}$ and $E_{T}^{miss}$ spectra for signal and backgrounds, for the $W\rightarrow e\nu$ (left) and $W\rightarrow \mu\nu$ (right) channels, obtained from the template with inverted isolation and inverted likelihood identification criteria.
}
\label{fig:FR1_ff}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-SRafterFit-el.pdf}
\includegraphics[width=0.45\textwidth]{figures/SR_MJ/fakeSlices-met_reco_et__M_T-AIso_AID-met_reco_et-antiIsoTight_FakeLepQual-SRafterFit-mu.pdf}
\caption{
Results of the multijet background template fit extrapolation to the signal region for signal and backgrounds, for the $W\rightarrow e\nu$ (left) and $W\rightarrow \mu\nu$ (right) channels, obtained from the template with inverted isolation and inverted likelihood identification criteria.
}
\label{fig:SR_ff}
\end{figure}
\input{layout}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> EXAMPLE
\subsection{Example bug}
\begin{bug}{}{}
\caption{: description}
Solution:\\
solution
Link: \href{}{\link{<optional>}}\\
\end{bug}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CPP BUGS
\newpage{}
\section{Cpp Bugs} \label{sec:cpp_bugs}
\input{cpp_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> GIT BUGS
\newpage{}
\section{GIT Bugs} \label{sec:git_bugs}
\input{git_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> JAVA BUGS
\newpage{}
\section{Java Bugs} \label{sec:java_bugs}
\input{java_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> MATLAB BUGS
\newpage{}
\section{Matlab Bugs} \label{sec:matlab_bugs}
\input{matlab_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> SIMULINK/DSPACE BUGS
\newpage{}
\section{Simulink/DSpace Bugs} \label{sec:simulink_dspace_bugs}
\input{simulink_dspace_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OCTAVE BUGS
\newpage{}
\section{Octave Bugs} \label{sec:octave_bugs}
\input{octave_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PYTHON BUGS
\newpage{}
\section{Python Bugs} \label{sec:python_bugs}
\input{python_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ROS BUGS
\newpage{}
\section{ROS Bugs} \label{sec:ros_bugs}
\input{ros_bugs}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\end{document}
% Default to the notebook output style
% Inherit from the specified cell style.
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
% Nicer default font (+ math font) than Computer Modern for most use cases
\usepackage{mathpazo}
% Basic figure setup, for now with no caption control since it's done
% automatically by Pandoc (which extracts  syntax from Markdown).
\usepackage{graphicx}
% We will generate all images so they have a width \maxwidth. This means
% that they will get their normal width if they fit onto the page, but
% are scaled down if they would overflow the margins.
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth
\else\Gin@nat@width\fi}
\makeatother
\let\Oldincludegraphics\includegraphics
% Set max figure width to be 80% of text width, for now hardcoded.
\renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}}
% Ensure that by default, figures have no caption (until we provide a
% proper Figure object with a Caption API and a way to capture that
% in the conversion process - todo).
\usepackage{caption}
\DeclareCaptionLabelFormat{nolabel}{}
\captionsetup{labelformat=nolabel}
\usepackage{adjustbox} % Used to constrain images to a maximum size
\usepackage{xcolor} % Allow colors to be defined
\usepackage{enumerate} % Needed for markdown enumerations to work
\usepackage{geometry} % Used to adjust the document margins
\usepackage{amsmath} % Equations
\usepackage{amssymb} % Equations
\usepackage{textcomp} % defines textquotesingle
% Hack from http://tex.stackexchange.com/a/47451/13684:
\AtBeginDocument{%
\def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code
}
\usepackage{upquote} % Upright quotes for verbatim code
\usepackage{eurosym} % defines \euro
\usepackage[mathletters]{ucs} % Extended unicode (utf-8) support
\usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document
\usepackage{fancyvrb} % verbatim replacement that allows latex
\usepackage{grffile} % extends the file name processing of package graphics
% to support a larger range
% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)
\usepackage{hyperref}
\usepackage{longtable} % longtable support required by pandoc >1.10
\usepackage{booktabs} % table support for pandoc > 1.12.2
\usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment)
\usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout)
% normalem makes italics be italics, not underlines
% Colors for the hyperref package
\definecolor{urlcolor}{rgb}{0,.145,.698}
\definecolor{linkcolor}{rgb}{.71,0.21,0.01}
\definecolor{citecolor}{rgb}{.12,.54,.11}
% ANSI colors
\definecolor{ansi-black}{HTML}{3E424D}
\definecolor{ansi-black-intense}{HTML}{282C36}
\definecolor{ansi-red}{HTML}{E75C58}
\definecolor{ansi-red-intense}{HTML}{B22B31}
\definecolor{ansi-green}{HTML}{00A250}
\definecolor{ansi-green-intense}{HTML}{007427}
\definecolor{ansi-yellow}{HTML}{DDB62B}
\definecolor{ansi-yellow-intense}{HTML}{B27D12}
\definecolor{ansi-blue}{HTML}{208FFB}
\definecolor{ansi-blue-intense}{HTML}{0065CA}
\definecolor{ansi-magenta}{HTML}{D160C4}
\definecolor{ansi-magenta-intense}{HTML}{A03196}
\definecolor{ansi-cyan}{HTML}{60C6C8}
\definecolor{ansi-cyan-intense}{HTML}{258F8F}
\definecolor{ansi-white}{HTML}{C5C1B4}
\definecolor{ansi-white-intense}{HTML}{A1A6B2}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Additional commands for more recent versions of Pandoc
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}}
\newcommand{\ImportTok}[1]{{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}}
\newcommand{\BuiltInTok}[1]{{#1}}
\newcommand{\ExtensionTok}[1]{{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{DIP-HW6}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=urlcolor,
linkcolor=linkcolor,
citecolor=citecolor,
}
% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\begin{document}
\maketitle
\hypertarget{digital-image-processing---hw6---98722278---mohammad-doosti-lakhani}{%
\section{Digital Image Processing - HW6 - 98722278 - Mohammad Doosti
Lakhani}\label{digital-image-processing---hw6---98722278---mohammad-doosti-lakhani}}
In this notebook, I have solved the assignment's problems which are as
follows:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
This step consists of the following tasks:
\begin{enumerate}
\def\labelenumii{\arabic{enumii}.}
\tightlist
\item
Read
\href{http://cseweb.ucsd.edu/~mdailey/Face-Coord/ellipse-specific-fitting.pdf}{This
paper} and summarize it
\item
Convert the MATLAB code in fig. 7 to Python and extract the ellipse
parameters of the \texttt{circle.bmp} and \texttt{ellipse.bmp} images.
\item
Plot estimated ellipses using \texttt{cv2.ellipse()} method.
\end{enumerate}
\item
We want to find the parameters of an ellipse using the RANSAC
algorithm. If only 40\% of the edges in the image belong to the
ellipse and we want to obtain the correct parameters with a
probability of 0.999, how many iterations are required?
\item
Do these steps in this task:
\begin{enumerate}
\def\labelenumii{\arabic{enumii}.}
\tightlist
\item
Estimate the parameters of the ellipse using the code from task 1 on
the \texttt{ellipse\_noise.bmp} image
\item
As there are points that do not belong to the ellipse, RANSAC is a
better solution here. Implement RANSAC
\item
Draw the output on \texttt{ellipse\_noise.bmp} image
\item
Set the probability of achieving the correct parameters of the
ellipse to 0.99 and run the algorithm 10000 times. In how many of the
iterations are the estimated parameters correct?
\item
Analyze your answer
\end{enumerate}
\end{enumerate}
\hypertarget{this-step-consists-of-following-tasks}{%
\subsection{1 This step consists of the following
tasks:}\label{this-step-consists-of-following-tasks}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Read This paper and summarize it
\item
Convert the MATLAB code in fig. 7 to Python and extract the ellipse
parameters of the circle.bmp and ellipse.bmp images.
\item
Plot estimated ellipses using cv2.ellipse() method.
\end{enumerate}
\hypertarget{a-paper-summarization}{%
\subsubsection{1.A Paper Summarization}\label{a-paper-summarization}}
The proposed method is ellipse-specific, which means that no matter what
data are given, an ellipse will be output. On top of that, it is
computationally cheap and robust to noise. The main reason this approach
is both robust and fast is that it is based on a direct least-squares formulation.
First of all, they build a distance matrix with respect to the ellipse
equation, referred to here as \emph{distance\_matrix}:
Ellipse Equation: \includegraphics{wiki/eq.jpg}
Distance Matrix: \includegraphics{wiki/dm.jpg}
Now, the parameter vector \texttt{a} is constrained using a 6x6 matrix
called \texttt{C}; such constraints are often linear, i.e.
\texttt{C.dot(a)\ =\ 1}. In this paper, however, \texttt{a} is constrained in
a way that forces the fitted model to be an ellipse:
\texttt{4*a*c-b**2\ =\ 1} is the equality constraint, written as
\texttt{a.T.dot(C).dot(a)\ =\ 1}.
So \texttt{C} is:
\begin{figure}
\centering
\includegraphics{wiki/c.jpg}
\caption{constraint matrix}
\end{figure}
Based on what they have covered so far, the solution of the
quadratically constrained minimization will be:
\begin{figure}
\centering
\includegraphics{wiki/cm.jpg}
\caption{constraint minimization}
\end{figure}
Furthermore, this system can be written as in the image below, where
\texttt{lambda} is the Lagrange multiplier and \texttt{S} is
\texttt{D.T.dot(D)}:
\begin{figure}
\centering
\includegraphics{wiki/ss.jpg}
\caption{simplified eigen system}
\end{figure}
This system can be solved using generalized eigenvectors of
\texttt{S.dot(a)\ =\ lambda*C.dot(a)}. In the end, if
\texttt{(lambda,\ u)} solves \texttt{S.dot(a)\ =\ lambda*C.dot(a)}, we
have:
\begin{figure}
\centering
\includegraphics{wiki/mu.jpg}
\caption{mu}
\end{figure}
Then \texttt{a} can be obtained by \texttt{a\ =\ mu*u}.
\hypertarget{b-direct-least-square-of-fitting-ellipse-implementation-and-ellipse-of-circle.bmp-and-ellipse.bmp}{%
\subsubsection{\texorpdfstring{1.B Direct Least Square of Fitting
Ellipse Implementation and Ellipse of \texttt{circle.bmp} and
\texttt{ellipse.bmp}}{1.B Direct Least Square of Fitting Ellipse Implementation and Ellipse of circle.bmp and ellipse.bmp}}\label{b-direct-least-square-of-fitting-ellipse-implementation-and-ellipse-of-circle.bmp-and-ellipse.bmp}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{cv2}
\PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}184}]:} \PY{k}{def} \PY{n+nf}{direct\PYZus{}least\PYZus{}square}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{)}\PY{p}{:}
\PY{n}{D} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mat}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{[}\PY{n}{x}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{x}\PY{o}{*}\PY{n}{y}\PY{p}{,} \PY{n}{y}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{ones}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{x}\PY{p}{)}\PY{p}{)}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{S} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{D}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{D}\PY{p}{)}
\PY{n}{C} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)}\PY{p}{)}
\PY{n}{C}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{2}
\PY{n}{C}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}
\PY{n}{C}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{2}
\PY{n}{Z} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{inv}\PY{p}{(}\PY{n}{S}\PY{p}{)}\PY{p}{,} \PY{n}{C}\PY{p}{)}
\PY{n}{eigen\PYZus{}value}\PY{p}{,} \PY{n}{eigen\PYZus{}vec} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{eig}\PY{p}{(}\PY{n}{Z}\PY{p}{)}
\PY{n}{eigen\PYZus{}value} \PY{o}{=} \PY{n}{eigen\PYZus{}value}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{pos\PYZus{}r}\PY{p}{,} \PY{n}{pos\PYZus{}c} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{where}\PY{p}{(}\PY{n}{eigen\PYZus{}value}\PY{o}{\PYZgt{}}\PY{l+m+mi}{0} \PY{o}{\PYZam{}} \PY{o}{\PYZti{}}\PY{n}{np}\PY{o}{.}\PY{n}{isinf}\PY{p}{(}\PY{n}{eigen\PYZus{}value}\PY{p}{)}\PY{p}{)}
\PY{n}{a} \PY{o}{=} \PY{n}{eigen\PYZus{}vec}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{pos\PYZus{}c}\PY{p}{]}
\PY{k}{return} \PY{n}{a}
\PY{k}{def} \PY{n+nf}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a}\PY{p}{)}\PY{p}{:}
\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{b}\PY{p}{,}\PY{n}{c}\PY{p}{,}\PY{n}{d}\PY{p}{,}\PY{n}{f}\PY{p}{,}\PY{n}{g}\PY{p}{,}\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{num} \PY{o}{=} \PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{\PYZhy{}}\PY{n}{a}\PY{o}{*}\PY{n}{c}
\PY{n}{x0}\PY{o}{=}\PY{p}{(}\PY{n}{c}\PY{o}{*}\PY{n}{d}\PY{o}{\PYZhy{}}\PY{n}{b}\PY{o}{*}\PY{n}{f}\PY{p}{)}\PY{o}{/}\PY{n}{num}
\PY{n}{y0}\PY{o}{=}\PY{p}{(}\PY{n}{a}\PY{o}{*}\PY{n}{f}\PY{o}{\PYZhy{}}\PY{n}{b}\PY{o}{*}\PY{n}{d}\PY{p}{)}\PY{o}{/}\PY{n}{num}
\PY{k}{return} \PY{p}{(}\PY{n+nb}{int}\PY{p}{(}\PY{n}{y0}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{n+nb}{int}\PY{p}{(}\PY{n}{x0}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a}\PY{p}{)}\PY{p}{:}
\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{b}\PY{p}{,}\PY{n}{c}\PY{p}{,}\PY{n}{d}\PY{p}{,}\PY{n}{f}\PY{p}{,}\PY{n}{g}\PY{p}{,}\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{k}{return} \PY{n+nb}{int}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{rad2deg}\PY{p}{(}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{arctan}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{b}\PY{o}{/}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a}\PY{p}{)}\PY{p}{:}
\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{b}\PY{p}{,}\PY{n}{c}\PY{p}{,}\PY{n}{d}\PY{p}{,}\PY{n}{f}\PY{p}{,}\PY{n}{g}\PY{p}{,}\PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{,} \PY{n}{a}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{up} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{p}{(}\PY{n}{a}\PY{o}{*}\PY{n}{f}\PY{o}{*}\PY{n}{f}\PY{o}{+}\PY{n}{c}\PY{o}{*}\PY{n}{d}\PY{o}{*}\PY{n}{d}\PY{o}{+}\PY{n}{g}\PY{o}{*}\PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{b}\PY{o}{*}\PY{n}{d}\PY{o}{*}\PY{n}{f}\PY{o}{\PYZhy{}}\PY{n}{a}\PY{o}{*}\PY{n}{c}\PY{o}{*}\PY{n}{g}\PY{p}{)}
\PY{n}{down1}\PY{o}{=}\PY{p}{(}\PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{\PYZhy{}}\PY{n}{a}\PY{o}{*}\PY{n}{c}\PY{p}{)}\PY{o}{*}\PY{p}{(} \PY{p}{(}\PY{n}{c}\PY{o}{\PYZhy{}}\PY{n}{a}\PY{p}{)}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{+}\PY{l+m+mi}{4}\PY{o}{*}\PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{/}\PY{p}{(}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{p}{(}\PY{n}{c}\PY{o}{+}\PY{n}{a}\PY{p}{)}\PY{p}{)}
\PY{n}{down2}\PY{o}{=}\PY{p}{(}\PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{\PYZhy{}}\PY{n}{a}\PY{o}{*}\PY{n}{c}\PY{p}{)}\PY{o}{*}\PY{p}{(} \PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{+}\PY{l+m+mi}{4}\PY{o}{*}\PY{n}{b}\PY{o}{*}\PY{n}{b}\PY{o}{/}\PY{p}{(}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{a}\PY{o}{\PYZhy{}}\PY{n}{c}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{p}{(}\PY{n}{c}\PY{o}{+}\PY{n}{a}\PY{p}{)}\PY{p}{)}
\PY{n}{res1}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{up}\PY{o}{/}\PY{n}{down1}\PY{p}{)}
\PY{n}{res2}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{up}\PY{o}{/}\PY{n}{down2}\PY{p}{)}
\PY{k}{return} \PY{p}{(}\PY{n+nb}{int}\PY{p}{(}\PY{n}{res1}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{,} \PY{n+nb}{int}\PY{p}{(}\PY{n}{res2}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}185}]:} \PY{c+c1}{\PYZsh{} read images}
\PY{n}{circle} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{circle.bmp}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{ellipse} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ellipse.bmp}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{x\PYZus{}circle}\PY{p}{,} \PY{n}{y\PYZus{}circle} \PY{o}{=} \PY{n}{circle}\PY{o}{.}\PY{n}{nonzero}\PY{p}{(}\PY{p}{)}
\PY{n}{x\PYZus{}ellipse}\PY{p}{,} \PY{n}{y\PYZus{}ellipse} \PY{o}{=} \PY{n}{ellipse}\PY{o}{.}\PY{n}{nonzero}\PY{p}{(}\PY{p}{)}
\PY{n}{a\PYZus{}circle} \PY{o}{=} \PY{n}{direct\PYZus{}least\PYZus{}square}\PY{p}{(}\PY{n}{x\PYZus{}circle}\PY{p}{,} \PY{n}{y\PYZus{}circle}\PY{p}{)}
\PY{n}{a\PYZus{}ellipse} \PY{o}{=} \PY{n}{direct\PYZus{}least\PYZus{}square}\PY{p}{(}\PY{n}{x\PYZus{}ellipse}\PY{p}{,} \PY{n}{y\PYZus{}ellipse}\PY{p}{)}
\end{Verbatim}
\hypertarget{c-plot-estimates}{%
\subsubsection{1.C Plot Estimates}\label{c-plot-estimates}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}186}]:} \PY{n}{center} \PY{o}{=} \PY{n}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a\PYZus{}circle}\PY{p}{)}
\PY{n}{axis} \PY{o}{=} \PY{n}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a\PYZus{}circle}\PY{p}{)}
\PY{n}{angle} \PY{o}{=} \PY{n}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a\PYZus{}circle}\PY{p}{)}
\PY{n}{start\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{end\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{360}
\PY{n}{color} \PY{o}{=} \PY{l+m+mi}{150}
\PY{n}{thickness} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{plt}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{n}{cv2}\PY{o}{.}\PY{n}{ellipse}\PY{p}{(}\PY{n}{circle}\PY{p}{,} \PY{n}{center}\PY{p}{,} \PY{n}{axis}\PY{p}{,} \PY{n}{angle}\PY{p}{,} \PY{n}{start\PYZus{}angle}\PY{p}{,} \PY{n}{end\PYZus{}angle}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{thickness}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}186}]:} <matplotlib.image.AxesImage at 0x1749279ac50>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_1.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}187}]:} \PY{n}{center} \PY{o}{=} \PY{n}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a\PYZus{}ellipse}\PY{p}{)}
\PY{n}{axis} \PY{o}{=} \PY{n}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a\PYZus{}ellipse}\PY{p}{)}
\PY{n}{angle} \PY{o}{=} \PY{n}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a\PYZus{}ellipse}\PY{p}{)}
\PY{n}{start\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{end\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{360}
\PY{n}{color} \PY{o}{=} \PY{l+m+mi}{150}
\PY{n}{thickness} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{plt}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{n}{cv2}\PY{o}{.}\PY{n}{ellipse}\PY{p}{(}\PY{n}{ellipse}\PY{p}{,} \PY{n}{center}\PY{p}{,} \PY{n}{axis}\PY{p}{,} \PY{n}{angle}\PY{p}{,} \PY{n}{start\PYZus{}angle}\PY{p}{,} \PY{n}{end\PYZus{}angle}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{thickness}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}187}]:} <matplotlib.image.AxesImage at 0x174927ee4a8>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_9_1.png}
\end{center}
{ \hspace*{\fill} \\}
\hypertarget{how-many-iterations-for-40-inlier-data-with-0.999-correct-estimation-probability}{%
\subsection{2 How many Iterations for 40\% Inlier Data With 0.999 Correct Estimation Probability?}\label{how-many-iterations-for-40-inlier-data-with-0.999-correct-estimation-probability}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}192}]:} \PY{n}{w} \PY{o}{=} \PY{l+m+mf}{0.4}
\PY{n}{p} \PY{o}{=} \PY{l+m+mf}{0.999}
\PY{c+c1}{\PYZsh{} we need at least 6 points to estimate the 6 parameters of the ellipse}
\PY{n}{k} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{p}\PY{p}{)} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{\PYZhy{}}\PY{n}{np}\PY{o}{.}\PY{n}{power}\PY{p}{(}\PY{n}{w}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Number of needed iterations: }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s1}{\PYZsq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n+nb}{int}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{ceil}\PY{p}{(}\PY{n}{k}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Number of needed iterations: 1684
\end{Verbatim}
\hypertarget{do-these-steps-in-this-task}{%
\subsection{3 Do these steps in this
task:}\label{do-these-steps-in-this-task}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Estimate the parameters of the ellipse using the code from task 1 on the
  ellipse\_noise.bmp image
\item
  As there are points that do not belong to the ellipse, RANSAC is a better
  solution here. Implement RANSAC
\item
  Draw the output on the ellipse\_noise.bmp image
\item
  Set the probability of achieving the correct parameters of the ellipse to
  0.99 and run the algorithm 10000 times. In how many of the iterations are
  the estimated parameters correct?
\item
  Analyze your answer
\end{enumerate}
\hypertarget{a-estimate-ellipse-on-ellipse_noise.bmp-via-step-1-code}{%
\subsubsection{\texorpdfstring{3.A Estimate Ellipse on
\texttt{ellipse\_noise.bmp} Via Step 1
Code}{3.A Estimate Ellipse on ellipse\_noise.bmp Via Step 1 Code}}\label{a-estimate-ellipse-on-ellipse_noise.bmp-via-step-1-code}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}282}]:} \PY{c+c1}{\PYZsh{} read images}
\PY{n}{ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ellipse\PYZus{}noise.bmp}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{x\PYZus{}ellipse\PYZus{}noise}\PY{p}{,} \PY{n}{y\PYZus{}ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{ellipse\PYZus{}noise}\PY{o}{.}\PY{n}{nonzero}\PY{p}{(}\PY{p}{)}
\PY{n}{a\PYZus{}ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{direct\PYZus{}least\PYZus{}square}\PY{p}{(}\PY{n}{x\PYZus{}ellipse\PYZus{}noise}\PY{p}{,} \PY{n}{y\PYZus{}ellipse\PYZus{}noise}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}196}]:} \PY{n}{center} \PY{o}{=} \PY{n}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a\PYZus{}ellipse\PYZus{}noise}\PY{p}{)}
\PY{n}{axis} \PY{o}{=} \PY{n}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a\PYZus{}ellipse\PYZus{}noise}\PY{p}{)}
\PY{n}{angle} \PY{o}{=} \PY{n}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a\PYZus{}ellipse\PYZus{}noise}\PY{p}{)}
\PY{n}{start\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{end\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{360}
\PY{n}{color} \PY{o}{=} \PY{l+m+mi}{150}
\PY{n}{thickness} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{plt}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{n}{cv2}\PY{o}{.}\PY{n}{ellipse}\PY{p}{(}\PY{n}{ellipse\PYZus{}noise}\PY{p}{,} \PY{n}{center}\PY{p}{,} \PY{n}{axis}\PY{p}{,} \PY{n}{angle}\PY{p}{,} \PY{n}{start\PYZus{}angle}\PY{p}{,} \PY{n}{end\PYZus{}angle}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{thickness}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}196}]:} <matplotlib.image.AxesImage at 0x17492cb6390>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_15_1.png}
\end{center}
{ \hspace*{\fill} \\}
\hypertarget{b-implement-ransac-for-ellipse}{%
\subsubsection{3.B Implement RANSAC for
Ellipse}\label{b-implement-ransac-for-ellipse}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}386}]:} \PY{k+kn}{import} \PY{n+nn}{random}
\PY{k}{def} \PY{n+nf}{ransac}\PY{p}{(}\PY{n}{image}\PY{p}{,} \PY{n}{max\PYZus{}iter}\PY{p}{,} \PY{n}{threshold}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{:}
\PY{n}{ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{image}
\PY{n}{data} \PY{o}{=} \PY{n}{ellipse\PYZus{}noise}
\PY{n}{ics} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{best\PYZus{}ic} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{best\PYZus{}model} \PY{o}{=} \PY{k+kc}{None}
\PY{n}{xn}\PY{p}{,} \PY{n}{yn} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{nonzero}\PY{p}{(}\PY{p}{)}
\PY{n}{nzero} \PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{x1}\PY{p}{,}\PY{n}{y1}\PY{p}{)} \PY{k}{for} \PY{n}{x1}\PY{p}{,} \PY{n}{y1} \PY{o+ow}{in} \PY{n+nb}{zip}\PY{p}{(}\PY{n}{xn}\PY{p}{,} \PY{n}{yn}\PY{p}{)}\PY{p}{]}
\PY{k}{for} \PY{n}{epoch} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{max\PYZus{}iter}\PY{p}{)}\PY{p}{:}
\PY{n}{ic} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{sample} \PY{o}{=} \PY{n}{random}\PY{o}{.}\PY{n}{sample}\PY{p}{(}\PY{n}{nzero}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)}
\PY{n}{a} \PY{o}{=} \PY{n}{direct\PYZus{}least\PYZus{}square}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{s}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{k}{for} \PY{n}{s} \PY{o+ow}{in} \PY{n}{sample}\PY{p}{]}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{s}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{s} \PY{o+ow}{in} \PY{n}{sample}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{x}\PY{p}{,} \PY{n}{y} \PY{o+ow}{in} \PY{n}{nzero}\PY{p}{:}
\PY{n}{eq} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mat}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{[}\PY{n}{x}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{x}\PY{o}{*}\PY{n}{y}\PY{p}{,} \PY{n}{y}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{k}{if} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{eq}\PY{p}{,} \PY{n}{a}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{threshold}\PY{p}{:}
\PY{n}{ic} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{ics}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{ic}\PY{p}{)}
\PY{k}{if} \PY{n}{ic} \PY{o}{\PYZgt{}} \PY{n}{best\PYZus{}ic}\PY{p}{:}
\PY{n}{best\PYZus{}ic} \PY{o}{=} \PY{n}{ic}
\PY{n}{best\PYZus{}model} \PY{o}{=} \PY{n}{a}
\PY{k}{return} \PY{n}{best\PYZus{}model}\PY{p}{,} \PY{n}{ics}
\PY{n}{ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ellipse\PYZus{}noise.bmp}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{a}\PY{p}{,} \PY{n}{\PYZus{}} \PY{o}{=} \PY{n}{ransac}\PY{p}{(}\PY{n}{ellipse\PYZus{}noise}\PY{p}{,} \PY{l+m+mi}{500}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
\hypertarget{c-draw-the-estimated-ellipse-via-ransac}{%
\subsubsection{3.C Draw the Estimated Ellipse Via
Ransac}\label{c-draw-the-estimated-ellipse-via-ransac}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}387}]:} \PY{n}{center} \PY{o}{=} \PY{n}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{axis} \PY{o}{=} \PY{n}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{angle} \PY{o}{=} \PY{n}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{start\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{end\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{360}
\PY{n}{color} \PY{o}{=} \PY{l+m+mi}{150}
\PY{n}{thickness} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{plt}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{n}{cv2}\PY{o}{.}\PY{n}{ellipse}\PY{p}{(}\PY{n}{ellipse\PYZus{}noise}\PY{p}{,} \PY{n}{center}\PY{p}{,} \PY{n}{axis}\PY{p}{,} \PY{n}{angle}\PY{p}{,} \PY{n}{start\PYZus{}angle}\PY{p}{,} \PY{n}{end\PYZus{}angle}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{thickness}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}387}]:} <matplotlib.image.AxesImage at 0x1749492c2e8>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_19_1.png}
\end{center}
{ \hspace*{\fill} \\}
\hypertarget{d-if-p0.99-with-10000-iteration-how-many-correct-estimations}{%
\subsubsection{3.D If P=0.99, With 10000 Iterations, How Many Correct Estimations?}\label{d-if-p0.99-with-10000-iteration-how-many-correct-estimations}}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}402}]:} \PY{n}{ellipse\PYZus{}noise} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ellipse\PYZus{}noise.bmp}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{n}{a}\PY{p}{,} \PY{n}{ics} \PY{o}{=} \PY{n}{ransac}\PY{p}{(}\PY{n}{ellipse\PYZus{}noise}\PY{p}{,} \PY{l+m+mi}{1000}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}403}]:} \PY{n}{center} \PY{o}{=} \PY{n}{ellipse\PYZus{}center}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{axis} \PY{o}{=} \PY{n}{ellipse\PYZus{}axis\PYZus{}length}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{angle} \PY{o}{=} \PY{n}{ellipse\PYZus{}angle\PYZus{}of\PYZus{}rotation}\PY{p}{(}\PY{n}{a}\PY{p}{)}
\PY{n}{start\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{end\PYZus{}angle} \PY{o}{=} \PY{l+m+mi}{360}
\PY{n}{color} \PY{o}{=} \PY{l+m+mi}{150}
\PY{n}{thickness} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{plt}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{n}{cv2}\PY{o}{.}\PY{n}{ellipse}\PY{p}{(}\PY{n}{ellipse\PYZus{}noise}\PY{p}{,} \PY{n}{center}\PY{p}{,} \PY{n}{axis}\PY{p}{,} \PY{n}{angle}\PY{p}{,} \PY{n}{start\PYZus{}angle}\PY{p}{,} \PY{n}{end\PYZus{}angle}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{thickness}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}403}]:} <matplotlib.image.AxesImage at 0x17494a7e320>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_22_1.png}
\end{center}
{ \hspace*{\fill} \\}
I do not know why my \texttt{direct\_least\_square} function sometimes
generates \texttt{0} parameters and sometimes \texttt{12}, so the code
is unstable for a high number of iterations. I therefore could not finish
this part because of lack of time. Thank you ;-)
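As a possible remedy for the instability mentioned above, here is a self-contained Python
sketch (not the notebook's code) of a RANSAC loop built around a plain SVD-based algebraic
conic fit. The helper \texttt{fit\_conic} is only a stand-in for \texttt{direct\_least\_square}:
it cannot return complex parameters but, unlike the direct method of task 1, it does not
enforce the ellipse-specific constraint. Inliers are counted over all edge points and the
best model found is returned; the threshold value is an illustrative assumption.

\begin{verbatim}
import numpy as np

def fit_conic(x, y):
    """Algebraic conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0, ||coef|| = 1."""
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)       # smallest singular vector minimises ||D @ coef||
    return vt[-1]

def ransac_conic(points, max_iter=1000, threshold=5.0, rng=None):
    """RANSAC over edge points given as an (N, 2) float array of (x, y) pairs."""
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = points[:, 0], points[:, 1]
    design = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    best_model, best_inliers = None, -1
    for _ in range(max_iter):
        idx = rng.choice(len(points), size=6, replace=False)
        coef = fit_conic(xs[idx], ys[idx])
        if not np.all(np.isfinite(coef)):
            continue                  # guard against non-finite input
        inliers = int(np.count_nonzero(np.abs(design @ coef) <= threshold))
        if inliers > best_inliers:    # keep the model with the most inliers
            best_inliers, best_model = inliers, coef
    return best_model, best_inliers

# usage sketch:
# pts = np.column_stack(cv2.imread('ellipse_noise.bmp', 0).nonzero()).astype(float)
# coef, n_in = ransac_conic(pts)
\end{verbatim}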
% Add a bibliography block to the postdoc
\end{document}
| {
"alphanum_fraction": 0.5861544224,
"avg_line_length": 60.6732954545,
"ext": "tex",
"hexsha": "dbb04c1655572327172bdb8ceb58e8fe8609d3bd",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-01-24T10:33:06.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-01-24T10:33:06.000Z",
"max_forks_repo_head_hexsha": "c2347c1de61e8ebc29e66ce1427c3d3c3762f7e3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Banhkun/Digital-Image-Processing-IUST",
"max_forks_repo_path": "HW06/notebook.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "c2347c1de61e8ebc29e66ce1427c3d3c3762f7e3",
"max_issues_repo_issues_event_max_datetime": "2020-12-22T09:01:04.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-09T19:01:47.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Banhkun/Digital-Image-Processing-IUST",
"max_issues_repo_path": "HW06/notebook.tex",
"max_line_length": 536,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "c2347c1de61e8ebc29e66ce1427c3d3c3762f7e3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "iust-projects/Digital-Image-Processing-IUST",
"max_stars_repo_path": "HW06/notebook.tex",
"max_stars_repo_stars_event_max_datetime": "2019-12-16T06:00:11.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-05T18:27:39.000Z",
"num_tokens": 18082,
"size": 42714
} |
\chapter{OpenMP vs. MPI vs. PiP}
\section{Hello Programs}
\lstinputlisting[style=program,caption={Hello OMP},label=prg:hello-omp]{../prgs/hello-omp.c}
\begin{lstlisting}[style=example,caption={``Hello OMP'' output},label=out:hello-omp]
$ OMP_NUM_THREADS=5 ./hello-omp
Hello from OMP thread 3
Hello from OMP thread 4
Hello from OMP thread 2
Hello from OMP thread 0
Hello from OMP thread 1
\end{lstlisting}
\lstinputlisting[style=program,caption={Hello MPI},label=hello-mpi]{../prgs/hello-mpi.c}
\begin{lstlisting}[style=example,caption={``Hello MPI'' output},label=out:hello-mpi]
$ mpiexec -np 5 ./hello-mpi
Hello from MPI process 2
Hello from MPI process 3
Hello from MPI process 4
Hello from MPI process 1
Hello from MPI process 0
\end{lstlisting}
\lstinputlisting[style=program,caption={Hello PiP},label=prg:hello-pip]{../prgs/hello-pip.c}
\begin{lstlisting}[style=example,caption={``Hello PiP'' output},label=out:hello-pip]
$ piprun -n 5 ./hello-pip
Hello from PiP task 0
Hello from PiP task 1
Hello from PiP task 2
Hello from PiP task 3
Hello from PiP task 4
\end{lstlisting}
\section{Variables}
\lstinputlisting[style=program,caption={Vars0 OMP},label=prg:vars0-omp]{../prgs/vars0-omp.c}
\begin{lstlisting}[style=example,caption={``Vars0 OMP'' output},label=out:vars0-omp]
$ OMP_NUM_THREADS=5 ./vars0-omp
<1> gvar:0x601050 lvar:0x7ffd47c2bacc
<4> gvar:0x601050 lvar:0x7ffd47c2bacc
<0> gvar:0x601050 lvar:0x7ffd47c2bacc
<3> gvar:0x601050 lvar:0x7ffd47c2bacc
<2> gvar:0x601050 lvar:0x7ffd47c2bacc
\end{lstlisting}
\lstinputlisting[style=program,caption={Vars0 MPI},label=vars0-mpi]{../prgs/vars0-mpi.c}
\begin{lstlisting}[style=example,caption={``Vars0 MPI'' output},label=out:vars0-mpi]
<1> gvar:0x601050 lvar:0x7ffdeec8d088
<2> gvar:0x601050 lvar:0x7ffd2e275998
<3> gvar:0x601050 lvar:0x7ffc163f6c58
<4> gvar:0x601050 lvar:0x7ffef9d2db88
<0> gvar:0x601050 lvar:0x7ffe438034d8
\end{lstlisting}
\lstinputlisting[style=program,caption={Vars0 PiP},label=prg:vars0-pip]{../prgs/vars0-pip.c}
\begin{lstlisting}[style=example,caption={``Vars0 PiP'' output},label=out:vars0-pip]
$ piprun -n 5 ./vars0-pip
<0> gvar:0x7f6a22849050 lvar:0x7f6a21c6e4f8
<1> gvar:0x7f6a21477050 lvar:0x7f6a2089c4f8
<2> gvar:0x7f6a1bfff050 lvar:0x7f6a1b4244f8
<3> gvar:0x7f6a1ac2d050 lvar:0x7f6a1a0524f8
<4> gvar:0x7f6a1985b050 lvar:0x7f6a18c804f8
\end{lstlisting}
\lstinputlisting[style=program,caption={Vars1 OMP},label=prg:vars1-omp]{../prgs/vars1-omp.c}
\begin{lstlisting}[style=example,caption={``Vars1 OMP'' output},label=out:vars1-omp]
$ OMP_NUM_THREADS=5 ./vars1-omp
<0> gvar=3
<4> gvar=3
<2> gvar=3
<3> gvar=3
<1> gvar=3
$ OMP_NUM_THREADS=5 ./vars1-omp
<1> gvar=4
<2> gvar=4
<0> gvar=4
<3> gvar=4
<4> gvar=4
\end{lstlisting}
\lstinputlisting[style=program,caption={Vars1 MPI},label=vars1-mpi]{../prgs/vars1-mpi.c}
\begin{lstlisting}[style=example,caption={``Vars1 MPI'' output},label=out:vars1-mpi]
$ mpirun -np 5 ./vars1-mpi
<0> gvar=0
<1> gvar=1
<2> gvar=2
<3> gvar=3
<4> gvar=4
$ mpirun -np 5 ./vars1-mpi
<0> gvar=0
<1> gvar=1
<2> gvar=2
<3> gvar=3
<4> gvar=4
$ mpirun -np 5 ./vars1-mpi
<0> gvar=0
<1> gvar=1
<2> gvar=2
<3> gvar=3
<4> gvar=4
\end{lstlisting}
\lstinputlisting[style=program,caption={Vars1 PiP},label=prg:vars1-pip]{../prgs/vars1-pip.c}
\begin{lstlisting}[style=example,caption={``Vars1 PiP'' output},label=out:vars1-pip]
$ piprun -n 5 ./vars1-pip
<4> gvar=4
<2> gvar=2
<3> gvar=3
<0> gvar=0
<1> gvar=1
$ piprun -n 5 ./vars1-pip
<3> gvar=3
<2> gvar=2
<4> gvar=4
<0> gvar=0
<1> gvar=1
$ piprun -n 5 ./vars1-pip
<4> gvar=4
<3> gvar=3
<1> gvar=1
<2> gvar=2
<0> gvar=0
\end{lstlisting}
\lstinputlisting[style=program,caption={{\tt pmap.h}},label=prg:pmap]{../prgs/pmap.h}
\lstinputlisting[style=program,caption={Vars2 PiP},label=prg:vars2-pip]{../prgs/vars2-pip.c}
{\footnotesize
\begin{lstlisting}[style=example,caption={``Vars2 PiP'' output},label=out:vars2-pip]
$ piprun -n 3 ./vars2-pip
<1> 1st: gvar=1 @0x7fd396e3b098
<1> 2nd: gvar=1 gvarp=0x7fd39820e098
<2> 1st: gvar=2 @0x7fd395a68098
<2> 2nd: gvar=2 gvarp=0x7fd39820e098
<0> 1st: gvar=0 @0x7fd39820e098
<0> 2nd: gvar=1002 gvarp=0x7fd39820e098
PID: 21569
00400000-00402000 r-xp /PIP/bin/piprun
00601000-00602000 r--p /PIP/bin/piprun
00602000-00603000 rw-p /PIP/bin/piprun
0159e000-015bf000 rw-p [heap]
7fd388000000-7fd388029000 rw-p
7fd388029000-7fd38c000000 ---p
7fd38c000000-7fd38c029000 rw-p
7fd38c029000-7fd390000000 ---p
7fd390000000-7fd390029000 rw-p
7fd390029000-7fd394000000 ---p
7fd394496000-7fd394696000 rw-p
7fd394696000-7fd394697000 ---p
7fd394697000-7fd394e97000 rw-p
7fd394e97000-7fd394e99000 r-xp /lib64/libdl-2.17.so
7fd394e99000-7fd395098000 ---p /lib64/libdl-2.17.so
7fd395098000-7fd395099000 r--p /lib64/libdl-2.17.so
7fd395099000-7fd39509a000 rw-p /lib64/libdl-2.17.so
7fd39509a000-7fd395238000 r-xp /lib64/libc-2.17.so
7fd395238000-7fd395438000 ---p /lib64/libc-2.17.so
7fd395438000-7fd39543c000 r--p /lib64/libc-2.17.so
7fd39543c000-7fd39543e000 rw-p /lib64/libc-2.17.so
7fd39543e000-7fd395443000 rw-p
7fd395443000-7fd395458000 r-xp /lib64/libpthread-2.17.so
7fd395458000-7fd395657000 ---p /lib64/libpthread-2.17.so
7fd395657000-7fd395658000 r--p /lib64/libpthread-2.17.so
7fd395658000-7fd395659000 rw-p /lib64/libpthread-2.17.so
7fd395659000-7fd39565d000 rw-p
7fd39565d000-7fd395665000 r-xp /piplib/libpip.so
7fd395665000-7fd395864000 ---p /piplib/libpip.so
7fd395864000-7fd395865000 r--p /piplib/libpip.so
7fd395865000-7fd395866000 rw-p /piplib/libpip.so
7fd395866000-7fd395868000 r-xp /home/ahori/vars2-pip
7fd395868000-7fd395a67000 ---p /home/ahori/vars2-pip
7fd395a67000-7fd395a68000 r--p /home/ahori/vars2-pip
7fd395a68000-7fd395a69000 rw-p /home/ahori/vars2-pip
#<2> gvar @0x7fd395a68098
7fd395a69000-7fd395a6a000 ---p
7fd395a6a000-7fd39626a000 rw-p
7fd39626a000-7fd39626c000 r-xp /lib64/libdl-2.17.so
7fd39626c000-7fd39646b000 ---p /lib64/libdl-2.17.so
7fd39646b000-7fd39646c000 r--p /lib64/libdl-2.17.so
7fd39646c000-7fd39646d000 rw-p /lib64/libdl-2.17.so
7fd39646d000-7fd39660b000 r-xp /lib64/libc-2.17.so
7fd39660b000-7fd39680b000 ---p /lib64/libc-2.17.so
7fd39680b000-7fd39680f000 r--p /lib64/libc-2.17.so
7fd39680f000-7fd396811000 rw-p /lib64/libc-2.17.so
7fd396811000-7fd396a30000 rw-p
7fd396a30000-7fd396a38000 r-xp /piplib/libpip.so
7fd396a38000-7fd396c37000 ---p /piplib/libpip.so
7fd396c37000-7fd396c38000 r--p /piplib/libpip.so
7fd396c38000-7fd396c39000 rw-p /piplib/libpip.so
7fd396c39000-7fd396c3b000 r-xp /home/ahori/vars2-pip
7fd396c3b000-7fd396e3a000 ---p /home/ahori/vars2-pip
7fd396e3a000-7fd396e3b000 r--p /home/ahori/vars2-pip
7fd396e3b000-7fd396e3c000 rw-p /home/ahori/vars2-pip
#<1> gvar @0x7fd396e3b098
7fd396e3c000-7fd396e3d000 ---p
7fd396e3d000-7fd39763d000 rw-p [stack:21569]
7fd39763d000-7fd39763f000 r-xp /lib64/libdl-2.17.so
7fd39763f000-7fd39783e000 ---p /lib64/libdl-2.17.so
7fd39783e000-7fd39783f000 r--p /lib64/libdl-2.17.so
7fd39783f000-7fd397840000 rw-p /lib64/libdl-2.17.so
7fd397840000-7fd3979de000 r-xp /lib64/libc-2.17.so
7fd3979de000-7fd397bde000 ---p /lib64/libc-2.17.so
7fd397bde000-7fd397be2000 r--p /lib64/libc-2.17.so
7fd397be2000-7fd397be4000 rw-p /lib64/libc-2.17.so
7fd397be4000-7fd397e03000 rw-p
7fd397e03000-7fd397e0b000 r-xp /piplib/libpip.so
7fd397e0b000-7fd39800a000 ---p /piplib/libpip.so
7fd39800a000-7fd39800b000 r--p /piplib/libpip.so
7fd39800b000-7fd39800c000 rw-p /piplib/libpip.so
7fd39800c000-7fd39800e000 r-xp /home/ahori/vars2-pip
7fd39800e000-7fd39820d000 ---p /home/ahori/vars2-pip
7fd39820d000-7fd39820e000 r--p /home/ahori/vars2-pip
7fd39820e000-7fd39820f000 rw-p /home/ahori/vars2-pip
#<0> gvar @0x7fd39820e098
7fd39820f000-7fd3983ad000 r-xp /lib64/libc-2.17.so
7fd3983ad000-7fd3985ad000 ---p /lib64/libc-2.17.so
7fd3985ad000-7fd3985b1000 r--p /lib64/libc-2.17.so
7fd3985b1000-7fd3985b3000 rw-p /lib64/libc-2.17.so
7fd3985b3000-7fd3987d2000 rw-p
7fd3987d2000-7fd3987d4000 r-xp /lib64/libdl-2.17.so
7fd3987d4000-7fd3989d3000 ---p /lib64/libdl-2.17.so
7fd3989d3000-7fd3989d4000 r--p /lib64/libdl-2.17.so
7fd3989d4000-7fd3989d5000 rw-p /lib64/libdl-2.17.so
7fd3989d5000-7fd3989dd000 r-xp /piplib/libpip.so
7fd3989dd000-7fd398bdc000 ---p /piplib/libpip.so
7fd398bdc000-7fd398bdd000 r--p /piplib/libpip.so
7fd398bdd000-7fd398bde000 rw-p /piplib/libpip.so
7fd398de0000-7fd398dff000 r-xp /lib64/ld-2.17.so
7fd398ec5000-7fd398fd4000 rw-p
7fd398ff8000-7fd398ffe000 rw-p
7fd398ffe000-7fd398fff000 r--p /lib64/ld-2.17.so
7fd398fff000-7fd399010000 rw-p /lib64/ld-2.17.so
7ffd5a080000-7ffd5a0a3000 rw-p
7ffd5a0e4000-7ffd5a0e6000 r-xp [vdso]
ffffffffff600000-ffffffffff601000 r-xp [vsyscall]
\end{lstlisting}}
| {
"alphanum_fraction": 0.7712821707,
"avg_line_length": 35.7791666667,
"ext": "tex",
"hexsha": "a25e6a4dfe298e69220df98b0a1006a9a13d5827",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-12-25T17:12:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-08-28T05:07:38.000Z",
"max_forks_repo_head_hexsha": "9f6048ceb8ef294d6c07a5f1f95fff844bf6f2c9",
"max_forks_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_forks_repo_name": "RIKEN-SysSoft/PiP",
"max_forks_repo_path": "doc/tutorial/tex/hello-pip.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "9f6048ceb8ef294d6c07a5f1f95fff844bf6f2c9",
"max_issues_repo_issues_event_max_datetime": "2021-03-25T01:25:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-25T01:25:11.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_issues_repo_name": "RIKEN-SysSoft/PiP",
"max_issues_repo_path": "doc/tutorial/tex/hello-pip.tex",
"max_line_length": 92,
"max_stars_count": 23,
"max_stars_repo_head_hexsha": "9f6048ceb8ef294d6c07a5f1f95fff844bf6f2c9",
"max_stars_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_stars_repo_name": "RIKEN-SysSoft/PiP",
"max_stars_repo_path": "doc/tutorial/tex/hello-pip.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-25T17:12:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-25T06:04:00.000Z",
"num_tokens": 4080,
"size": 8587
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% GEANT manual in LaTeX form %
% %
% Michel Goossens (for translation into LaTeX) %
% Version 1.00 %
% Last Mod. Jan 24 1991 1300 MG + IB %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Origin{F.Carminati, R.Jones}
\Documentation{F. Carminati}
\Submitted{03.02.93}\Revised{16.12.93}
\Version{Geant 3.16}\Routid{PHYS260}
\Makehead{\v{C}erenkov photons}
\v{C}erenkov photons are produced when a charged particle traverses
a dielectric material.
\section{Physics processes for optical photons}
A photon is called optical when its wavelength is much greater than the
typical atomic spacing, for instance when $\lambda \geq 10nm $
which corresponds to an energy $E \leq 100eV$.
Production of an optical photon in a HEP detector is primarily due to:
\begin{enumerate}
\item \v{C}erenkov effect;
\item Scintillation;
\item Fluorescence.
\end{enumerate}
Fluorescence is taken into account in {\tt GEANT} in the context of the
photoelectric effect ({\tt [PHYS230], [PHYS231]}), but only above the energy cut
{\tt CUTGAM}. Scintillation is not yet simulated by {\tt GEANT}.
Optical photons undergo three kinds of interactions:
\begin{enumerate}
\item Elastic (Rayleigh) scattering;
\item Absorption;
\item Medium boundary interactions.
\end{enumerate}
\subsection{Elastic scattering}
For optical photons elastic scattering is usually unimportant. For
$\lambda=.200\mu m$ we have $\sigma_{Rayleigh} \approx .2b$ for $N_{2}$
or $O_{2}$ which gives a mean free path of $\approx 1.7$ km in air
and $\approx 1$ m in quartz.
An important exception to this is found in aerogel, which is used as a
\v{C}erenkov radiator for some special applications. Because of the
spectral properties of this material, Rayleigh scattering is extremely
strong and this limits its usefulness as a RICH radiator. At present,
elastic scattering of optical photons is not simulated in {\tt GEANT}.
\subsection{Absorption}
Absorption is important for optical photons because it determines the
lower $\lambda$ limit in the {\it window} of transparency
of the radiator. Absorption competes with
photo-ionisation in producing the signal in the detector, so it must be
treated properly in the tracking of optical photons.
\subsection{Medium boundary effects}
When a photon arrives at the boundary of a dielectric medium,
its behaviour depends on the nature
of the two materials which join at that boundary:
\begin{itemize}
\item Case dielectric $\rightarrow$ dielectric. \\
The photon can be transmitted (refracted ray) or reflected (reflected ray).
In the case where
the photon can only be reflected, total internal reflection takes place.
\item Case dielectric $\rightarrow$ metal. \\
The photon can be absorbed by the metal or reflected back into the
dielectric. If the photon is absorbed it can be detected according to the
photoelectron efficiency of the metal.
\item Case dielectric $\rightarrow$ black material. \\
A {\it black} material is a tracking medium for which the user has not
defined any optical property. In this case the photon is immediately
absorbed undetected.
\end{itemize}
\section{Photon polarisation}
The photon polarisation is defined as a two component vector normal to the
direction of the photon:
\[
\left ( \begin{array}{c}
a_{1} e^{i \phi_{1}} \\
a_{2} e^{i \phi_{2}}
\end{array} \right ) =
e^{i \phi_{o}} \left (
\begin{array}{l}
a_{1} e^{i \phi_{c}} \\
a_{2} e^{-i \phi_{c}}
\end{array} \right )
\]
where
$\phi_{c} = (\phi_{1}-\phi_{2})/2$ is called circularity and
$\phi_{o} = (\phi_{1}+\phi_{2})/2$ is called overall phase. Circularity
gives the left- or right-polarisation characteristic of the photon. RICH
materials usually do not distinguish between the two polarisations and
photons produced by the \v{C}erenkov effect are linearly polarised, that is
$\phi_{c}=0$. The circularity of the photon is ignored by {\tt GEANT}.
The overall phase is important in determining interference effects between
coherent waves. These are important only in layers of thickness comparable
with the wavelength, such as interference filters on mirrors. The effects
of such coatings can be accounted for by the empirical reflectivity factor
for the surface, and do not require a microscopic simulation.
{\tt GEANT} does not keep track of the overall phase.
Vector polarisation is described by the polarisation angle
$\tan \psi = a_{2}/a_{1}$.
Reflection/transmission probabilities are sensitive to the state of linear
polarisation, so this has to be taken into account. One parameter is
sufficient to describe vector polarisation, but to avoid too many
trigonometrical transformations, a unit vector perpendicular to the direction
of the photon is used in {\tt GEANT}.
The polarisation vectors are stored in a special track structure which
is lifted at the link {\tt LQ(JSTACK-1)} when the first \v{C}erenkov
photon is stored in the stack.
\section{Method}
\subsection{Generation of the photons}
For the formulas contained in this chapter, please see \cite{bib-JACK}.
Let $n$ be the refractive index of the dielectric material
acting as a radiator
($n=c/c'$ where $c'$ is the group velocity of light in
the material: it follows that $1 \leq n$). In a dispersive material $n$ is
an increasing function of the photon momentum $p_{\gamma},
dn/dp \geq 0$. A particle travelling
with speed $\beta = v/c$ will emit photons at an angle $\theta$
with respect to its direction, where $\theta$ is given by the
relation:
\[
\cos \theta = \frac{1}{\beta n}
\]
from which follows the limitation for the momentum of the
emitted photons:
\[
n(p_{\gamma}^{min}) = \frac{1}{\beta}
\]
Additionally, the photons must be within the window of transparency of
the radiator. All the photons will be contained in a cone of opening
$\cos \theta_{max} = 1/(\beta n(p_{\gamma}^{max}))$.
The average number of photons produced is given by the relations
(Gaussian units):
\[
dN =
\frac{2 \pi z^{2}e^{2}}{\hbar c} \sin^{2} \theta \frac{d \nu}{c} dx =
\frac{2 \pi z^{2}e^{2}}{\hbar c} \left ( 1- \cos^{2} \theta
\right ) \frac{d \nu}{c} dx =
\frac{2 \pi z^{2}e^{2}}{\hbar c} \left ( 1- \frac{1}{n^{2} \beta^{2}}
\right ) \frac{d \nu}{c} dx =
\]
\[
= \frac{z^{2}e^{2}}{\hbar^{2} c^{2}} \left ( 1- \frac{1}{n^{2} \beta^{2}}
\right ) dp_{\gamma} \: dx \approx
370 z^{2} \frac{\rm photons}{\rm cm \; eV}
\left ( 1- \frac{1}{n^{2} \beta^{2}} \right ) dp_{\gamma} \: dx
\]
and
\[
\frac{dN}{dx} \approx 370 z^{2}
\int^{p_{\gamma}^{max}}_{p_{\gamma}^{min}}{dp_{\gamma}
\left ( 1- \frac{1}{n^{2} \beta^{2}} \right ) }
= 370 z^{2} \left ( p_{\gamma}^{max} -
p_{\gamma}^{min} - \frac{1}{\beta^{2}}
\int^{p_{\gamma}^{max}}_{p_{\gamma}^{min}}{dp_{\gamma}
\frac{1}{n(p_{\gamma})^{2}}} \right )
\]
The number of photons produced is calculated from a Poissonian distribution
with average value $\bar{n} = \mbox{\tt STEP} \: dN/dx$.
The momentum distribution of
the photon is then sampled from the density function:
\[
f(p_{\gamma}) = \left ( 1- \frac{1}{n^{2}(p_{\gamma}) \beta^{2}} \right )
\]
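To make the sampling procedure explicit, the following Python fragment (purely
illustrative, not part of {\tt GEANT}; the dispersion relation $n(p)$ and the momentum
window are invented values) draws the photon number from the Poissonian and the photon
momenta from $f(p_{\gamma})$ by rejection:

\begin{verbatim}
import numpy as np

def n_index(p_ev):
    """Hypothetical radiator dispersion n(p), with dn/dp >= 0 (invented values)."""
    return 1.33 + 0.01 * (p_ev - 2.0)

def sample_cherenkov(step_cm, beta, z=1, p_min=1.8, p_max=3.1, rng=None):
    """Sample momenta (eV) and emission angles of the photons produced on one step."""
    rng = np.random.default_rng() if rng is None else rng
    p_grid = np.linspace(p_min, p_max, 400)
    f = np.clip(1.0 - 1.0 / (n_index(p_grid)**2 * beta**2), 0.0, None)
    mean_n = 370.0 * z**2 * step_cm * np.trapz(f, p_grid)  # <N> = STEP * dN/dx
    n_photons = rng.poisson(mean_n)
    momenta, f_max = [], f.max()
    while f_max > 0.0 and len(momenta) < n_photons:
        p = rng.uniform(p_min, p_max)                      # rejection sampling from f(p)
        if rng.uniform(0.0, f_max) < 1.0 - 1.0 / (n_index(p)**2 * beta**2):
            momenta.append(p)
    momenta = np.array(momenta)
    theta = np.arccos(1.0 / (beta * n_index(momenta)))     # cos(theta) = 1/(beta n)
    return momenta, theta
\end{verbatim}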
\subsection{Tracking of the photons}
\v{C}erenkov photons are tracked in the routine \Rind{GTCKOV}.
These particles are subject to {\it in flight} absorption (process
{\tt LABS}, number 101) and {\it boundary action}
(process {\tt LREF}, number 102, see above). As explained
above, the status of the photon is defined by 2 vectors, the
photon momentum ($\vec{p}=\hbar \vec{k}$) and photon polarisation
($\vec{e}$). By convention the direction of the polarisation vector
is that of the electric field. Let also $\vec{u}$ be the normal to
the material boundary at the point of intersection, pointing
out of the material which the photon is leaving and toward the one
which the photon is entering.
The behaviour of a photon at the surface
boundary is determined by three quantities:
\begin{enumerate}
\item refraction or reflection angle, this represents the kinematics
of the effect;
\item amplitude of the reflected and refracted waves, this is
the dynamics of the effect;
\item probability of the photon to be refracted or reflected,
this is the quantum mechanical effect which we have to take
into account if we want to describe the photon as a particle and
not as a wave;
\end{enumerate}
As said above, we distinguish three kinds of boundary action, dielectric
$\rightarrow$ black material, dielectric $\rightarrow$ metal, dielectric
$\rightarrow$ dielectric. The first case is trivial,
in the sense that the photon is immediately absorbed and it goes
undetected.
To determine the behaviour of the photon at the boundary,
we will at first treat it as a homogeneous monochromatic plane wave:
\begin{eqnarray*}
\vec{E} & = & \vec{E}_{0} e^{i \vec{k} \cdot \vec{x} - i \omega t} \\
\vec{B} & = & \sqrt{\mu \epsilon} \frac{\vec{k} \times \vec{E}}{k}
\end{eqnarray*}
\subsubsection{Case dielectric $\rightarrow$ dielectric}
In the classical description the incoming wave splits into a reflected
wave (quantities with a double prime) and a refracted wave (quantities
with a single prime). Our problem is solved if we find the following
quantities:
\begin{eqnarray*}
\vec{E'} & = & \vec{E'}_{0} e^{i \vec{k'} \cdot \vec{x} - i \omega t} \\
\vec{E''} & = & \vec{E''}_{0} e^{i \vec{k''} \cdot \vec{x} - i \omega t}
\end{eqnarray*}
For the wave numbers the following relations hold:
\begin{eqnarray*}
|\vec{k}| & = & |\vec{k}''| = k = \frac{\omega}{c} \sqrt{\mu \epsilon} \\
|\vec{k}'| & = & k' = \frac{\omega}{c} \sqrt{\mu' \epsilon'}
\end{eqnarray*}
Where the speed of the wave in the medium is $v=c/\sqrt{\mu \epsilon}$
and the quantity $n=c/v=\sqrt{\mu \epsilon}$ is called {\it refractive
index} of the medium. The condition that the three waves, refracted, reflected
and incident have the same phase at the surface of the medium, gives us the
well known Fresnel law:
\begin{eqnarray*}
(\vec{k} \cdot \vec{x})_{surf} & = & (\vec{k}' \cdot \vec{x})_{surf} =
(\vec{k}'' \cdot \vec{x})_{surf} \\
k \sin{i} & = & k' \sin{r} = k'' \sin{r'}
\end{eqnarray*}
where $i, r, r'$ are, respectively, the angle of the incident, refracted and
reflected ray with the normal to the surface. From this formula
the well known condition emerges:
\begin{eqnarray*}
i & = & r' \\
\frac{\sin{i}}{\sin{r}} & = & \sqrt{\frac{\mu' \epsilon'}{\mu \epsilon}} =
\frac{n'}{n}
\end{eqnarray*}
The dynamic properties of the wave at the boundary are derived from Maxwell's
equations which impose the continuity of the normal components of $\vec{D}$
and $\vec{B}$ and of the tangential components of $\vec{E}$ and $\vec{H}$
at the surface boundary. The resulting ratios between the amplitudes of
the generated waves with respect to the incoming one are
expressed in the two following cases:
\begin{enumerate}
\item a plane wave with the electric
field (polarisation vector) perpendicular to the plane defined by the
photon direction and the normal to the boundary:
\begin{eqnarray*}
\frac{E_{0}'}{E_{0}} & = & \frac{2 n \cos{i}}{n \cos{i} + \frac{\mu}{\mu'}
n' \cos{r}} =
\frac{2 n \cos{i}}{n \cos{i} + n' \cos{r}} \\
\frac{E_{0}''}{E_{0}} & = & \frac{n \cos{i} - \frac{\mu}{\mu'} n' \cos{r}}
{n \cos{i} + \frac{\mu}{\mu'} n' \cos{r}} =
\frac{n \cos{i} - n' \cos{r}} {n \cos{i} + n' \cos{r}}
\end{eqnarray*}
where we suppose, as it is legitimate for visible or near-visible light, that
$\mu/\mu' \approx 1$;
\item a plane wave with the electric
field parallel to the above surface:
\begin{eqnarray*}
\frac{E_{0}'}{E_{0}} & = & \frac{2 n \cos{i}}
{\frac{\mu}{\mu'} n' \cos{i} + n \cos{r}} =
\frac{2 n \cos{i}} {n' \cos{i} + n \cos{r}} \\
\frac{E_{0}''}{E_{0}} & = & \frac{\frac{\mu}{\mu'} n' \cos{i}
- n \cos{r}}{\frac{\mu}{\mu'} n' \cos{i} + n \cos{r}} =
\frac{n' \cos{i} - n \cos{r}}{n' \cos{i} + n \cos{r}}
\end{eqnarray*}
with the same approximation as above.
\end{enumerate}
We note that in case of photon perpendicular to the surface, the following
relations hold:
\[
\begin{array}{LcL@{\hspace{4cm}}LcL}
\frac{E_{0}'}{E_{0}} & = & \frac{2n}{n'+n} &
\frac{E_{0}''}{E_{0}} & = & \frac{n'-n}{n'+n}
\end{array}
\]
where the sign convention for the {\it parallel} field has been adopted.
This means that if $n' > n$ there is a phase inversion for the reflected
wave.
Any incoming wave can be separated into one piece polarised parallel to the
plane and one polarised perpendicular, and the two components treated
accordingly.
To maintain the particle description of the photon, the
probability to have a
{\it refracted} (mechanism 107) or {\it reflected}
(mechanism 106) photon must be calculated.
The constraint is that the number of photons be conserved, and this
can be imposed via the conservation of the energy flux at the boundary,
as the number of photons is proportional to the energy. The
energy current is given by the expression:
\begin{eqnarray*}
\vec{S} & = & \frac{1}{2} \frac{c}{4 \pi} \sqrt{\mu \epsilon} \vec{E} \times
\vec{H}^{*}
= \frac{c}{8 \pi} \sqrt{\frac{\epsilon}{\mu}} E_{0}^{2} \hat{k}
\end{eqnarray*}
and the energy balance on a unit area of the boundary requires that:
\begin{eqnarray*}
\vec{S} \cdot \vec{u} & = & \vec{S}' \cdot \vec{u}
- \vec{S}'' \cdot \vec{u} \\
S \cos{i} & = & S' \cos{r} + S'' \cos{i} \\
\frac{c}{8 \pi} \frac{1}{\mu} n E_{0}^{2} \cos{i} & = &
\frac{c}{8 \pi} \frac{1}{\mu'} n' E_{0}'^{2} \cos{r} +
\frac{c}{8 \pi} \frac{1}{\mu} n E_{0}''^{2} \cos{i}
\end{eqnarray*}
If we set again $\mu/\mu' \approx 1$, then the transmission
probability for the photon will be:
\[
T = \left( \frac{E_{0}'}{E_{0}} \right ) ^{2}
\frac{ n' \cos{r}}{n \cos{i}}
\]
and the corresponding probability to be reflected will be $R=1-T$.
In case of reflection the relation between the incoming photon ($\vec{k},
\vec{e}$), the refracted one ($\vec{k}', \vec{e}\:'$)
and the reflected one ($\vec{k}'', \vec{e}\:''$) is given by
the following relations:
\begin{eqnarray*}
\vec{q} & = & \vec{k} \times \vec{u} \\
e_{\parallel} & = & \frac{\vec{e} \cdot \vec{u}}{|q|} \\
e_{\perp} & = & \frac{\vec{e} \cdot \vec{q}}{|q|} \\
e_{\parallel}' & = & e_{\parallel} \frac{2 n \cos{i}} {n' \cos{i} + n \cos{r}}\\
e_{\perp}' & = & e_{\perp} \frac{2 n \cos{i}} {n \cos{i} + n' \cos{r}} \\
e_{\parallel}'' & = & \frac{n'}{n} e_{\parallel}' - e_{\parallel} \\
e_{\perp}'' & = & e_{\perp}' - e_{\perp}
\end{eqnarray*}
After transmission or reflection of the photon, the polarisation vector
is renormalised to 1.
In the case where $\sin{r} = n \: \sin i /n' > 1$ then there cannot
be a refracted wave, and in this case we have a total internal reflection
according to the following formulas:
\begin{eqnarray*}
\vec{k}'' & = & \vec{k} - 2 (\vec{k} \cdot \vec{u}) \vec{u} \\
\vec{e}\:'' & = & -\vec{e} + 2 (\vec{e} \cdot \vec{u}) \vec{u}
\end{eqnarray*}
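The decision logic described in this section can be summarised by the following schematic
Python fragment (again purely illustrative, not the {\tt GEANT} implementation;
$\mu/\mu' \approx 1$ is assumed and the rotation of the outgoing polarisation vector is
omitted):

\begin{verbatim}
import numpy as np

def boundary_action(k, e, u, n1, n2, rng=None):
    """Reflect or refract a photon at a dielectric-dielectric boundary.

    k: unit photon direction; e: unit linear polarisation vector;
    u: unit normal pointing from medium 1 (index n1) into medium 2 (n2).
    The rotation of the outgoing polarisation vector is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    k, e, u = (np.asarray(a, dtype=float) for a in (k, e, u))
    cos_i = float(np.dot(k, u))
    sin_i = np.sqrt(max(0.0, 1.0 - cos_i**2))
    sin_r = n1 * sin_i / n2
    if sin_r > 1.0:                              # total internal reflection
        k_out = k - 2.0 * cos_i * u
        e_out = -e + 2.0 * np.dot(e, u) * u
        return k_out, e_out / np.linalg.norm(e_out), "reflected"
    cos_r = np.sqrt(1.0 - sin_r**2)
    q = np.cross(k, u)                           # normal to the plane of incidence
    q_norm = np.linalg.norm(q)
    # polarisation components perpendicular / parallel to the plane of incidence
    e_perp = np.dot(e, q / q_norm) if q_norm > 1e-12 else 1.0  # any split at normal incidence
    e_par = np.sqrt(max(0.0, 1.0 - e_perp**2))
    # Fresnel amplitude ratios E'_0/E_0 for the two components (mu/mu' ~ 1)
    t_perp = 2.0 * n1 * cos_i / (n1 * cos_i + n2 * cos_r)
    t_par = 2.0 * n1 * cos_i / (n2 * cos_i + n1 * cos_r)
    # transmission probability T = (E'_0/E_0)^2 (n' cos r)/(n cos i), R = 1 - T
    T = (e_perp**2 * t_perp**2 + e_par**2 * t_par**2) * (n2 * cos_r) / (n1 * cos_i)
    if rng.random() < T:
        k_out = (n1 / n2) * k + (cos_r - (n1 / n2) * cos_i) * u  # Snell's law, vector form
        return k_out, e, "refracted"
    return k - 2.0 * cos_i * u, e, "reflected"
\end{verbatim}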
\subsubsection{Case dielectric $\rightarrow$ metal}
In this case the photon cannot be transmitted. So the probability for the
photon to be absorbed by the metal is estimated according to the table
provided by the user. If the photon is not absorbed, it is reflected.
\subsection{Surface effects}
In the case where
the surface between two bodies is perfectly polished, then the
normal provided by the program is the normal to the surface defined by
the body boundary. This is indicated by the value {\tt POLISH}$=1$
as returned by the \Rind{GUPLSH} function. When the value returned is
$< 1$, then a random point is generated in a sphere of radius
$1-${\tt POLISH}, and the corresponding vector is added to the normal.
This new normal is accepted if the reflected wave is still inside the
original volume.
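Schematically (illustrative Python, not the {\tt GEANT} code; the test that the reflected
ray remains inside the original volume is approximated here by a half-space test against
the nominal normal):

\begin{verbatim}
import numpy as np

def reflect_on_rough_surface(k, u_nominal, polish, rng=None):
    """Reflect direction k on a surface of finish 'polish' (1 = perfectly polished).

    A random point in a sphere of radius (1 - polish) is added to the nominal
    normal; the requirement that the reflected ray stays inside the original
    volume is approximated by a half-space test against the nominal normal.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.asarray(k, dtype=float)
    u_nominal = np.asarray(u_nominal, dtype=float)
    while True:
        r = rng.uniform(-1.0, 1.0, size=3)
        if np.dot(r, r) > 1.0:                    # not inside the unit sphere, retry
            continue
        u = u_nominal + (1.0 - polish) * r
        u /= np.linalg.norm(u)
        k_reflected = k - 2.0 * np.dot(k, u) * u
        if np.dot(k_reflected, u_nominal) < 0.0:  # still heading back into the old volume
            return k_reflected
\end{verbatim}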
\section{Subroutines}
\Shubr{GSCKOV}{(ITMED, NPCKOV, PPCKOV, ABSCO, EFFIC, RINDEX)}
\begin{DLtt}{MMMMMMMMMM}
\item[ITMED] ({\tt INTEGER}) tracking medium for which the optical
properties are to be defined;
\item[NPCKOV] ({\tt INTEGER}) number of bins in the tables;
\item[PPCKOV] ({\tt REAL}) array containing {\tt NPCKOV} values
of the photon momentum in GeV;
\item[ABSCO] ({\tt REAL}) array containing {\tt NPCKOV} values
of the absorption length in centimeters in case of dielectric and of the
boundary layer absorption probabilities in case of a metal;
\item[EFFIC] ({\tt REAL}) array containing {\tt NPCKOV} values of the
detection efficiency;
\item[RINDEX] ({\tt REAL}) array containing {\tt NPCKOV} values of the
refractive index for a dielectric, if {\tt RINDEX(1) = 0} the material
is a metal;
\end{DLtt}
This routine declares a tracking medium either as a radiator or as a
metal and stores the tables provided by the user. In the case of a
metal the {\tt RINDEX} array does not need to be of length {\tt NPCKOV},
as long as it is set
to 0. The user should call this routine if he wants to use \v{C}erenkov
photons. Please note that for the moment only
{\tt BOX}es, {\tt TUBE}s, {\tt CONE}s, {\tt SPHE}res, {\tt PCON}s,
{\tt PGON}s, {\tt TRD2}s and {\tt TRAP}s can be assigned optical properties
due to the current limitations of the \Rind{GGPERP} routine described
below.
\Shubr{GLISUR}{(X0, X1, MEDI0, MEDI1, U, PDOTU, IERR)}
\begin{DLtt}{MMMMMMMMMM}
\item[X0] ({\tt REAL}) current position ({\tt X0(1)=$x$},
{\tt X0(2)=$y$}, {\tt X0(3)=$z$})
and direction ({\tt X0(4)=$p_{x}$}, {\tt X0(5)=$p_{y}$},
{\tt X0(6)=$p_{z}$})
of the photon at the boundary of a volume;
\item[X1] ({\tt REAL}) position ({\tt X1(1)=x, X1(2)=y, X1(3)=z})
beyond the boundary of the current volume,
just inside the new volume along the direction of the photon;
\item[MEDI0] ({\tt INTEGER}) index of the current tracking medium;
\item[MEDI1] ({\tt INTEGER}) index of the tracking medium into which the
photon is entering;
\item[U] ({\tt REAL}) array of three elements containing the normal to
the surface to which the photon is approaching;
\item[PDOTU] ({\tt REAL}) $-\cos \theta$ where $\theta$ is the angle between
the direction of the photon and the normal to the surface;
\item[IERR] ({\tt INTEGER}) error flag, \Rind{GGPERP} could not determine
the normal to the surface if {\tt IERR} $\neq$ {\tt 0};
\end{DLtt}
This routine simulates the surface profile between two media as seen by
an approaching particle with coordinate and direction given by {\tt X0}.
The surface is identified by the arguments {\tt MEDI0} and {\tt MEDI1}
which are the tracking medium indices of the region in which the track
is presently and the one which it approaches, respectively. The input
vector {\tt X1} contains the coordinates of a point on the other side of
the boundary from {\tt X0} and lying within medium {\tt MEDI1}. The
result is a unit vector {\tt U} normal to the surface of reflection at
{\tt X0} and pointing into the medium from which the track is approaching.
The quality of the surface finish is given by the parameter returned by
the user function \Rind{GUPLSH} (see below).
\Sfunc{GUPLSH}{VALUE = GUPLSH(MEDI0, MEDI1)}
This function must be supplied by the user. It returns a value between 0
and 1 which describes the quality of the surface finish between {\tt MEDI0}
and {\tt MEDI1}. The value 0 means maximum roughness with effective plane of
reflection distributed as $\cos \alpha$ where $\alpha$ is the angle
between the unit normal to the {\it effective} plane of reflection and
the normal to the nominal medium boundary at {\tt X0}. The value 1 means
perfect smoothness. In between the surface is modelled as a bell-shaped
distribution in $\alpha$ with limits given by:
\[
\sin \alpha = \pm (1-\mbox{\tt GUPLSH})
\]
At the interface between two media the routine is called to evaluate the
surface. The default version in {\tt GEANT} returns $1$, i.e. a perfectly
polished surface is assumed. When {\tt GUPLSH = 0} the distribution of the
normal to the surface is $\approx \cos \theta$.
\begin{DLtt}{MMMMMMMMMM}
\item[MEDI0] ({\tt INTEGER}) index of the current tracking medium;
\item[MEDI1] ({\tt INTEGER}) index of the tracking medium into which the
photon is entering;
\end{DLtt}
\Shubr{GGCKOV}{}
This routine handles the generation of \v{C}erenkov photons and is called
from \Rind{GTHADR}, \Rind{GTMUON} and \Rind{GTELEC} in radiative
materials for which
the optical characteristics have been defined via the routine \Rind{GSCKOV}.
\Shubr{GSKPHO}{(IPHO)}
\begin{DLtt}{MMMMMMMMMM}
\item[IPHO] ({\tt INTEGER}) number of the \v{C}erenkov photon to store
on the stack to be tracked. If {\tt IPHO = 0} all the generated photons
will be put on the stack to be tracked.
\end{DLtt}
This routines takes the \v{C}erenkov photon {\tt IPHO} generated during
the current step and stores it in the stack for subsequent tracking.
This routine performs for \v{C}erenkov photons the same function that
the \Rind{GSKING} routine performs for all the other particles. The
generated photons are stored in the common \FCind{/GCKIN2/} ({\tt
[BASE030]}).
\Shubr{GTCKOV}{}
This routine is called to track \v{C}erenkov photons. The user routine
\Rind{GUSTEP} is called at every step of tracking. When {\tt ISTOP} = 2
the photon has been absorbed. If {\tt DESTEP} $\neq 0$ then the photon
has been detected.
\Shubr{GGPERP}{(X, U, IERR)}
\begin{DLtt}{MMMMMMMMMM}
\item[X] ({\tt REAL}) array of dimension 3 containing the current position
of the track in the MARS;
\item[U] ({\tt REAL}) array of dimension 3 containing on exit the normal to
the surface of the current volume at the point {\tt X};
\item[IERR] ({\tt INTEGER}) error flag: if {\tt IERR} $\neq 0$ \Rind{GGPERP}
failed to find the normal to the surface of the current volume.
\end{DLtt}
This routine solves the general problem of calculating the unit vector
normal to the surface of the current volume at the point X. The result
is returned in the array {\tt U}. X is assumed to be outside the current
volume and near a boundary
(within {\tt EPSIL}). The current volume is indicated by the common
\FCind{/GCVOLU/}. U points from inside to outside in that
neighbourhood. If {\tt X} is within {\tt EPSIL} of more than one boundary
(i.e. in a corner) an arbitrary choice is made.
If {\tt X} is inside the current volume
or if the current volume is not handled by the routine,
the routine returns with
{\tt IERR=1}, otherwise {\tt IERR=0}. At the moment the routine only
handles {\tt BOX}es, {\tt TUBE}s, {\tt SPHE}res and {\tt CONE}s.
\section{Processes involving \v{C}erenkov photons}
The process of generating a \v{C}erenkov photon is called {\tt CKOV}
and corresponds to the process value 105 (variable {\tt LMEC} in
common \FCind{/GCTRAK/}).
This process is activated only in a radiator defined via the routine
\Rind{GSCKOV}.
The process of photon absorption (name {\tt LABS}, code 101) is controlled
by the {\tt LABS} {\tt FFREAD} data record. By default the process is
activated for all the materials for which optical properties have been
defined.
The action taken at the boundary is identified by the process name {\tt LREF},
code 102.
At a boundary a photon can be either reflected (name {\tt REFL}, code 106)
or refracted (name {\tt REFR}, code 107).
| {
"alphanum_fraction": 0.695936921,
"avg_line_length": 40.860915493,
"ext": "tex",
"hexsha": "eafb2a67825c00a9674875e3cefced0dec8bcfda",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "berghaus/cernlib-docs",
"max_forks_repo_path": "geant/phys260.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "berghaus/cernlib-docs",
"max_issues_repo_path": "geant/phys260.tex",
"max_line_length": 80,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "berghaus/cernlib-docs",
"max_stars_repo_path": "geant/phys260.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z",
"num_tokens": 6935,
"size": 23209
} |
\section{Thermalization via a Nonlinear Boson Diffusion Equation (NBDE)}
\begin{frame}{Deriving the Nonlinear Boson Diffusion Equation I}
\vspace{0.5em}
The following derivation follows reference \cite{Wolschin2018}. \\[0.5em]
\begin{itemize}
\item The starting point for our investigation is the \alert{Boltzmann eqn.} (cf. Pavel's talk). %write it down
\item For \alert{spatial homogeneity} of the boson distribution function $f(\mathbf{x}, \mathbf{p}, t)$ and a \alert{spherically symmetric momentum dependence} the equation for the single-particle occupation numbers $n_j \equiv n_{\mathrm{th}}(\varepsilon_j,t)$ reads:
\begin{align}
\frac{\partial n_1}{\partial t} &= \sum_{\varepsilon_2,\varepsilon_3,\varepsilon_4}\langle V^{\phantom{.}2}\rangle G(\varepsilon_1+\varepsilon_2,\varepsilon_3+\varepsilon_4)\\
&\times \left[(1+n_1)(1+n_2)n_3n_4 - (1+n_3)(1+n_4)n_1n_2\right]
\end{align}
\item The \alert{collision term} can be written in the form of a \alert{Master eqn.}:
\begin{equation}
\frac{\partial n_1}{\partial t} = (1+n_1)\sum_{\varepsilon_4}W_{4\rightarrow 1}n_4 - n_1\sum_{\varepsilon_4}W_{1\rightarrow 4}(1+n_4)
\end{equation}
with
\begin{equation}
W_{4\rightarrow 1}= W_{41}g_1 = \sum_{\varepsilon_2, \varepsilon_3} \langle V^{\phantom{.}2}\rangle G(\varepsilon_1+\varepsilon_2,\varepsilon_3+\varepsilon_4)(1+n_2)n_3
\end{equation}
\end{itemize}
\end{frame}
\begin{frame}{Deriving the Nonlinear Boson Diffusion Equation II}
\begin{itemize}
\item In continuum $\sum\rightarrow\int$ and introduce \alert{density of states} $g_j \equiv g(\varepsilon_j)$.
\item If $G$ acquires a width in a finite system:
\begin{equation}
W_{14}=W_{41}=W\left[\frac{1}{2}(\varepsilon_4+\varepsilon_1),\underbrace{\abs{\varepsilon_4-\varepsilon_1}}_{=: x}\right]
\end{equation}
\item Perform a \alert{gradient expansion} of $n_4$ and $g_4n_4$ around $x\approx 0$.
\item Introduce \alert{transport coefficients} via moments of the transition probability:
\begin{align}
D &= \frac{g_1}{2}\int\limits_0^{\infty}\dd x\ W(\varepsilon_1,x) \ x^2 \\
v &= g_1^{-1}\frac{d}{d\varepsilon_1}(g_1D)
\end{align}
\end{itemize}
\end{frame}
\begin{frame}{Deriving the Nonlinear Boson Diffusion Equation III}
\begin{itemize}
\item Nonlinear partial differential equation for $n \equiv n(\varepsilon_1,t) = n(\varepsilon,t)$:
\begin{equation}
\frac{\partial n}{\partial t} = -\frac{\partial}{\partial\varepsilon}\left[v\cdot n(1+n) + n\frac{\partial n}{\partial\varepsilon}\right] + \frac{\partial^2}{\partial\varepsilon^2}\left[Dn\right]\label{eqn:nbde1}
\end{equation}
\item Consider the limit of constant transport coefficients:
\begin{equation}
\frac{\partial n}{\partial t} = -v\frac{\partial}{\partial\varepsilon}\left[n(1+n)\right] + D\frac{\partial^2 n}{\partial\varepsilon^2}\label{eqn:nbde2}
\end{equation}
\item Thermal \alert{Bose-Einstein distribution} provides stationary solution:
\begin{equation}
n_{\mathrm{eq}}(\varepsilon) = \frac{1}{\exp(\frac{\varepsilon-\mu}{T}) - 1}
\end{equation}
\end{itemize}
\end{frame}
\begin{frame}{Some Remarks}
\begin{itemize}
\item The present model does \alert{not} resolve the 2nd-order phase transition.
\item The effects of condensation are included (cf. the following figures).
\item A treatment resolving the singularity at $\epsilon=\mu$ is presented later.
\end{itemize}
\end{frame}
\begin{frame}{Linear Relaxation-Time Approximation (RTA)}
\begin{itemize}
\item Given some initial distribution $n_{\mathrm{i}}(\varepsilon)$ we find an approximated solution for the thermalization process via the RTA:
\begin{equation}
\frac{\partial n_{\mathrm{rel}}}{\partial t} = \frac{(n_{\mathrm{eq}} - n_{\mathrm{rel}})}{\tau_{\mathrm{eq}}}
\end{equation}
with solution:
\begin{equation}
n_{\mathrm{rel}}(\varepsilon,t) = n_{\mathrm{i}}(\varepsilon)\cdot\exp\left(-\frac{t}{\tau_{\mathrm{eq}}}\right) + n_{\mathrm{eq}}(\varepsilon)\left(1-\exp\left(-\frac{t}{\tau_{\mathrm{eq}}}\right)\right)
\end{equation}
where $\tau_{\mathrm{eq}} = 4D/(9v^2)$.
\item Motivated by the study of early stages of RHICs, the initial distribution is chosen such that:
\begin{equation}
n_{\mathrm{i}}(\varepsilon) = N_{\mathrm{i}}\cdot\theta\left(1-\varepsilon/Q_{\mathrm{s}}\right)\cdot\theta(\varepsilon) \label{eqn:rta_initial}
\end{equation}
with limiting momentum $Q_{\mathrm{s}} \sim \tau_0^{-1} \approx 1\ \mathrm{GeV}$.\mycite{Mueller1999}
\end{itemize}
\end{frame}
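\begin{frame}[fragile]{A Numerical Sketch of the RTA}
A minimal Python sketch (not taken from \cite{Wolschin2018}; the chemical potential value is an assumption) evaluating $n_{\mathrm{rel}}(\varepsilon,t)$ for the $\theta$-function initial condition:
\begin{verbatim}
import numpy as np

T, tau_eq = 0.4, 0.33         # GeV and 1e-23 s (figure values)
Qs, Ni, mu = 1.0, 1.0, -0.05  # GeV; mu is an assumed value

def n_i(eps):                 # theta-function initial condition
    return Ni * ((eps >= 0) & (eps <= Qs))

def n_eq(eps):                # Bose-Einstein equilibrium
    return 1.0 / (np.exp((eps - mu) / T) - 1.0)

def n_rel(eps, t):            # relaxation-time approximation
    w = np.exp(-t / tau_eq)
    return n_i(eps) * w + n_eq(eps) * (1.0 - w)
\end{verbatim}
\end{frame}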
\begin{frame}{Results for the RTA}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/rta}
\caption{Relaxation of a finite Bose system towards the equilibrium. \cite{Wolschin2018} \\
Here $T = -D/v \simeq 0.4\ \mathrm{GeV}$, $\tau_{\mathrm{eq}} = 4D/(9v^2) = 0.33\cdot 10^{-23} \mathrm{s} \simeq 1\ \mathrm{fm/c}$ and the timesteps are $\left\{0.1, 0.25, 0.5,\infty\right\}$ (in units of $10^{-23}s$) from top to bottom.}
\end{figure}
\end{frame}
%TODO: Maybe add another slide about conservation etc.
\begin{frame}{Exact Solution of the Nonlinear Boson Diffusion Equation}
\begin{itemize}
\item To solve eqn. (\ref{eqn:nbde1}) analytically, we perform the following \alert{nonlinear transformation:}
\begin{equation}
n(\varepsilon,t) = -\frac{D}{v}\frac{\partial \ln \mathcal{Z}(\varepsilon,t)}{\partial\varepsilon}
\end{equation}
which reduces our problem to a \alert{linear diffusion eqn.} for $ \mathcal{Z}(\varepsilon,t)$:
\begin{equation}
\frac{\partial \mathcal{Z}}{\partial t} = -v\frac{\partial \mathcal{Z}}{\partial \varepsilon} + D\frac{\partial^2 \mathcal{Z}}{\partial \varepsilon^2}
\end{equation}
\item Solutions to this equation can be written as:
\begin{equation}
n(\varepsilon, t)=\frac{1}{2 v} \frac{\int_{-\infty}^{+\infty} \frac{\varepsilon-x}{t} F(x)\cdot G_{\mathrm{free}}(\varepsilon-x,t)\ \dd x}{\int_{-\infty}^{+\infty} F(x)\cdot G_{\mathrm{free}}(\varepsilon-x,t)\ \dd x}-\frac{1}{2}
\end{equation}
\end{itemize}
\end{frame}
\begin{frame}{Additional Definitions}
\begin{itemize}
\item The quantities appearing in the solution are the \alert{free Green's function}
\begin{align}
G_{\mathrm{free}}(\varepsilon-x,t) = \exp\left[-\frac{(\varepsilon-x)^2}{4Dt}\right],
\end{align}
and the implementation of the \alert{initial conditions}
\begin{equation}
F(x) = \exp\left[-\frac{1}{2D}(vx+2v\int_0^x n_{\mathrm{i}}(y) \dd y) \right].
\end{equation}
\item They define the \alert{free partition function} via:
\begin{equation}
\mathcal{Z}(\varepsilon,t) = a(t)\cdot\int_{-\infty}^{\infty} G_{\mathrm{free}}(\varepsilon-x,t)\cdot F(x)\ \dd x
\end{equation}
\end{itemize}
\end{frame}
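\begin{frame}[fragile]{A Numerical Sketch of the Exact Solution}
A minimal Python sketch (not taken from \cite{Wolschin2018}; the transport coefficients are assumed values with $T=-D/v=0.4$ GeV) evaluating $n(\varepsilon,t)$ by direct quadrature of the expressions above, with the integration restricted to $x \geq 0$:
\begin{verbatim}
import numpy as np

D, v = 0.2, -0.5      # assumed coefficients, T = -D/v = 0.4 GeV
Qs, Ni = 1.0, 1.0     # theta-function initial condition

def F(x):             # int_0^x n_i(y) dy = Ni * min(x, Qs) for x >= 0
    return np.exp(-(v*x + 2.0*v*Ni*np.clip(x, 0.0, Qs)) / (2.0*D))

def n(eps, t, x=np.linspace(0.0, 6.0, 6001)):
    G = np.exp(-(eps - x)**2 / (4.0*D*t))   # free Green's function
    num = np.trapz((eps - x)/t * F(x) * G, x)
    den = np.trapz(F(x) * G, x)
    return num / (2.0*v*den) - 0.5
\end{verbatim}
\end{frame}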
\begin{frame}{Results for the Solution of the NBDE I}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/nbde_positive_range}
\caption{Equilibration of a finite Bose system from the NBDE. \cite{Wolschin2018} \\
The integration range is restricted to $x \geq 0$. Here $T\simeq 0.4\ \mathrm{GeV}$, $\tau_{\mathrm{eq}} = 0.33\cdot 10^{-23} \mathrm{s}$ and the timesteps are $\left\{0.005, 0.05, 0.15,0.5\right\}$ (in units of $10^{-23}s$) from top to bottom.}
\end{figure}
\end{frame}
\begin{frame}{Results for the Solution of the NBDE II}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/nbde_full_range}
\caption{Equilibration of a finite Bose system from the NBDE. \cite{Wolschin2018} \\
The integration range is extended to $-\infty \leq x \leq \infty$. Here $T\simeq 0.4\ \mathrm{GeV}$, $\tau_{\mathrm{eq}} = 0.33\cdot 10^{-23} \mathrm{s}$ and the timesteps are $\left\{0.005, 0.05, 0.15,0.5\right\}$ (in units of $10^{-23}s$) from top to bottom.}
\end{figure}
\end{frame}
\begin{frame}{Results for the Solution of the NBDE III}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/nbde_gaussian}
\caption{Equilibration of a finite Bose system from the NBDE for Gaussian initial conditions $n_{\mathrm{i}}(\varepsilon) = N_{\mathrm{i}}\left(\sqrt{2\pi}\sigma\right)^{-1}\exp\left(-(\varepsilon - \langle\varepsilon\rangle)^2/(2\sigma^2)\right)$ with $\sigma = 0.04\ \mathrm{GeV}$. \cite{Wolschin2018} \\
Here $T\simeq 0.4\ \mathrm{GeV}$, $\tau_{\mathrm{eq}} = 0.33\cdot 10^{-23} \mathrm{s}$ and the timesteps are $\left\{0.002, 0.006, 0.02, 0.2\right\}$ (in units of $10^{-23}s$) from top to bottom.}
\end{figure}
\end{frame}
\begin{frame}{Treating the Singularity}
This part is based on the publication \cite{Wolschin2020_1} which provides an extension of \cite{Wolschin2018} and was published just recently.\\[0.5em]
\begin{itemize}
\item To account for the singularity at $\varepsilon=\mu < 0$ we have to modify the initial distribution given before (eqn. (\ref{eqn:rta_initial})) as follows:
\begin{equation}
\tilde{n_{\mathrm{i}}}(\varepsilon) = n_{\mathrm{i}}(\varepsilon) + \frac{1}{\exp\left(\frac{\varepsilon-\mu}{T}\right)-1}
\end{equation}
\item The chemical potential $\mu$ has to be treated as a fixed parameter.
\item Considering the limit $\lim_{\varepsilon\rightarrow\mu^{+}}\ n(\varepsilon,t) = \infty\ \forall t$ yields $\mathcal{Z}(\mu,t) = 0$.
\item This results in a modified expression for the Green's function
\begin{equation}
G(\varepsilon,x,t) = G_{\mathrm{free}}(\varepsilon-\mu,x,t) - G_{\mathrm{free}}(\varepsilon-\mu,-x,t)
\end{equation}
\end{itemize}
\end{frame}
\begin{frame}{Results for the RTA for the modified Initial Conditions}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/rta_full}
\caption{Local thermalization of gluons in the linear RTA for $\mu<0$. \cite{Wolschin2020_1} \\
Here $T \simeq 513\ \mathrm{MeV}$ and the timesteps are $\left\{0.02, 0.08, 0.15,0.3,0.6\right\}$ (in units of $\mathrm{fm}/c$) from top to bottom.}
\end{figure}
\end{frame}
%TODO: Elaborate on Solutions#2
\begin{frame}{Results for the full Solution of the NBDE}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{figures/nbde_full_result}
\caption{Local thermalization of gluons from the time-dependent solutions of the NBDE for $\mu<0$. \cite{Wolschin2020_1} \\
Here $T \simeq 513\ \mathrm{MeV}$ and the timesteps are $\left\{6\cdot10^{-5}, 6\cdot10^{-4}, 6\cdot10^{-3},0.12,0.36\right\}$ (in units of $\mathrm{fm}/c$) from top to bottom.}
\end{figure}
\end{frame} | {
"alphanum_fraction": 0.7119506705,
"avg_line_length": 52.1275510204,
"ext": "tex",
"hexsha": "c7ab4a65d72107c9df22709258d44258bf718033",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4fa0a9503f82c007fbb196df3e665772b259355e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mathieukaltschmidt/Thermalization-of-Gluons",
"max_forks_repo_path": "talk/content/04_nbde.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4fa0a9503f82c007fbb196df3e665772b259355e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mathieukaltschmidt/Thermalization-of-Gluons",
"max_issues_repo_path": "talk/content/04_nbde.tex",
"max_line_length": 304,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4fa0a9503f82c007fbb196df3e665772b259355e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mathieukaltschmidt/Thermalization-of-Gluons",
"max_stars_repo_path": "talk/content/04_nbde.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3664,
"size": 10217
} |
\documentclass[jcp,aip,amsmath]{revtex4-1}
\usepackage{xcolor,array}
\usepackage{color}
\usepackage{txfonts}
\newcommand{\half}{\textstyle \frac{1}{2}}
\newcommand{\quarter}{\textstyle \frac{1}{4}}
% note, to complile, do something like:
%#!/bin/bash
%file=$1
%rm ``$file''.aux
%latex ``$file''.tex
%bibtex ''$file"
%latex ``$file".tex
%latex ''$file".tex
%dvips -Ppdf -o ``$file".ps "$file".dvi
%ps2pdf ``$file".ps
%echo `` done compiling $file''
%mupdf "$file.pdf" &
\begin{document}
\title{Evolvr Equations}
\author{Matthew K. MacLeod}
\date{\today}
\maketitle
\section{Maths.ex}
The code for the following equations is located in \texttt{lib/evolvr/maths.ex}.
\subsection{Linear Algebra}
The dot product, i.e.\ the inner product, of two vectors is
\begin{align}
\mathbf{A} \cdot \mathbf{B} = A^\dag B = \sum_i^n A_i B_i
\end{align}
The Euclidean norm, or distance, is defined as
\begin{align}
d_{euclid} (x,y) = \left( \sum_i^n (x_i - y_i)^2 \right)^{1/2}
\end{align}
The Minkowski distance is defined as
\begin{align}
d_{minkowski} (x,y,p) = \left( \sum_i^n |x_i - y_i|^{p} \right)^{1/p}
\end{align}
It is interesting to note that when $p$ is 1 the taxicab distance (L1 norm) is recovered, and
the L2 norm is recovered when $p$ is 2.
The Mahalanobis distance is defined as
\begin{align}
d_{mahalanobis} (x,y) = \left( \sum_i^n \frac{(x_i - y_i)^2}{s^2_i} \right)^{1/2} = \left( \sum_i^n \frac{(x_i - y_i)^2}{ \left(\frac{1}{n}(x_i - \bar{x})(y_i-\bar{y})\right)^2} \right)^{1/2}
\end{align}
The angle between two vectors, in degrees, is defined as
\begin{align}
\theta = \arccos\left(\frac{\mathbf{v} \cdot \mathbf{w}}{||\mathbf{v}||\,||\mathbf{w}||}\right) \cdot \frac{180}{\pi}
\end{align}
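For illustration, the formulas of this subsection can be sketched in plain Python as follows; this is only a sketch of the definitions above, not the Elixir implementation in \texttt{lib/evolvr/maths.ex}.
\begin{verbatim}
import math

def dot(a, b):
    # inner product: sum_i a_i b_i
    return sum(x * y for x, y in zip(a, b))

def minkowski(x, y, p):
    # p = 1 gives the taxicab (L1) distance, p = 2 the Euclidean (L2) distance
    return sum(abs(xi - yi) ** p for xi, yi in zip(x, y)) ** (1.0 / p)

def norm(v):
    return math.sqrt(dot(v, v))

def angle_degrees(v, w):
    # angle between two vectors, converted from radians to degrees
    return math.acos(dot(v, w) / (norm(v) * norm(w))) * 180.0 / math.pi

print(minkowski([0, 0], [3, 4], 2))   # 5.0
print(angle_degrees([1, 0], [0, 1]))  # 90.0
\end{verbatim}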
\subsection{Statistics}
The standard deviation, $\sigma$, is defined as the square root of the variance
\begin{align}
\sigma = \sqrt{v} = \sqrt{\frac{1}{n}\sum_i^n (x_i - \mu)^2}
\end{align}
where the mean, $\mu$, is defined as
\begin{align}
\mu = \frac{1}{n} \sum_i^n x_i
\end{align}
The sample covariance measures how two vectors vary together,
\begin{align}
\bar{\bar{q}}= cov(x,y) = \frac{1}{n-1} \sum_i (x_i - \bar{x}) (y_i - \bar{y})
\end{align}
The reason the sample covariance matrix has $n-1$ in the denominator rather than $n$ is essentially that the population mean is not known and is replaced by the sample mean.
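Again as a plain-Python sketch of the definitions above (illustrative only, not the library code):
\begin{verbatim}
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # population standard deviation: square root of the variance
    mu = mean(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

def covariance(xs, ys):
    # sample covariance with the n - 1 denominator
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

print(stdev([1, 2, 3, 4]))               # 1.118...
print(covariance([1, 2, 3], [2, 4, 6]))  # 2.0
\end{verbatim}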
\subsection{Probability}
The normal probability distribution (normal\_pdf in the code), the famous bell-shaped curve, is
defined as
\begin{align}
f(x|\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\mathrm{exp}\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\end{align}
where $\sigma$ is the standard deviation and $\mu$ is again the mean.
The beta distribution can be used for Bayesian inference (conditional probabilities) and is defined
\begin{align}
\mathrm{beta\_pdf} = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}
\end{align}
where $B(\alpha,\beta)$ is a normalization constant that ensures the total probability integrates to 1, and is given by
\begin{align}
B(\alpha,\beta) = \frac{\Gamma(\alpha) \Gamma(\beta) }{\Gamma(\alpha + \beta)}
\end{align}
This function uses the gamma function, which is the generalization of the factorial, and which is approximated in the code using Windschitl's reworking of Stirling's approximation as follows
\begin{align}
\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x} dx \approx \sqrt{\frac{2\pi}{t}}\left(\frac{1}{e}\left(t + \frac{1}{12t -\frac{1}{10t}}\right)\right)^t
\end{align}
Note that, especially for easy partial testing, positive integers should satisfy
\begin{align}
\Gamma(n)=(n-1)!
\end{align}
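A plain-Python sketch of these probability functions (using the Windschitl approximation also inside the beta normalization; illustrative only, not the library code):
\begin{verbatim}
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gamma_windschitl(t):
    # Windschitl reworking of Stirling's approximation of the gamma function
    return math.sqrt(2 * math.pi / t) * ((t + 1.0 / (12 * t - 1.0 / (10 * t))) / math.e) ** t

def beta_pdf(x, alpha, beta):
    norm = gamma_windschitl(alpha) * gamma_windschitl(beta) / gamma_windschitl(alpha + beta)
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / norm

print(gamma_windschitl(5))  # close to 4! = 24
print(normal_pdf(0.0))      # about 0.3989
\end{verbatim}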
\subsection{Machine Learning}
\subsubsection{Gradient Descent}
A numerical approximation to the derivative of a function is given by the finite differences method,
also known as the difference quotient:
\begin{align}
f' = f'(x) = \frac{df(x)}{dx} \approx \frac{f(x+h) -f(x)}{h}
\end{align}
Note that this method yields the total derivative of a function, not a partial derivative.
It can also easily be made slightly more accurate, for example by using the central difference $(f(x+h) - f(x-h))/(2h)$.
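A one-function Python sketch of the forward difference quotient:
\begin{verbatim}
def derivative(f, x, h=1e-6):
    # forward difference quotient (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

print(derivative(lambda x: x ** 2, 3.0))  # close to 6
\end{verbatim}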
\end{document}
| {
"alphanum_fraction": 0.6925449871,
"avg_line_length": 28.3941605839,
"ext": "tex",
"hexsha": "4a21dd61f74139305528f8dd6ae0c562cef0c45a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6392f0e03a8218ea70e86afdd187dcf728155072",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "matthewmacleod/evolvr",
"max_forks_repo_path": "doc/evolvr_equations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6392f0e03a8218ea70e86afdd187dcf728155072",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "matthewmacleod/evolvr",
"max_issues_repo_path": "doc/evolvr_equations.tex",
"max_line_length": 186,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "6392f0e03a8218ea70e86afdd187dcf728155072",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "matthewmacleod/evolvr",
"max_stars_repo_path": "doc/evolvr_equations.tex",
"max_stars_repo_stars_event_max_datetime": "2017-08-14T22:35:25.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-12-29T21:13:23.000Z",
"num_tokens": 1303,
"size": 3890
} |
\startcomponent ma-cb-en-paragraphs
\product ma-cb-en
\chapter{Paragraph spacing}
\section{Introduction}
\index{paragraph}
\Command{\tex{par}}
\Command{\tex{paragraph}}
In \TEX\ and \CONTEXT\ the most important unit of text is the
paragraph. You can start a new paragraph by:
\startitemize[packed]
\item an empty line
\item the \TEX\ command \type{\par}
\stopitemize
In your \ASCII\ input file you should use empty lines as
paragraph separators. This will lead to a clearly structured
and well organized file and will prevent mistakes.
In situations where a command has to be closed explicitly you
should use \type{\par}.
\startbuffer
During one of the wars Hasselt lay under siege. After some
time the city was famine stricken, everything edible was
eaten. Except for one cow. The cow was kept alive and
treated very well. \par
Once a day the citizens of Hasselt took the cow for a walk
on the ramparts. The besiegers saw the well fed cow and
became very discouraged. They broke up their camps and
Hasselt was saved. \par
In the Hoogstraat in Hasselt there is a stone tablet with a
representation of the cow that commemorates the siege and
the wisdom of the citizens of Hasselt.
\stopbuffer
\typebuffer
This could also be typed without \type{\par}s and a few empty
lines.
\startbuffer
During one of the wars Hasselt lay under siege. After some
time the city was famine stricken, everything edible was
eaten. Except for one cow. The cow was kept alive and
treated very well.
Once a day the citizens of Hasselt took the cow for a walk
on the ramparts. The besiegers saw the well fed cow and
became very discouraged. They broke up their camps and
Hasselt was saved.
In the Hoogstraat in Hasselt there is a stone tablet with a
representation of the cow that commemorates the siege and
the wisdom of the citizens of Hasselt.
\stopbuffer
\typebuffer
\section{Inter paragraph spacing}
\index{inter paragraph spacing}
\Command{\tex{setupwhitespace}}
\Command{\tex{nowhitespace}}
\Command{\tex{whitespace}}
\Command{\tex{startlinecorrection}} % VZ 2006-11-15 setup->start
\Command{\tex{blank}}
\Command{\tex{setupblank}}
\Command{\tex{startpacked}}
\Command{\tex{startunpacked}}
The vertical spacing between paragraphs can be specified by:
\shortsetup{setupwhitespace}
This document is produced with \type{\setupwhitespace[medium]}.
When inter paragraph spacing is specified there are two
commands available that are seldom needed:
\starttyping
\nowhitespace
\whitespace
\stoptyping
When a paragraph consists of a horizontal line or a framed
text like this:
\startbuffer
\framed{Ridderstraat 27, 8061GH Hasselt}
\stopbuffer
\getbuffer
Sometimes spacing is suboptimal. For that purpose you could
carry out a correction with:
\shortsetup{startlinecorrection}
So if you would type:
\startbuffer
\startlinecorrection
\framed{Ridderstraat 27, 8061GH Hasselt}
\stoplinecorrection
\stopbuffer
\typebuffer
you will get a better output. Only use these commands if
really needed!
\getbuffer
Another command to deal with vertical spacing is:
\shortsetup{blank}
The bracket pair is optional and within the bracket pair you
can type the amount of spacing. Keywords like \type{small},
\type{medium} and \type{big} are related to the fontsize.
\startbuffer
In official writings Hasselt always has the affix Ov. This is an
abbreviation for the province of {\em Overijssel}.
\blank[2*big]
The funny thing is that there is no other Hasselt in the Netherlands.
So it is redundant.
\blank
The affix is a leftover from the times that the Netherlands and
Belgium were one country under the reign of King Philip II of Spain.
\blank[2*big]
Hasselt in Belgium lies in the province of Limburg. One wonders if
the Belgian people write Hasselt (Li) on their letters.
\stopbuffer
\typebuffer
The command \type{\blank} without the bracket pair is the
default space.
The example would become:
\getbuffer
The default spacing can be set up with:
\shortsetup{setupblank}
If you want to suppress vertical spacing you can use:
\shortsetup{startpacked}
In this manual the whitespace is set at \type{medium}. In
the next situation this set up is ignored and the lines are
packed.
\startbuffer
\startpacked
Hasselt (Ov) lies in Overijssel.
Hasselt (Li) lies in Limburg.
Watch out: we talk about Limburg in Belgium. There is
also a Dutch Limburg.
\stoppacked
\stopbuffer
\typebuffer
This will become:
\getbuffer
It is not hard to imagine why there is also:
\shortsetup{startunpacked}
You can force vertical space with \type{\godown}. The
distance is specified within the brackets.
\shortsetup{godown}
\section{Indentation}
\index{indentation}
\index{paragraph+indentation}
\Command{\tex{indenting}}
\Command{\tex{noindenting}}
\Command{\tex{setupindenting}}
You can set up the amount of the indentation with:
\shortsetup{setupindenting}
A reasonable indentation is achieved by:
\starttyping
\setupindenting[medium]
\stoptyping
This will lead to indented paragraphs. By default,
indentation after white space (as issued by \type {\blank})
is suppressed.
You can locally influence the indentation state by using
\shortsetup{indenting}
When for instance you say \type {never}, from that moment
on indentation will be suppressed. Saying \type {none}
only influences the next paragraph.
If you choose to use indentations, and at a certain place you
explicitly {\em do not} want to indent, you can also say:
\starttyping
\noindenting
\stoptyping
\stopcomponent
| {
"alphanum_fraction": 0.7844103687,
"avg_line_length": 23.8173913043,
"ext": "tex",
"hexsha": "b142b75fcde0b8e9e1bf232d93d1ec6764c69bf8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "marcpaterno/texmf",
"max_forks_repo_path": "contextman/context-beginners/en/ma-cb-en-paragraphs.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "marcpaterno/texmf",
"max_issues_repo_path": "contextman/context-beginners/en/ma-cb-en-paragraphs.tex",
"max_line_length": 69,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "marcpaterno/texmf",
"max_stars_repo_path": "contextman/context-beginners/en/ma-cb-en-paragraphs.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1387,
"size": 5478
} |
\chapter{Assignment: Overfitting}
\label{hw:overfitting}
\newthought{Overfitting is something we try to avoid at all times.} But overfitting comes in many shapes and sizes. For this exercise we will use a \emph{blood-loneliness} data set with the File widget. This data set relates gene expressions in blood with a measure of loneliness obtained from a sample of elderly persons. Let's try to model loneliness with logistic regression and see how well the model performs.
\marginnote{To load the blood loneliness data set copy and paste the below URL to the URL field of the File widget.
\break\break
\url{http://file.biolab.si/datasets/blood-loneliness-GDS3898.tab}}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{overfitting1.png}%
\caption{$\;$}
\label{fig:wf1}
\end{figure}
\begin{enumerate}
\item Train the Logistic Regression model on the data and observe its performance. What is the result?
\item We have many features in our data. What if we select only the most relevant ones, the genes that actually matter? Use Rank to select the top 20 best performing features.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{overfitting2.png}%
\caption{$\;$}
\label{fig:wf2}
\end{figure}
\item How are the results now? What happened? Is there a better way of performing feature selection?
\end{enumerate}
| {
"alphanum_fraction": 0.7410329986,
"avg_line_length": 46.4666666667,
"ext": "tex",
"hexsha": "05b3380f8d98b0b1a032fe3b0ab2a7c157bd71ec",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-03-21T20:35:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-19T16:55:20.000Z",
"max_forks_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "PrimozGodec/orange-lecture-notes",
"max_forks_repo_path": "assignments/030-overfitting/overfitting.tex",
"max_issues_count": 10,
"max_issues_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378",
"max_issues_repo_issues_event_max_datetime": "2021-03-25T19:15:34.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-26T13:33:10.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "PrimozGodec/orange-lecture-notes",
"max_issues_repo_path": "assignments/030-overfitting/overfitting.tex",
"max_line_length": 414,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "PrimozGodec/orange-lecture-notes",
"max_stars_repo_path": "assignments/030-overfitting/overfitting.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T03:47:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-10-13T14:31:00.000Z",
"num_tokens": 359,
"size": 1394
} |
% !TEX root = ../thesis.tex
\chapter{Context}
\label{chap:context}
\cleanchapterquote{No computer has ever been designed that is ever aware of what it's doing; but most of the time, we aren't either.}{Marvin Minsky}{(Father of the field of AI)}
% ------------------------------------------------------------------------------
In our journey to separation of form and content in neural networks, we will now go through the stepping stones that led to \citeauthor{Gatys2015B}'s Neural Style algorithm: 1) advances in object recognition using deep neural networks, 2) the development of techniques for visualizing intermediate processing steps within deep neural networks, and 3) the evolution of style transfer tasks.
In Chapter~\ref{chap:theory} we introduced convolutional neural networks (CNNs) and gave a glimpse of how they can be used for image recognition tasks.
In this chapter, we will describe, in detail, the challenges in object recognition and how CNNs, traditionally outperformed by other techniques, have become the state of the art.
Interestingly, it is not so well understood why some network architectures are better than others and, therefore, how current ones can be improved.
Also in this chapter, we will see a number of visualization techniques that have been recently developed with the intention of eliciting new intuitions to help us move forward in the field of neural networks, some of them with unexpectedly appealing results.
Considerations on their potential artistic implications quickly appeared, and so this chapter also covers an introduction to artistic rendering and a few influential style transfer techniques.
% ------------------------------------------------------------------------------
\section{Object Recognition}
\label{sec:context:object-recognition}
We refer to general object recognition as the task of recognizing any object in natural images; it has been a very difficult problem to solve until recently.
In \autoref{fig:sec:context:object-recognition} we can see a typical task of general object recognition on a natural image.
\citet{Pinto2008} concluded in 2008 that the difficulty of the task stems from the fact that any 3D object in the world has an infinite number of representations in 2D images, as position, pose, lighting, and background vary with respect to the observer.
At the same time, as we argued in Chapter~\ref{chap:intro}, the brain is able to perform general object recognition in the real world without any apparent struggle, and so replicating the human visual system started to be perceived as a possible strategy to fully capture the complexity of the task.
\begin{figure}[t]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{gfx/object-recognition-1}
\label{fig:sec:context:object-recognition-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{gfx/object-recognition-2}
\label{fig:sec:context:object-recognition-2}
\end{subfigure}
\caption{
Typical object recognition task \cite{Wolf}.
Object recognition consists of locating objects on a natural image and giving them correct labels.
}
\label{fig:sec:context:object-recognition}
\end{figure}
As we mentioned in Chapter~\ref{chap:theory}, CNNs successfully emulate biological visual systems.
\citeauthor{LeCun2004} in \cite{LeCun2004} had already compared CNNs in 2004 with traditional techniques such as K-Nearest Neighbors and Support Vector Machines (SVM) for small-scale object recognition, finding that CNNs obtained better results in general and, especially, in non-normalized conditions.
\citeauthor{LeCun2004} used the NORB dataset for training, being at that time the largest available one, with 97200 pre-processed image pairs of 50 different objects of 5 categories.
\citeauthor{Pinto2008} also argued that NORB and other datasets available at that time, in the order of tens of thousands of images, such as Caltech-101/256 \cite{Fei-Fei2007,Griffin2007} or CIFAR-10/100 \cite{Krizhevsky2009}, were neither large nor diverse enough to be used for general object recognition training.
They found systems trained with them were highly susceptible to the variations described above in this section (i.e. pose, position, lighting, etc.) and claimed that, for a learning system to be robust against them, it would need to capture the essence of the domain rather than rely on trivial regularities of the training samples.
Such a dataset, one that could consistently represent the domain (i.e. all the objects in the real world with all their variance), would not only need to be much larger than those available back in 2008 but also to contain unbiasedly-selected natural images.
Being clear the need for larger datasets at that point, ImageNet \cite{Deng2009} was handcrafted via crowdsourcing with the aim of organizing a fraction of the vast number of images on the Internet and making them available for object recognition tasks.
ImageNet is a large-scale, comprehensive, and diverse dataset of 15 million high-resolution accurately-labeled images, organized in a semantic hierarchy of 22000 categories.
In it, each noun in English, coming from the lexical database WordNet \cite{Wilkniss1998}, is associated with hundreds of clean images.
The achievements of ImageNet are two-fold.
On the one hand, the images depict representations of the words under many different perspectives and lighting conditions, allowing robustness against visual variability.
On the other hand, the hierarchical structure of the dataset makes it possible to interlink concepts, allowing algorithms to recognize several concepts at the same time (e.g. dog, therefore mammal and animal as well).
Based on the ImageNet dataset, the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has established itself as the de-facto benchmark for object recognition algorithms.
Run as a yearly competition since 2010, it has triggered some of the most important advances in the field \cite{Russakovsky2015}.
Deep neural networks did not enter the scene until 2012.
CNNs had been terribly expensive to apply in large scale to high-resolution images until then, but \citet{Krizhevsky2012} finally managed to train a sufficiently large network for ILSVRC leveraging multiple GPUs.
They ran a GPU-optimized implementation of 2D convolutional neural networks and used ImageNet as their training set, which was diverse enough to prevent severe overfitting.
The relevance of deep neural networks in object recognition tasks did nothing but increase in the last few years, as they keep getting the top scores each year at ILSVRC \cite{Russakovsky2015}.
At the time of writing this thesis, the use of GPU computing in deep neural networks is now widespread, facilitated by deep learning frameworks that provide GPU-optimized implementations of convolutions and all the operations necessary for both classification and training \cite{Bahrampour2015}.
Deep learning frameworks are just programming libraries that provide out-of-the-box tools for easily implementing deep neural networks, allowing researchers to focus on the architecture.
Some of the most popular ones currently are Caffe, developed by the Berkeley Vision and Learning Center \cite{Jia2014}; Theano, developed by l’Université de Montréal \cite{Bergstra2010}; TensorFlow, developed by Google \cite{Abadi2015}; or Torch, supported by Facebook and Nvidia \cite{Collobert2002}.
Deep learning frameworks have contributed much to the reproducibility of experiments with neural networks, as they simplify the requirements for sharing architectures as well as trained models between researchers.
In Chapter~\ref{chap:system}, we will see how the VGG network \cite{Simonyan2014}, one of the winners of ILSVRC 2014, rivaling human performance \cite{Russakovsky2015}, implemented in Caffe, and publicly available \cite{Simonyan2014web}, is used to effectively separate style and content.
Next, we will talk about the interest that grew for understanding what exactly happens within intermediate layers of CNNs trained for object recognition, resulting in a number of visualization techniques with surprising results.
% ------------------------------------------------------------------------------
\section{Visualization of Convolutional Neural Networks}
\label{sec:context:deep-visualization}
For a long time, neural networks have been perceived as ``black boxes''.
Although we can perceive the training phase as a simple function optimization process, trained models consist of a large number of trained non-linear parameters that we cannot easily comprehend in a sensible way \cite{Yosinski2015}.
Deep neural networks, like CNNs, are particularly difficult to study due to their size.
Learnable parameters in them are in the order of millions, and this is part of why we understand very little about why some models work better than others.
Gaining insight into how trained models function is one way towards improving them, as new intuitions may arise or wrong assumptions may become apparent.
In an effort to shed some light, several studies have been done lately in the direction of providing techniques for visualizing different representations of what happens within CNNs trained for object recognition \cite{Dosovitskiy2015,Yosinski2015,Zeiler2014}.
We know each layer of trained CNNs extracts increasingly higher-level features of an input image until eventually, the last layer emits a decision of what was depicted in the image.
As we described in \autoref{sub:concepts:convnets:properties}, lower layers tend to extract edges or corners, while intermediate ones look for more complex elements like shapes, faces, or textures.
Finally, the last few layers use them to interpret high-level concepts like forests or streets.
Some of the visualization techniques we are about to see help us inspect how this actually happens.
\citet{Mahendran2014} study how to obtain different visualizations from the internal representations at intermediate CNN layers showing that, as expected, CNNs gradually build an increasing amount of tolerance towards variance.
This is visible in \autoref{fig:sec:context:deep-visualization:deep-visualization-reconstructions-1}, as representations of the initial monkey image become fuzzier, less specific layer after layer.
For constructing these visualizations, \citeauthor{Mahendran2014} use gradient descent to find a new image that matches the feature space at a particular layer in a very similar way as we will discuss in more detail in Chapter~\ref{chap:system}.
Interestingly, selecting different subsets of feature channels produces texturized versions of the original image, as shown in \autoref{fig:sec:context:deep-visualization:deep-visualization-reconstructions-2}.
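To make the mechanism concrete, the following PyTorch sketch reconstructs an image by gradient descent so that it matches the feature response of a chosen layer; the network (VGG-16), the layer index, and the optimiser settings are illustrative assumptions rather than the exact choices of \citet{Mahendran2014}.
\begin{verbatim}
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layer=10):
    # forward pass up to (and including) the chosen layer index
    for i, module in enumerate(vgg):
        x = module(x)
        if i == layer:
            return x

target_image = torch.rand(1, 3, 224, 224)  # stand-in for a real input image
with torch.no_grad():
    target = features(target_image)

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([x], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = ((features(x) - target) ** 2).mean()  # match the layer's feature response
    loss.backward()
    opt.step()
\end{verbatim}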
\begin{figure}[b]
\includegraphics[width=\textwidth]{gfx/deep-visualization-reconstructions-1}
\caption{
CNN layer visualizations of a monkey image generated via gradient descent \cite{Mahendran2014}.
Whereas the first layers (top row) maintain faithful representations of the original image, although increasingly fuzzy, invariance seems to appear in the last few (bottom row).
}
\label{fig:sec:context:deep-visualization:deep-visualization-reconstructions-1}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\textwidth]{gfx/deep-visualization-reconstructions-2}
\caption{
CNN visualization of the first convolutional layer selecting different subsets of feature channels for the gradient descent generation \cite{Mahendran2014}.
Depending on the selected channels, the visualizations are tuned towards different image parameters, producing texturized versions of the original image.
}
\label{fig:sec:context:deep-visualization:deep-visualization-reconstructions-2}
\end{figure}
\citet{Simonyan2014B} propose another visualization method, also through gradient descent, that generates class notions from a trained CNN.
That means a new image is generated in a way that the trained CNN would classify it with total certainty as the desired object.
In \autoref{fig:sec:context:deep-visualization-class}, some examples of this technique are displayed with clearly psychedelic results.
The generated images do not resemble natural images very much, more like graphic artifacts instead, but the network will anyway classify them correctly as these artifacts have been generated to cause precisely the right neural activations.
\begin{figure}[t]
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-1}
\caption{Dumbbell}
\label{fig:sec:context:deep-visualization-class-1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-2}
\caption{Dalmatian}
\label{fig:sec:context:deep-visualization-class-2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-3}
\caption{Bell pepper}
\label{fig:sec:context:deep-visualization-class-3}
\end{subfigure}
\par\medskip
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-4}
\caption{Keyboard}
\label{fig:sec:context:deep-visualization-class-4}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-5}
\caption{Kit fox}
\label{fig:sec:context:deep-visualization-class-5}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{gfx/deep-visualization-class-6}
\caption{Goose}
\label{fig:sec:context:deep-visualization-class-6}
\end{subfigure}
\caption{
Images generated with gradient descent to artificially generate a desired classification, given a trained CNN \cite{Simonyan2014B}.
}
\label{fig:sec:context:deep-visualization-class}
\end{figure}
Google's DeepDream algorithm \cite{Mordvintsev2015} uses this same visualization technique to produce dream-like images in a process they call ``inceptionism'', useful for getting an idea of the level of abstraction a particular layer has achieved in its understanding of images.
DeepDream works as some sort of glorified \emph{pareidolia} that enhances whatever a trained CNN recognizes in an original image, replicating it in different sizes on every pass.
\autoref{fig:sec:context:deep-visualization:dream-buildings} shows images depicting scenery produced by DeepDream using random noise images as input and a CNN trained on places as the dreamer.
This can be explained because a CNN trained on recognizing places has probably seen lots of structures, fountains, and trees.
Also, because white noise images carry no information, we could say that these images are purely a product of the neural network's understanding of the world.
\begin{figure}[p]
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{gfx/dream-buildings-1}
\end{subfigure}
\par\medskip
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=\textwidth]{gfx/dream-buildings-2}
\end{subfigure}
\caption{
Images generated with DeepDream from a white noise image and a CNN trained on places by the MIT Computer Science and AI Laboratory \cite{Mordvintsev2015}.
Pagodas, fountains, bridges, and trees seem to be the most recurrent elements in them because the CNN was trained with thousands of images of them.
Therefore, when ``dreaming,'' that is what the network sees.
}
\label{fig:sec:context:deep-visualization:dream-buildings}
\end{figure}
One final example of visualizations is given by \citet{Nguyen2014}, highlighting some of the problems of current CNN models.
Although they already rival human performance in many aspects, we can easily fool them with generated images that in no way resemble what they claim to recognize.
For instance, in \autoref{fig:sec:context:deep-visualization:deep-visualization-fooling} we observe some of these fooling images that CNNs classify with total certainty as belonging to the categories shown below them.
These images have been generated with an evolutionary algorithm that applies random modifications and keeps those that better fit the desired category.
\begin{figure}[t]
\includegraphics[width=\textwidth]{gfx/deep-visualization-fooling}
\caption{
Mislabeled images by a CNN trained for object recognition \cite{Nguyen2014}.
Called ``fooling images'' and obtained with an evolutionary algorithm that applies random perturbations (mutation and/or crossover) on images and keeps those that produce the highest prediction in a desired category.
}
\label{fig:sec:context:deep-visualization:deep-visualization-fooling}
\end{figure}
All visualization methods described above hold some degree of artistic ``intention,'' as their respective authors observe.
First, \citeauthor{Mahendran2014}'s texturized images in \autoref{fig:sec:context:deep-visualization:deep-visualization-reconstructions-2} emerge naturally without the trained network having been encouraged to show it in any way.
Then, \citeauthor{Mordvintsev2015} speculate about the artistic implications of DeepDream, both as a rendering tool and as a step towards understanding the essence of the creative process in the human brain.
Lastly, \citeauthor{Nguyen2014} submitted some of these images to a selective art contest, the \emph{University of Wyoming 40th Annual Juried Student Exhibition}, got a third of them accepted, and some of them even received an award.
Artistic rendering, however, has been traditionally studied as a field of computer vision, and the problem of the separation of style and content was identified long ago as one of the main challenges for applying \emph{style transfer}.
% ------------------------------------------------------------------------------
\section{Style Transfer}
\label{sec:context:style-transfer}
We can understand artworks and paintings as the composition of form and content, as we commented in Chapter~\ref{chap:intro}.
Style transfer is the task of synthesizing a new stylized image with the same content as some source image and the style of a style sample.
To accomplish style transfer, not only the style-content combination is important, but even more so is the definition and extraction of style and content.
The difficulty of the latter is intrinsic to the separation of style and content, which is a fundamental problem, since not even in the art history literature does there seem to be a consensus as to what is content and what is style \cite{Xie2007}.
Because of this fundamental difficulty, we cannot use any formally-described evaluation for style transfer, and so we resort to appealing to the human visual system, as it will be the ultimate consumer of the generated images \cite{Lin2011}.
The main criteria commonly used are the score from an \emph{artistic Turing test} and the \emph{perceptual quality}.
Whereas the artistic Turing test score will be higher the more difficult it is for humans to distinguish generated images from those created by hand \cite{Kyprianidis2013}, the perceptual quality refers to the subjective degree of similarity with the original images providing content and style.
Style transfer is studied in the field of \emph{non-photorealistic rendering}, in development since the late 1980s as a branch of computer vision, which attempts to provide tools for mimicking the style of paintings or animated cartoons in digital art, both in 2D and 3D \cite{Lee2010, Kyprianidis2013}.
Non-photorealistic rendering has become a highly multidisciplinary field, being influenced by progress done in computer graphics, perceptual modeling, and human-computer interaction.
We will focus only on a small collection of 2D image-based style transfer techniques.
In particular, we will analyze those based on texture synthesis most closely related to Neural Style \cite{Gatys2015B}.
Style transfer is often approached as a texture transfer operation, sharing many of the challenges of texture synthesis \cite{Ashikhmin2003}.
Given a small texture example, texture synthesis algorithms try to generate a texture of arbitrary size.
On the other hand, texture transfer takes a target image and a texture source and modifies part of the target image with information with the given texture, while trying to maximize the perceptual quality of the result.
\citet{Hertzmann2001} proposed in 2001 a statistical-based method for style transfer called Image Analogies.
It generates a new image $B'$ that relates to $B$ in the same way as $A'$ relates to $A$, where $A$, $A'$, and $B$ are the inputs to the system and $B'$ is the output.
Conceptually, given an image $A$ and its filtered version $A'$, the system learns the transformation $A \mapsto A'$ and applies it to a target image $B$ to create an analogously filtered image $B'$.
\autoref{fig:sec:context:style-transfer:style-transfer-analogy} shows the algorithm at a glance.
The limitation of this approach is clear, as both an original image and its artistic version are required.
\begin{figure}[t]
\includegraphics[width=\textwidth]{gfx/style-transfer-analogy}
\caption{
The Image Analogies algorithm \cite{Hertzmann2001}.
Given the apple image and its filtered version, the algorithm learns the luminance transformation that must be applied to the boats image to produce an analogous filtered new image.
}
\label{fig:sec:context:style-transfer:style-transfer-analogy}
\end{figure}
Later, in 2003, \citet{Ashikhmin2003} developed an algorithm called Fast Texture Transfer that produces similar results but requires only a single texture sample.
The algorithm uses \emph{Markov random fields} to model the sample texture and then it generates the stylized image by growing texture patches, blending them pixel by pixel with those of the target image.
A limitation of this style transfer approach, also shared with \citeauthor{Hertzmann2001}'s, is that texture coherence can only be preserved locally \cite{Lee2010} and, thus, only small texture patches are applied, as shown in \autoref{fig:sec:context:style-transfer:style-transfer-fast-texture}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{gfx/style-transfer-fast-texture}
\caption{
Results of applying Fast Texture Transfer with different texture samples \cite{Ashikhmin2003}.
At the left, the target image on which the style is applied.
At the center, the target image with a hatching drawing texture applied.
At the right, the target image with a charcoal drawing texture applied.
}
\label{fig:sec:context:style-transfer:style-transfer-fast-texture}
\end{figure}
\citet{Xie2007} proposed in 2007 a new method called Feature Guided Texture Synthesis (FGTS) that better preserves the content of the target image, as we can observe in \autoref{fig:sec:context:style-transfer:style-transfer-feature}.
In this algorithm, first, a feature field is calculated from the target image trying to find edges and corners, assuming those pixels carry more information for the human visual system.
Then, a style transfer process similar to the previous one applies the texture but preserves the pixels specified in the feature field.
This method, like the ones described before, uses low-level statistical features.
\citeauthor{Xie2007} already mentioned the disadvantage of this and anticipated a sharper separation of style and content using higher-level, larger-scale features involving prior human knowledge.
\begin{figure}[p]
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-1}
\caption{Style example}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-2}
\caption{Target image}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-3}
\caption{Feature image}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-4}
\caption{Feature field}
\end{subfigure}
\par\medskip
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-5}
\caption{Analogies}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{gfx/style-transfer-feature-6}
\caption{FGTS}
\end{subfigure}
\caption{
Comparison between Image Analogies and Feature Guided Texture Synthesis \cite{Xie2007}.
On the top row, from left to right: the style example (a) to be transferred to the target image (b), the computed features from the target image (c), and the feature field defining which pixels carry the most content information and therefore should be preserved (d).
Bottom left (e), the result produced by image analogy with the most severe content losses highlighted in red.
Bottom right (f), the result produced by FGTS without the content losses seen in (e).
}
\label{fig:sec:context:style-transfer:style-transfer-feature}
\end{figure}
\begin{figure}[p]
\includegraphics[width=\textwidth]{gfx/style-transfer-directional}
\caption{
Results of applying Directional Texture Transfer \cite{Lee2010}.
First, the flow of the target image (T) is calculated and then interpolated with that of the style source (S), eventually used to guide the strokes that compose the result image (R).
}
\label{fig:sec:context:style-transfer:style-transfer-directional}
\end{figure}
Lastly, in 2010, \citet{Lee2010} proposed an improved method called Directional Texture Transfer that finally transferred larger-scale patterns.
The algorithm attains this goal by taking into account the flow of a target image and interpolating it with that of a style source, using the result as the guide to place brush strokes.
We can see in \autoref{fig:sec:context:style-transfer:style-transfer-directional} how this approach preserves the different regions of the target image very sharply, as the strokes are transferred with different intensity.
This method creates compelling results, but it is limited to stroke-based rendering and far from the generalization we are looking for in a holistic separation of style and content.
\citeauthor{Kyprianidis2013}, in a style transfer techniques survey \cite{Kyprianidis2013} published in 2013, identified a clear tendency towards global-scope approaches, like \citeauthor{Lee2010}'s, rather than local-level statistical measurements, like the first three we discussed in this section.
Global scope approaches require taking larger-scale patterns into account, which is, precisely, what deep neural networks trained for object recognition are very good at.
Two years after \citeauthor{Kyprianidis2013}'s survey, \citeauthor{Gatys2015B}'s Neural Style algorithm \cite{Gatys2015B} resorted to deep neural networks and outperformed every other technique used in style transfer to date.
Even more interesting, it has given us a way to crack open the problem of separation of style and content.
| {
"alphanum_fraction": 0.7892356899,
"avg_line_length": 84.213622291,
"ext": "tex",
"hexsha": "5ff689c227c0eb68bc6fcd0712ce25155d8e9b08",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "OmeGak/ai-thesis",
"max_forks_repo_path": "content/chapter-context.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "OmeGak/ai-thesis",
"max_issues_repo_path": "content/chapter-context.tex",
"max_line_length": 389,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6bf2d5363df683db37e15ea6a828160210401763",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "OmeGak/ai-thesis",
"max_stars_repo_path": "content/chapter-context.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6145,
"size": 27201
} |
\section{Discussion \& Future Work}
We investigated how directly manipulable abstract blocks affect communication in collaborative sketching. In an observational design study, we found that abstract blocks gave groups a way to break down drawing compositions through movable, simple shapes that de-emphasized details and permanence. They allowed \textit{abstraction} groups to communicate higher-level concepts and explore different ideas and compositions relative to \textit{freeform} groups, which used planning phases to discuss more superficial details rather than compositional concepts. In this section, we discuss the implications of these findings for future creativity support tools.
\subsection{Tools for Supporting Abstract Blocking}
This chapter investigated how scaffolding abstraction through abstract blocks affects exploration and communication in the task of collaborative drawing.
In any collaborative task, communication can leave ephemeral, intangible artifacts that aid groups in coordinating actions and developing rapport \cite{Davis2017,Davis2016}. For example, group dynamic and physical gestures are important for ideation and building consensus in collaborative drawing tasks \cite{Bly1988, Tang1991}. In our observations, groups used both verbal and non-verbal forms of communication (like gesturing) to make decisions and share ideas \cite{goldin1999}. However, for \textit{abstraction} groups, abstract blocks helped participants concretely express intangible ideas through the placement of directly manipulable shapes, giving groups a malleable visualization of a concept that groups could modify as they conversed. The tangibility of making abstract concepts feel more concrete and importantly, changeable, seemed to be the primary benefit of abstraction blocks.
Groups could, of course, use blocking and similar abstraction strategies without a need for physical blocking tools. However, we observed in our study that \textit{freeform} groups did not use such strategies with the exception of one group who had prior experience in composition concepts through photography. Though sketches are meant to be blocking tools themselves, novices especially do not often separate exploratory sketching from refined drawing \cite{welch2000sketching}, as we observed in \textit{freeform} groups. When a concrete abstraction tool---abstract blocks---was made available, \textit{abstraction} groups were more inclined to create abstract representations. Some groups chose to use text to label blocks, others chose to lightly sketch, but importantly, \textit{abstraction} groups did not decorate blocks with heavy detail. Abstract blocks may have helped form common understanding of the drawing among collaborators \cite{olson2000distance}, helping groups maintain discourse on higher-level concepts.
Abstraction through abstract blocks may be even more beneficial in online settings where intangible communication is difficult. In such settings, groups must rely on technological affordances to effectively collaborate \cite{Hollan1992,Jensen2018}.
Future collaborative tools could explicitly encourage abstract blocking, as opposed to requiring creators to figure out how to create abstract representations of their work themselves through existing tools (\textit{e.g.}, shape tools in Microsoft PowerPoint or Adobe Photoshop).
One might also imagine creativity support tools where abstraction-oriented tools and detail-oriented tools can be used in a complementary manner. Our study also showed that there is a time and place for both kinds of representations in a collaborative creative task, and that there is value in being able to move between abstract and concrete representations of a drawing. Rather than needing two separate creativity tools---one for drafting a prototype and one for generating a final creative work---future work might investigate how a tool could help a group seamlessly navigate between two views of the same work.
\subsection{Supporting Creative Collaboration}
Our studies were done at a relatively small scale; abstract blocks could also support structured, large-scale crowdsourcing workflows. One way people seek to collaborate with a crowd in creative work is through asking for feedback or guidance online through discussion forums or creative communities \cite{kuznetsov2010rise,settles2013,wasko2005should} like Behance (www.behance.com). Much of the exchange and communication occurs via text comment feedback and relies on the creator to interpret suggestions, make changes, and update their work. Methods for structuring communication and exchange among crowd collaborators include assessment \cite{Dow2012} and framing \cite{hicks2016framing} prompts, but much of creative work relies on being to \textit{see} and understand changes. In our observations, we saw that collaborators formed a consensus around sketch ideas through abstract blocking. Similar to bringing in examples as a way to show ``something like this'' \cite{kang2018paragon}, abstract blocks enable quick ways to visually demonstrate ideas without focusing on details.
We also observed that groups used abstract blocks as a way to delegate tasks among collaborators, using the blocks as structured components for each collaborator. At a larger scale, abstract blocking could aid in managing input from a large number of contributors. For example, in leader-facilitated crowdsourcing approaches such as flash teams, a leader could assign modular tasks to a team through blocks \cite{Retelny2014,Salehi2018,Valentine2017} or signal which parts of a creative work are open for collaboration \cite{kim2014ensemble}. In remixing communities, contributors might use blocking to build upon certain structural components of an existing project \cite{Resnick2009}. From a social perspective, abstract blocks can also give a tangible way for interactively exploring ideas, which could be particularly useful in creative livestreams where a crowd can engage with an artist and contribute ideas beyond text chat suggestions and audience voting games \cite{Fraser2019}. Future work should examine how abstract blocking might impact communication at a larger-scale across domains.
\subsection{Rich Abstraction or Simple Details?}
One limitation of using abstract blocks was a decrease in improvisation. Serendipitous creativity can occur through collaborators building and improvising off others' ideas \cite{Davis2017,Davis2016}. We saw this in the \textit{freeform} groups where groups would chain ideas together, adding on different details as the group discussed. Because \textit{abstraction} groups created a concrete composition plan ahead of drawing, we observed more expedient drawing, but with less improvisation overall. However, we did observe improvisation and exploration during the planning process when groups were first discussing sketch ideas. \textit{Abstraction} groups explored alternative composition ideas rather than details early on, reflecting the expert process we observed. One area for future work is examining what level of abstraction is appropriate at different stages of creative work. Could abstraction blocks be used to also specify concrete details? This could allow both for exploration of high-level compositions as well as exploration of lower-level details while removing the need for higher-fidelity sketching. Tools could provide different modes for different levels of abstraction, similar to multi-layered interfaces that adaptively disclose functionality as the user progresses \cite{shneiderman2002promoting}.
Another limitation to using abstract blocks is it is unclear \textit{how much} detail should be abstracted. Some groups in our study simply used text labels on blocks to communicate an idea. Others used lightly sketched images. One group used image search to provide a reference for their idea. Each of these representations might differentially impact communication and understanding of the ideas presented. In addition, most \textit{abstraction} groups did not discuss nuanced composition details such as the direction a character is facing or their position in the drawing. Instead, much of the discussion emphasized placement since the abstract blocks focused on adjusting the sketch's overall layout. We saw more of a discussion of these details in the \textit{freeform} group.
Our observational design study presents several opportunities for how tools might implement abstraction strategies such as abstract blocking. The first is in supporting large-scale collaboration. We observed how tangible and malleable blocks helped groups communicate and iterate upon abstract, exploratory ideas. One question for future research is how abstract blocks might support communicating composition details while maintaining flexibility and high-level focus without compromising the benefits afforded by removing detail in the first place. Another opportunity is in comparing the use of abstraction blocks in individuals versus a group and seeing how exploration strategies might change based on the level of communication and collaboration. Lastly, we examined abstraction blocks in sketching, but the concept could apply to any domain where high-level goals can be broken into chunks. Future work should examine the nature of abstraction and how abstraction blocks might be utilized in domains outside of visual work.
\subsubsection{Summary}
In this chapter, we describe and evaluate using abstraction blocks, a technique where visual content is \emph{abstracted} to simple shapes to facilitate communication and collaboration on visual creative work such as drawings. By abstracting elements of visual work, we found in an observational design study that people are more likely to explore high-level concepts such as composition and perspective and are less likely to fixate on details of an early idea. Collaborative tasks rely on effective communication between contributors. For novices especially, communicating higher-level goals and changes can be challenging without domain or procedural knowledge. Abstraction blocks scaffold this communication, using simple shapes as a flexible medium to create conceptual chunks rather than detailed sketches. This abstraction makes the invisible visible and the intangible tangible. Collaborative creativity support systems, both for visual creative work and beyond, should include mechanisms for attuning users to the right level of abstraction to explore and communicate higher-level concepts.
\subsubsection{Acknowledgements}
We thank our research participants for their time and efforts. This research was funded in part by Adobe Research.
This chapter, in part, is being prepared for submission for publication by Tricia J. Ngoon, Joy O. Kim, and Scott Klemmer. The dissertation author was the primary investigator and author of this material. | {
"alphanum_fraction": 0.8339001746,
"avg_line_length": 339.96875,
"ext": "tex",
"hexsha": "6ce1281497849f011c17fbd6676a09623deaeaed",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1e4956137d90f55e0437b94576eb2de655f0bc64",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tngoon/ucsd-thesis",
"max_forks_repo_path": "abstraction/5_discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1e4956137d90f55e0437b94576eb2de655f0bc64",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tngoon/ucsd-thesis",
"max_issues_repo_path": "abstraction/5_discussion.tex",
"max_line_length": 1324,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1e4956137d90f55e0437b94576eb2de655f0bc64",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tngoon/ucsd-thesis",
"max_stars_repo_path": "abstraction/5_discussion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2008,
"size": 10879
} |
\subsection{Getting values by field}
\documentclass[12pt]{article}
\usepackage[english]{babel}
%\usepackage{subcaption}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{amsmath}
\graphicspath{{images/}}
\usepackage{geometry}
\geometry{
a4paper,
total={170mm,257mm},
left=20mm,
top=20mm,
}
\begin{document}
\section{Image Classification using Convolutional Neural Network}
\subsection{Introduction}
In this project, we are given a dataset containing images of dogs and cats. The images in the dataset are divided into training and testing sets. The goal of this project is to create a convolutional neural network which can learn to recognize an image and say whether it shows a dog or a cat. This report contains a detailed description of the pipeline and the details of the neural net used.
\subsection{Software Used:}
\begin{enumerate}
\item Python 3
\item Google Colab
\item Jupyter Notebook
\end{enumerate}
\subsection{Packages Used:}
\begin{enumerate}
\item Pytorch
\item torchvision
\end{enumerate}
\subsection{Implementation using Google Colab:}
We used Google Colab to train our neural net using the GPU available in it. We uploaded the dataset to Google Drive and linked Google Colab to the drive. Hence, we were able to access the dataset in Google Drive directly from Google Colab.
\subsection{Pipeline Employed}
\begin{enumerate}
\item \textbf{Data Preparation:} The input dataset provided contains train and test folders. The cat and dog images in the train folder were mixed, so we prepared separate folders for the cat and dog images within it.
\item \textbf{Pre-Processing the dataset:} Before training the neural net, we converted the images to grayscale, because the colour of an image did not provide any additional feature: with or without colour, the number of useful features remained the same, while colour images have three channels, which increases the number of computations when the convolution operation is performed in the neural net. After converting the images to grayscale, we stored them in a .npy file in order to access them later during training.
\item \textbf{Neural Net Architecture:}
This is the neural network we used for training and testing the model.
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{nn}
\caption{Neural Network}
\label{fig:Neural Network}
\end{figure}
The neural network consists of three hidden layers: the first has 32 channels, the second 64 channels, and the third 128 channels. The neural network takes the grayscale images as input (a hedged PyTorch sketch of such an architecture is given after this list).
\item \textbf{Training the Neural Net:}
After creating the neural net, we need to pass the data to it for training. We now load the stored .npy file, which contains the training images of dogs and cats as a numpy array. In addition, the accuracy and loss obtained in each epoch are stored for plotting.
\item \textbf{Testing the Neural Net:}
The testing is performed on the neural net in a similar way to training. The labelled data not used for training is used for testing, and accuracy and loss are calculated for a varying number of epochs.
\subsection{Improving Accuracy of the neural network:}
Initially, with the number of epochs set to 2, the accuracy was about 75\%. The accuracy improved when the number of epochs was increased to 20. The final batch size, after tweaking, was 100.
\subsection{Outputs:}
Following are the graphs of accuracy and loss with respect to the number of epochs, for both training and testing data. From the output graphs, it can be inferred that the accuracy of the neural net increases and the loss decreases as the number of epochs increases.\\
\textbf{Training Results:}
\begin{figure}[h]
\centering
\includegraphics[width=9.5cm]{accuracy_train}
\caption{Accuracy Results}
\label{fig:Accuracy Results}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=9.5cm]{loss_train}
\caption{Loss Results}
\label{fig:Loss Results}
\end{figure}
\end{enumerate}
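The following listing is a minimal, hypothetical PyTorch sketch of an architecture matching the description above: three convolutional layers with 32, 64, and 128 channels operating on single-channel grayscale input. The kernel sizes, pooling, the $50\times50$ input resolution, the fully connected head, and the training step are assumptions made for illustration only and are not taken from the original code.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CatDogNet(nn.Module):
    def __init__(self, input_size=50):
        super().__init__()
        # Three convolutional layers: 1 -> 32 -> 64 -> 128 channels
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=5)
        # Determine the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_flat = self._features(
                torch.zeros(1, 1, input_size, input_size)).numel()
        self.fc1 = nn.Linear(n_flat, 512)
        self.fc2 = nn.Linear(512, 2)  # two classes: cat and dog

    def _features(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.max_pool2d(F.relu(self.conv3(x)), 2)
        return x

    def forward(self, x):
        x = self._features(x)
        x = x.flatten(start_dim=1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw logits; pair with CrossEntropyLoss

if __name__ == "__main__":
    # One training step on a random batch of 8 grayscale 50x50 images
    net = CatDogNet()
    images = torch.rand(8, 1, 50, 50)
    labels = torch.randint(0, 2, (8,))
    loss = nn.CrossEntropyLoss()(net(images), labels)
    loss.backward()
\end{verbatim}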
\section{Team Members:}
1. Eashwar Sathyamurthy
2. Akwasi A Obeng
3. Achal P Vyas
\end{document}
\hspace{\parindent}Several types of tests have been performed for the app. Most of the testing was done during app development, so the amount of additional unit testing needed afterwards was reduced.
\subsection{Unit testing}
\hspace{\parindent}The main part of the testing was done by unit testing.
Four major unit tests have been created that test the functionalities of the main parts of the app. Each main test has several subtests. For code conciseness, each main test is not a set of small tests but one large test that exercises multiple parts of the subsystem.\\ \\
When counting the total number of tested subcomponents, we have performed roughly 30 small tests (7 + 12 + 6 + 5), separated into four major groups, which are:
\begin{itemize}
\item Login tests (7)
\item Trip Creation Activity tests (12)
\item Navigation tests (6)
\item Map screen tests (5)
\end{itemize}
These tests are now going to be showcased and presented.
\subsubsection{Login unit testing}
The tests done for login testing:
\begin{itemize}
\item Inputting username
\item Inputting password
\item Testing login
\item Testing login with false authentication
\item Testing login with correct authentication
\item Logging in with Google Account
    \item Logging in with Facebook account
\end{itemize}
The code:
\begin{verbatim}
@RunWith(AndroidJUnit4::class)
class LoginTest : TestCase() {
@OptIn(
ExperimentalFoundationApi::class,
kotlinx.coroutines.ExperimentalCoroutinesApi::class,
androidx.compose.animation.ExperimentalAnimationApi::class,
androidx.compose.material.ExperimentalMaterialApi::class
)
@get:Rule
val composeTestRule = createAndroidComposeRule<LoginActivity>()
@OptIn(
ExperimentalCoroutinesApi::class,
androidx.compose.foundation.ExperimentalFoundationApi::class,
androidx.compose.animation.ExperimentalAnimationApi::class,
androidx.compose.material.ExperimentalMaterialApi::class
)
@Test
fun checkLogin() {
val login = composeTestRule.onNodeWithTag("login")
val usr = composeTestRule.onNodeWithTag("username")
val pwd = composeTestRule.onNodeWithTag("password")
usr.performTextInput("[email protected]")
pwd.performTextInput("vins")
login.performClick()
composeTestRule.onNode(hasText("Authentication failed"))
assertFalse(composeTestRule.activity.isNewUser ?: false)
pwd.performTextClearance()
pwd.performTextInput("vinsvins")
login.performClick()
Thread.sleep(5000)
getInstrumentation().runOnMainSync {
val activity =
ActivityLifecycleMonitorRegistry.getInstance().getActivitiesInStage(
Stage.RESUMED
).first()
assertTrue(activity is MainActivity)
}
}
}
\end{verbatim}
\subsubsection{Trip creation activity unit testing}
The tests done for trip creation activity testing:
\begin{itemize}
\item Tests whether the trip can be created correctly.
\item The following elements of the trip creation screen are tested:
\item Inputting text to name field
\item Inputting text to description field
\item Inputting multiple tags
    \item Searching destinations by city name
\item Adding steps
\item Adding main destinations
\item Exiting without saving
\item Exiting with saving
\item Trying to save trip without filling all the required fields
\item Failing to save trip
\end{itemize}
\begin{verbatim}
- @RunWith(AndroidJUnit4::class)
class TripCreationActivityTest : TestCase() {
@get:Rule
val composeTestRule = createAndroidComposeRule<TripCreationActivity>()
@OptIn(ExperimentalTestApi::class)
@Test
fun create() {
composeTestRule.onNodeWithTag("name").performTextInput("Nome")
composeTestRule.onNodeWithTag("description").
performTextInput("Description")
val tag = composeTestRule.onNodeWithTag("tag")
tag.performTextInput("tag1")
tag.performTextInput(",")
tag.performTextInput("tag2")
tag.performTextInput(",")
composeTestRule.onNodeWithTag("main").performClick()
Thread.sleep(1000)
composeTestRule.onNodeWithTag("searchText").
performTextInput("verona")
composeTestRule.onNodeWithTag("searchText").
performImeAction()
Thread.sleep(4000)
composeTestRule.
onAllNodes(hasText("Verona",true,true)).
onFirst().assertExists()
val addStep = composeTestRule.onAllNodesWithTag("add").onFirst()
addStep.assertExists()
addStep.performClick()
val setMain = composeTestRule.onAllNodesWithTag("main").onFirst()
setMain.assertExists()
setMain.performClick()
composeTestRule.onNodeWithTag("back").performClick()
Thread.sleep(500)
val nodes = composeTestRule.onAllNodes(hasText("verona",true,true))
assertNotNull(nodes)
nodes.assertCountEquals(2)
composeTestRule.onNodeWithTag("confirm").performClick()
Thread.sleep(2000)
InstrumentationRegistry.getInstrumentation().runOnMainSync {
val activity =
ActivityLifecycleMonitorRegistry.getInstance().
getActivitiesInStage(
Stage.DESTROYED
).first()
assertTrue(activity is TripCreationActivity)
}
}
@Test
fun creationFailure() {
for (i in 0..3) {
if (i == 0)
composeTestRule.onNodeWithTag("name").
performTextInput("Nome")
if (i == 1)
composeTestRule.onNodeWithTag("description")
.performTextInput("Description")
if (i == 2) {
val tag = composeTestRule.onNodeWithTag("tag")
tag.performTextInput("tag1")
tag.performTextInput(",")
tag.performTextInput("tag2")
tag.performTextInput(",")
}
if (i == 3) {
composeTestRule.onNodeWithTag("main").performClick()
Thread.sleep(1000)
composeTestRule.onNodeWithTag("searchText").
performTextInput("verona")
composeTestRule.onNodeWithTag("searchText").
performImeAction()
Thread.sleep(4000)
composeTestRule.onAllNodes(hasText("Verona", true, true)).onFirst().assertExists()
val addStep = composeTestRule.onAllNodesWithTag("add").onFirst()
addStep.assertExists()
addStep.performClick()
val setMain = composeTestRule.onAllNodesWithTag("main").onFirst()
setMain.assertExists()
setMain.performClick()
composeTestRule.onNodeWithTag("back").performClick()
Thread.sleep(500)
val nodes = composeTestRule.onAllNodes(hasText("verona", true, true))
assertNotNull(nodes)
nodes.assertCountEquals(2)
}
composeTestRule.onNodeWithTag("confirm").performClick()
Thread.sleep(1000)
val text = composeTestRule.activity.getString(R.string.not_enough_info)
if (i != 3)
onView(withText(text)).check(matches(isDisplayed()))
Thread.sleep(2000)
}
}
}
\end{verbatim}
\subsubsection{Navigation unit testing}
The tests done for navigation testing:
\begin{itemize}
\item Navigation of the bottom tab
\item Navigation of home screen
\item Navigation of map screen
\item Navigation of trip screen
\item Navigation of explore screen
\item Navigation of profile screen
\end{itemize}
\begin{verbatim}
@RunWith(AndroidJUnit4::class)
class NavigationTest : TestCase() {
@get:Rule
val composeTestRule = createAndroidComposeRule<MainActivity>()
@Test
fun navigation() {
for (i in 0..6) {
composeTestRule.onNodeWithTag("tab" + nextInt(0, 4)).performClick()
Thread.sleep(3000)
}
composeTestRule.onNodeWithTag("tab4").performClick()
assertNotNull(composeTestRule.onNode(hasText("Astro",true,true)))
composeTestRule.onAllNodesWithTag("positive").onFirst().performClick()
Thread.sleep(2000)
composeTestRule.onAllNodesWithTag("negative").onFirst().performClick()
composeTestRule.onNodeWithTag("tab3").performClick()
composeTestRule.onNodeWithTag("around").performClick()
Thread.sleep(3000)
getInstrumentation().runOnMainSync {
val activity =
ActivityLifecycleMonitorRegistry.getInstance().
getActivitiesInStage(
Stage.RESUMED
).first()
assertTrue(activity is AroundMeActivity)
// seems to not work in Debug test mode
// works in release
}
}
}
\end{verbatim}
\subsubsection{Map screen unit testing}
The tests done for map screen testing:
\begin{itemize}
\item Navigate to tab
\item Draw a polygon in the map tab
\item Retrieve results of the drawn polygon
\item Selection of the shown destination
\item Detail displaying of the destination
\end{itemize}
\begin{verbatim}
@RunWith(AndroidJUnit4::class)
class MapScreenTest : TestCase() {
@get:Rule
val composeTestRule = createAndroidComposeRule<MainActivity>()
@OptIn(ExperimentalTestApi::class)
@Test
fun checkSelection() {
composeTestRule.onNodeWithTag("tab1").performClick()
Thread.sleep(1000)
composeTestRule.onNodeWithTag("draw").performClick()
Thread.sleep(1000)
val device = UiDevice.getInstance(getInstrumentation())
// It requires to disable animations by phone settings
// + declare injection_event permission
// For the release, this requirements have been turned off
try {
composeTestRule.onRoot().performGesture {
down(Offset(50f,50f))
moveBy(Offset(500f,500f))
moveBy(Offset(500f,0f))
moveBy(Offset(-500f,500f))
}
composeTestRule.onNodeWithTag("draw").performClick()
}
catch (e: Exception) {
Log.e("TEST",e.localizedMessage)
}
val marker = device.findObject(UiSelector().descriptionContains("Venice"))
marker.click()
}
}
\end{verbatim}
\newpage
\subsection{Deep linking testing}
\hspace{\parindent}Deep linking testing has been done by providing the application with three different types of links: a fully correct link accepted by the application (schema, host, and attribute part are all accepted), a semi-correct link (schema and host are accepted, the attribute is not), and a correct link with a false attribute value.\\ \\
For testing purposes, the Android Debug Bridge (adb) and manual link clicking have been used. Here are the results of several tests done via adb:\\
\textbf{Test 1}
\begin{verbatim}
adb shell am start
>adb shell am start -W -a android.intent.action.VIEW
Starting: Intent { act=android.intent.action.VIEW }
Status: ok
LaunchState: COLD
Activity: android/com.android.internal.app.ResolverActivity
TotalTime: 457
WaitTime: 527
Complete
\end{verbatim}
\textbf{Test 2}
\begin{verbatim}
>adb shell am start -W -a
android.intent.action.VIEW -d "https://polaris.travel.app/find/tripID"
Starting: Intent { act=android.intent.action.VIEW dat=https://polaris.travel.app/... }
Status: ok
LaunchState: WARM
Activity: android/com.android.internal.app.ResolverActivity
TotalTime: 251
WaitTime: 258
Complete
\end{verbatim}
\textbf{Test 3}
\begin{verbatim}
>adb shell am start -W -a
android.intent.action.VIEW -d "https://polaris.travel.app/find/tripID"
Starting: Intent { act=android.intent.action.VIEW dat=https://polaris.travel.app/... }
Status: ok
LaunchState: COLD
Activity: web/android.default.webApp
TotalTime: 251
WaitTime: 258
\end{verbatim}
The tests with correct deep links opened the Polaris activity, while the link with the incorrect attribute was opened directly in the browser.
\newpage
\subsection{Failure test}
We have performed a failure test covering the app's behaviour in both online and offline mode. As the app supports both states, it is important that it works properly in offline mode even when there is no Internet connection, and that it restores a proper flow once the Internet connection is available again.\\
This is the following set of actions done for this test:
\begin{enumerate}
\item Shut down the server
\item Enter the app and navigate it with internet connection enabled
\item Check the app doesn't crash and gracefully alerts the user of the problem
\item Put the server up
\item Check everything is correctly received
\item Change endpoint
\item Redo points 2. and 3.
    \item Put the server in an inconsistent response state
\item Redo points 2. and 3.
\item Restore server in a consistent configuration
    \item Disable local connectivity (offline status)
\item Redo points 2. and 3.
\end{enumerate}
All of the tests regarding failure have been passed successfully.
\newpage
\subsection{Support of different devices}
\hspace{\parindent}The app has been tested on several different device types and screen sizes. The testing has been done either on real-world devices or on virtual devices, ranging from Android 8 to Android 11. The list of devices on which the app has been tested is the following:
\begin{itemize}
\item Samsung Galaxy S6
\item Samsung Galaxy A50
\item Samsung Galaxy A51
\item Google Pixel 3a
\item Samsung Galaxy Tablet S5 (Tablet)
\item Google Nexus 7 (Tablet)
\item Google Pixel C (Tablet)
\end{itemize}
Here are a few screenshot examples of the same screen working on devices of two different sizes (phone and tablet):
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\textwidth]{../Images/UI/LoginBig.jpg}
\caption{\label{fig:dbapiuser}\textbf{Login screen - Tablet}}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{../Images/UI/Login.jpg}
\caption{\label{fig:dbapiuser}\textbf{Login screen - Phone}}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\textwidth]{../Images/UI/HomeBig.jpg}
\caption{\label{fig:dbapiuser}\textbf{Home screen - Tablet}}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{../Images/UI/DestinationsMain.jpg}
\caption{\label{fig:dbapiuser}\textbf{Home screen - phone}}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\textwidth]{../Images/UI/ExploreBig.jpg}
\caption{\label{fig:dbapiuser}\textbf{Explore screen - Tablet}}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{../Images/UI/ExploreDark.jpg}
\caption{\label{fig:dbapiuser}\textbf{Explore screen - Phone}}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\textwidth]{../Images/UI/TripCreateBig.jpg}
\caption{\label{fig:dbapiuser}\textbf{Trip create screen - Tablet}}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{../Images/UI/TripCreateDark.jpg}
  \caption{\label{fig:dbapiuser}\textbf{Trip create screen - Phone}}
\end{minipage}
\end{figure}
\newpage
\documentclass[aps,prl,reprint]{revtex4-2}
\usepackage{gensymb}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{dsfont}
\usepackage{relsize}
\usepackage{wrapfig}
\usepackage{graphicx}
\usepackage{hyperref}
\hypersetup{colorlinks=true, citecolor=blue, urlcolor=blue, linkcolor=blue}
\begin{document}
% Use the \preprint command to place your local institutional report
% number in the upper righthand corner of the title page in preprint mode.
% Multiple \preprint commands are allowed.
% Use the 'preprintnumbers' class option to override journal defaults
% to display numbers if necessary
%\preprint{}
%Title of paper
\title{Fuel Cell Lab}
% repeat the \author .. \affiliation etc. as needed
% \email, \thanks, \homepage, \altaffiliation all apply to the current
% author. Explanatory text should go in the []'s, actual e-mail
% address or url should go in the {}'s for \email and \homepage.
% Please use the appropriate macro foreach each type of information
% \affiliation command applies to all authors since the last
% \affiliation command. The \affiliation command should follow the
% other information
% \affiliation can be followed by \email, \homepage, \thanks as well.
\author{Trevor Smith, Alex Storrer}
\email[]{[email protected]}
\homepage[]{https://github.com/trevorm4x/}
%\thanks{}
%\altaffiliation{}
\affiliation{Northeastern University, PHYS3600}
\date{\today}
\begin{abstract}
The efficiency of converting various forms of energy to and from
electrical energy were experimentally measured. The efficiency of a
Silicon-based Solar Cell was found to be $15\ \pm\ 2\ \%$, in agreement
with typical performance, which is around 17.5\%. The efficiency
of an Electrolyzer was found to
be $87\ \pm\ 6\ \%$, which is higher than typical performance of 80\%. The
efficiency of a Hydrogen Fuel Cell was found to be $49\ \pm\ 5\ \%$,
which is lower than typical performance of 60\%.
\end{abstract}
\maketitle
% body of paper here - Use proper section commands
% References should be done using the \cite, \ref, and \label commands
\section{Introduction}
Energy in various forms is of incredible relevance in the world today. In particular, electrical energy is the medium through which most forms of energy are transmitted before being used. In this lab we explore the translation of energy from electromagnetic radiation, or light, to electrical energy, to chemical energy, and back to electrical energy.
A photovoltaic cell is a diode that forces charge carriers, freed when a photon strikes an electron, through an electric circuit. Because of intrinsic properties of this process, efficiency is quite low, between the mid-teens and low twenties in percent.
An electrolyzer uses electricity to break the bonds between hydrogen and oxygen in water, producing oxygen and hydrogen gas. Hydrogen gas is of particular relevance as the output, since oxygen is abundant in the atmosphere to react with the hydrogen. A hydrogen fuel cell does exactly this, essentially serving as a battery after separating the electrons and protons within hydrogen.
\section{Apparatus}
The apparatus consisted of the following.
\begin{itemize}
\item Hydrogen fuel cell apparatus, H-Tec, Model T-126
\item Optical Power Meter, Thorlabs, Model PM100-121C
\item 2 DVMs; decade resistor (1-100M$\Omega$)
\item Strong light source (flood light); 2.0V/ 2A power supply
\end{itemize}
\section{Efficiency of a Photovoltaic Cell}
\subsection{Procedure}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{../Images/l1_PartA.jpg}
\caption{\label{figA}Electrical circuit for measuring photovoltaic (PV) output voltage and current through a variable resistor (R). This curve reasonably matches theory.}
\end{figure}
A photovoltaic cell was used to power a simple circuit as given by \ref{figA},
while converting energy from a strong light source.
In order to measure the efficiency of the PV cell, the incident power in the
form of light and the output power were both measured.
A ThorLabs power meter was used to measure the intensity of the light at each
individual PV cell, (intensity is given by power divided by area) and the
average intensity across the PV cell was multiplied by its measured area to
find the power input to the system in the form of light. \\
In such a system, in order to measure the output power it is necessary to match
the resistance of the circuit to some unknown internal resistance of the PV cell. As such, a variable resistor was used to test various resistances in a
logarithmic scale, and the peak output power was considered the optimal power
of the system.
\subsection{Results}
The intensity of light measured across the eight PV cells was found to be a
distribution of mean $28\ W/m^2$ and standard deviation $4\ W/m^2$. Multiplied
by the area of the PV cells, the total light power hitting the PV cell was
calculated at 0.22 Watts. \\
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{../Images/l1_a_1.png}
\caption{\label{I(V)A}Current as a function of Voltage as resistance was varied.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{../Images/l1_a_2.png}
\caption{\label{P(R)A}Power response of the circuit at various resistances.}
\end{figure}
The power response of the circuit was found to be a normal distribution when
viewed on a logarithmic scale, with a maximum at 70 $\Omega$ and 32.5 mW. \\
\newpage
\begin{equation}
\mathlarger{\eta_{PV}}=P_E/P_L
\label{eta_PV}
\end{equation}
Where $\eta_{PV}$ is the efficiency of the PV cell, $P_E$ is the electrical output power, and $P_L$ is the incident light power.\\
Finally, the efficiency of the cell was computed by \ref{eta_PV} to be $15\ \pm\ 2\%$.
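As a quick numerical check using the rounded values reported above (an approximation rather than a re-analysis of the raw data),
\begin{equation*}
\eta_{PV} = \frac{P_E}{P_L} \approx \frac{0.0325\,\mathrm{W}}{0.22\,\mathrm{W}} \approx 0.15.
\end{equation*}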
\subsection{Conclusions}
While there is a wide variety in efficiencies of PV cells \cite{Solar Cell},
this result falls inside that range on the lower end. However, the significant
uncertainty in this
calculation is worth discussing. Primarily, this uncertainty stems from the
wide distribution of light intensity hitting the cell at different points. In
future versions of this experiment, more sophisticated models for the intensity
mapping onto the PV cell could significantly increase the measurement precision.
This uncertainty was found to be much larger than the 3\% uncertainty in the
power meter itself. \\
\section{Electrolyzer}
\subsection{Procedure}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{../Images/l1_PartB.jpg}
\caption{\label{figB}Electrical circuit for measuring Electrolyzer input voltage and current.}
\end{figure}
An electrolyzer was supplied distilled water and electricity to produce a
given quantity of Hydrogen gas, where the current and voltage were measured
via the circuit given in \ref{figB}. The amount of time to produce a
target amount of Hydrogen was measured, with the total energy drawn by the
circuit and the total chemical energy produced by the electrolyzer compared
to compute the electrolyzer efficiency.
\subsection{Results}
\begin{equation}
PV = \frac{m}{M}RT
\label{pvnrt}
\end{equation}
Where P is pressure, V is volume, m is the mass of the gas, M is its molar mass, R is the ideal gas constant, and T is temperature.\\
\begin{equation}
E_{H_2} = m_{H_2} \times HHV
\label{HHV}
\end{equation}
Where E is energy, m is mass, and HHV is Higher Heat Value. \\
The total time to produce $5\ cm^3$ of Hydrogen gas was found to be 112
seconds, at a constant voltage and current of 1.892 V and 0.3215 A. Multiplying
these quantities ($E = VIt$), the energy consumed was found to be 68.4 joules.
The mass of the Hydrogen was calculated using \ref{pvnrt}, assuming ideal
conditions, and the chemical energy was estimated by multiplying by the accepted
value for the Higher Heat Value of hydrogen, 141.9 MJ/kg, via \ref{HHV}. With a mass
of $4.19 \cdot 10^{-7}\ kg$, the stored chemical energy was found to be 59
joules. Finally, the efficiency of the electrolyzer was found by
\ref{eta_EL} to be $87\ \pm\ 6\%$.
\begin{equation}
\mathlarger{\eta}_{EL}=E_{Hydrogen}/E_E
\label{eta_EL}
\end{equation}
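For reference, a back-of-the-envelope evaluation of these steps, assuming approximately standard lab conditions of $1\,\mathrm{atm}$ and $298\,\mathrm{K}$ (an assumption, since the exact laboratory temperature and pressure are not recorded here) and a hydrogen molar mass of $M \approx 2.02\cdot10^{-3}\,\mathrm{kg/mol}$, reproduces the figures above to within rounding:
\begin{align*}
m_{H_2} &= \frac{PVM}{RT} \approx \frac{(1.01\cdot10^{5}\,\mathrm{Pa})(5\cdot10^{-6}\,\mathrm{m^3})(2.02\cdot10^{-3}\,\mathrm{kg/mol})}{(8.314\,\mathrm{J/(mol\,K)})(298\,\mathrm{K})} \approx 4.1\cdot10^{-7}\,\mathrm{kg},\\
E_{H_2} &\approx (4.1\cdot10^{-7}\,\mathrm{kg})(141.9\,\mathrm{MJ/kg}) \approx 58\,\mathrm{J},
\qquad
\eta_{EL} \approx \frac{59\,\mathrm{J}}{68.4\,\mathrm{J}} \approx 0.86.
\end{align*}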
\subsection{Conclusions}
This efficiency is higher than usual industrial values for electrolyzer
efficiency given by \cite{Electrolyzer}. Assuming this particular electrolyzer
is not more efficient than most, this error most likely stems from several
measurements related to the volume of gas. Specifically:
\begin{itemize}
\item
The graduations on the reservoir are 1 $cm^3$, which is 1/5 of the
total volume we were measuring.
\item
The volume is used to trigger the timer, where discerning the
exact point the marking is reached is difficult.
\end{itemize}
\section{Hydrogen Fuel Cell}
\subsection{Procedure}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{../Images/l1_PartD.jpg}
\caption{\label{figD}Electrical circuit for measuring Fuel Cell output voltage and current.}
\end{figure}
A hydrogen fuel cell was supplied with hydrogen gas, in order to power a circuit
given by \ref{figD}. In the first stage of this phase of the experiment, an
optimal resistance was found by varying R using a decade resistor,
so as to match the internal resistance of the Fuel Cell. \\
The second stage of this phase of the experiment was to then use this optimal
resistance, and a few values on either side of it, to find how much electrical
energy was produced by consuming a given quantity of hydrogen gas. The
quantity of gas and the electrical power output over time were then compared to
compute the efficiency of the Fuel Cell.
\subsection{Results}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{../Images/l1_d_1_trial_b.png}
\caption{\label{I(V)D}Current as a function of Voltage as resistance was varied.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{../Images/l1_d_2_trial_b.png}
\caption{\label{P(R)D}Power response of the circuit at various resistances.}
\end{figure}
The optimal resistance in terms of the power response of the circuit
was found to be $15\ \Omega$, given the curve in fig. \ref{P(R)D};
however, there was a
delay after changing the resistance before the system responded in terms of
output power, which resulted in a difference between the optimal resistance
computed in this stage and the optimal resistance computed at a later stage.\\
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{../Images/l1_d_3.png}
\caption{\label{FC_n}Fuel Cell efficiency at several input powers, with varying resistance.}
\end{figure}
In the next stage of this phase of the experiment, different resistances were
used to convert what was intended to be a single given quantity of hydrogen gas
into electrical energy. However, due to time constraints some configurations of
the circuit were only timed to consume 3 $cm^3$ of hydrogen gas, instead of 5.\\
The quantity of gas consumed was converted to mass using \ref{pvnrt}, assuming
ideal conditions. This mass was then converted into energy using the accepted
Lower Heat Value of hydrogen as given by \ref{HHV}. The electrical energy
measured by E = tIV was then compared with the chemical energy in the hydrogen
by \ref{eta_FC}, and plotted in \ref{FC_n}.
\begin{equation}
\mathlarger{\eta}_{FC}=E_{FC}/E_{Hydrogen}
\label{eta_FC}
\end{equation}
The peak efficiency was found to be $49\ \pm\ 5\ \%$.
\subsection{Conclusions}
This efficiency is much lower than the accepted value given by \cite{Fuel Cell}.
As with the Electrolyzer experiment, the uncertainty in this experiment was in
large part due to the graduations on the gas reservoir. Another likely source of
error is our limited ability to find the optimal parameters for the circuit with
respect to the resistance. Given the calculated values for power and time, it is
likely that the optimal resistance would have been between 9 and 10 Ohms. Since
the decade resistor only allows resistances in steps of 1 Ohm,
and the difference between 9 and 10 Ohms meant a very small change in power but
a large change in the time required to consume a given quantity of hydrogen gas,
it is clear we did not measure the Fuel Cell capability under ideal conditions.
\section{Summary}
The efficiency of converting various forms of energy to and from
electrical energy were experimentally measured. \\
The efficiency of a
Silicon-based Solar Cell was found to be $15\ \pm\ 2\ \%$, in agreement
with typical performance, albeit lower than usual. The main source of error
was the spread of intensity across the solar cell, which could be improved
in future experiments.\\
The efficiency of an Electrolyzer was found to
be $87\ \pm\ 6\ \%$, which is higher than typical performance. The most
significant source of uncertainty in this and the Hydrogen Fuel Cell experiment
stemmed from the graduations on the gas reservoir, especially because there
are two measurements that stem from it. In all likelihood these errors may
cancel, but error propagation was not handled in that way. It may be that this
value was higher because of the difference between lab conditions and standard
conditions, resulting in lower density of gas and overpredicting of the mass
produced. \\
The efficiency of a Hydrogen Fuel Cell was found to be $49\ \pm\ 5\ \%$,
which is lower than typical performance. This again was likely in large part
due to the graduations on the gas reservoir, but also could be due to a
limitation in our ability to optimize the circuit w.r.t. resistance.
However, the same difference between ideal conditions and lab conditions
that led to a high result above would lead to a low measured efficiency in this
experiment, as less gas would actually be consumed than expected, and as
such the cell was actually using less energy than measured.
\begin{widetext}
\begin{center}
\begin{table}[h]
\renewcommand{\arraystretch}{1.35}
\setlength{\tabcolsep}{10pt}
\caption{\label{}Measured and accepted efficiencies of the photovoltaic cell, electrolyzer, and hydrogen fuel cell.}
\begin{tabular}{|c|c|c|c|c|}
%\hline
\toprule
Apparatus & $\eta$ (\%) & Accepted $\eta$ value & Refs. & Deviation \\
\colrule
Photovoltaic Cell & $15 \pm 2$ & $17 \pm 2.5$ & \cite{Solar Cell} & $0\sigma$ \\
\colrule
Elecrolyzer & $87 \pm 6$ & 80 & \cite{Electrolyzer} & $2\sigma$ \\
\colrule
Hydrogen Fuel Cell & $49 \pm 5$ & 60 & \cite{Fuel Cell} & $-3\sigma$ \\
%\hline
\botrule
\end{tabular}
\end{table}
\end{center}
\end{widetext}
\begin{thebibliography}{9}
%
\bibitem{HHV}
Wikipedia, Heat of Combustion: \\
\href{https://en.wikipedia.org/wiki/Heat_of_combustion}{https://www.wikepedia.com}
%
\bibitem{Solar Cell}
Energysage, Most Efficient Solar Panels\\
\href{https://news.energysage.com/what-are-the-most-efficient-solar-panels-on-the-market/#:~:text=How%20efficient%20are%20solar%20panels,are%20not%20above%2020%25%20efficiency.}{https://www.energysage.com/}
%
\bibitem{Electrolyzer}
Carbon Commentary, Hydrogen made by Electolysis\\
\href{https://www.carboncommentary.com/blog/2017/7/5/hydrogen-made-by-the-electrolysis-of-water-is-now-cost-competitive-and-gives-us-another-building-block-for-the-low-carbon-economy}{https://www.carboncommentary.com}
%
\bibitem{Fuel Cell}
Energy.gov, Fuel Cell Fact Sheet\\
\href{https://www.energy.gov/sites/prod/files/2015/11/f27/fcto_fuel_cells_fact_sheet.pdf}{https://www.energy.gov}
\end{thebibliography}
\end{document}
%
% ****** End of file apstemplate.tex ******
\newcommand\mcase[1]{\noindent\textbf{Case }#1\\\noindent}
\subsection{Soundness}
\begin{lemma}
\label{lemma:constrsubst}
Given constraints $C$ and $D$ and substitution $\unif$, if $\entail{D}{C}$
then $\entail{\unif D}{\unif C}$.
\begin{proof}
By induction over the entailment judgment.
\end{proof}
\end{lemma}
\begin{lemma}
\label{lemma:constrimply}
Given a typing derivation $\inferS{C}{\E}{e}{\tau}$ and
a constraint $D \in \mathcal S$ in solved form such that $\entail{D}{C}$, then
$\inferS{D}{\E}{e}{\tau}$
\begin{proof}
By induction over the typing derivation
\end{proof}
\end{lemma}
\begin{lemma}
\label{lemma:typ:weakening}
Given a type environment $\E$, $\E' \subset \E$, a term $e$ and a variable $x\in\E$,\\
if $\inferS{C}{\E'}{e}{\tau}$
then $\inferS{C\Cand \Weaken_{x}(\E')}{\E';\bvar{x}{\E(x)}}{e}{\tau}$
\begin{proof}
Trivial if $x \in \Sv$. Otherwise, by induction over the typing derivation.
\end{proof}
\end{lemma}
% We say that a use map $\Sv$ and an environment $\E$ are equivalent,
% written as $\Sv \equivC \E$, if, for
% any kind $k$, $\Cleq{\Sv}{k} \equivC \Cleq{\E|_{dom(\Sv)}}{k}$.
We define the flattening $\Eflat\E$ of an environment $\E$, as the environment
where all the binders are replaced by normal ones. More formally:
\begin{align*}
\Eflat\E
=& \left\{ \bvar x \schm \mid \bvar{x}{\schm} \in\E
\vee \bbvar {x}{k}{\schm}\in\E
\vee \svar{x}{\schm}^n\in\E
\right\}\\
&\cup \left\{ \bvar{\tvar}{k} \mid \bvar{\tvar}{k}\in\E \right\}
\end{align*}
\begin{lemma}
\label{lemma:env:flat}
Given a type environment $\E$ and a term $e$ such
that $\inferW{\Sv}{(C,\unif)}{\E}{e}{\tau}$
then $\Eflat\Sv \subset \E$.
\begin{proof}
By induction over the typing derivation.
\end{proof}
\end{lemma}
% \begin{lemma}
% \label{lemma:equivsplit}
% Given a type environment $\E$ and
% use maps $\Sv_1$, $\Sv_2$,
% if $\bsplit{C}{\Sv}{\Sv_1}{\Sv_2}$, then
% there exists $\E_1$, $\E_2$ such that $\Sv_1 \equivC \E_1$,
% $\Sv_2 \equivC \E_2$,
% $\Sv \equivC \E$, $\bsplit{C'}{\E}{\E_1}{\E_2}$ and $C\equivC C'$.
% \begin{proof}
% TODO
% \end{proof}
% \end{lemma}
\begin{theorem}[Soundness of inference]
  Given a type environment $\E$ containing only value bindings,
  an environment $\E_\tau$ containing only type bindings, and a term $e$,\\
if $\inferW{\Sv}{(C,\unif)}{\E;\E_\tau}{e}{\tau}$
then $\inferS{C}{\unif(\Sv;\E_\tau)}{e}{\tau}$, $\unif C = C$ and $\unif \tau = \tau$
\begin{proof}
We proceed by induction over the derivation of $\vdash_w$.
    Most of the cases follow the proofs from \hmx closely.
    For brevity, we showcase three rules: the treatment of binders
    and weakening, where our inference algorithm differs significantly
    from the syntax-directed rules, and the $Pair$ case,
    which showcases the treatment of environment splitting.
\mcase{$\ruleIVar$}
We have $\Sv = \Sone{x}{\sigma}$.
Without loss of generality, we can consider $\unif_x = \unif'|_{\fv{\E}} = \unif'|_{\fv{\sigma}}$.
Since $\Sv\Sdel{x}$ is empty and by definition of normalize, we
have that
$\entail C \unif'(C_x) \Cand \Cleq{\unif_x\Sv\Sdel{x}}{\kaff_\infty}$,
$\unif' \leq \unif$ and $\unif'C = C$.
By definition, $\unif_x\unif' = \unif'$.
By rule $Var$, we obtain
$\inferS{C}{\unif_x(\Sv;\E_\tau)}{x}{\unif_x\unif'\tau}$, which concludes.
\\
\mcase{$\ruleIAbs$}
By induction, we have
$\inferS{C'}{\Sv_x;\E_\tau}{e}{\tau}$, $\unif C = C$
and $\unif \tau = \tau$.\\
By definition of normalize and \cref{lemma:constrsubst}, we have
$\entail{C}{C'\Cand\Cleq{\unif'\Sv}{\unif'\kvar} \Cand \Weaken_{x}(\unif'\Sv_x)}$ and
$\unif \leq \unif'$.
By \cref{lemma:constrsubst}, we have $\entail{C}{\Cleq{\Sv}{\unif\kvar}}$.
We now consider two cases:
\begin{enumerate}[leftmargin=*,noitemsep,topsep=5pt]
\item If $x\in\Sv_x$, then $\Weaken_{x}(\unif\Sv_x) = \Ctrue$
and by \cref{lemma:env:flat}, $\Sv_x = \Sv;\bvar{x}{\alpha}$.
We can deduce\\
$\inferS{C'\Cand \Weaken_{\bvar{x}{\tvar}}(\unif\Sv_x)}{\unif\Sv;\E_\tau;\bvar{x}{\unif(\tvar)}}{e}{\tau}$.
\item If $x\notin\Sv_x$, then $\Sv = \Sv_x$ and
$\Weaken_{\bvar{x}{\tvar}}(\unif\Sv_x) = \Cleq{\unif\tvar}{\kaff_\infty}$.
By \cref{lemma:typ:weakening},
we have\\
$\inferS{C'\Cand \Weaken_{\bvar{x}{\tvar}}(\unif\Sv_x)}{\unif\Sv;\E_\tau;\bvar{x}{\unif(\tvar)}}{e}{\tau}$
\end{enumerate}
% By \cref{lemma:env:flat}, we know that if $x\in\Sv_x$, then
% By \cref{lemma:typ:weakening,}, we deduce
% that
% $\inferS{C'\Cand \Weaken_{x}(\unif\Sv_x)}{\unif\Sv;\bvar{x}{\unif(\tvar)}}{e}{\tau}$.
By \cref{lemma:constrimply}, we
have $\inferS{C}{\unif(\Sv;\E_\tau);\bvar{x}{\unif(\tvar)}}{e}{\tau}$.\\
By rule $Abs$, we obtain
$\inferS{C}{\unif(\Sv;\E_\tau)}{\lam{x}{e}}{\unif(\tvar)\tarr{\unif(\kvar)}\tau}$ which concludes.
\\
% $\ruleSDLam$
% \mcase{$\ruleIApp$}
% By induction, we have
% $\inferS{C_1}{\unif_1\Sv_1}{e_1}{\tau_1}$, $\unif_1 C_1 = C_1$,
% and $\unif_1 \tau_1 = \tau_1$
% and
% $\inferS{C_2}{\unif_2\Sv_2}{e_2}{\tau_2}$, $\unif_2 C_2 = C_2$
% and $\unif_2 \tau_2 = \tau_2$.
% %
% By normalization, $\entail{C}{D}$, $\unif \leq \unif'$ and $\unif C = C$.
% By \cref{lemma:constrimply} and by substitution, we have
% $\inferS{C}{\unif\Sv_1}{e_1}{\unif\tau_1}$ and
% $\inferS{C}{\unif\Sv_2}{e_2}{\unif\tau_2}$.
% We directly have that
% $\bsplit{\unif C_s}{\unif \Sv}{\Sv_1}{\unif\Sv_2}$
% and by \cref{lemma:constrimply}, $\entail{\unif C}{\unif C_s}$.
% %
% Finally, since the constraint $\Cleq{\tau_1}{\tau_2\tarr{\kvar}\tvar}$
% has been solved by normalization,
% we have $\unif'\tau_1 = \unif\tau_2\tarr{k}\unif\tvar$ for some kind $k$.
% By rule $App$, we obtain
% $\inferS{C}
% {\unif\Sv}{\app{e_1}{e_2}}{\unif(\tvar)}$,
% which concludes.
% \\
% \mcase{$\ruleILet$}
% By induction, we have
% $\inferS{C_1}{\unif_1\Sv_1}{e_1}{\tau_1}$, $\unif_1 C_1 = C_1$,
% and $\unif_1 \tau_1 = \tau_1$
% and
% $\inferS{C_2}{\unif_2\Sv_2}{e_2}{\tau_2}$, $\unif_2 C_2 = C_2$
% and $\unif_2 \tau_2 = \tau_2$.
% %
% By definition of $\text{gen}$, $\entail{C_\schm}{C_1}$.
% %
% By normalization, $\entail{C}{D}$, $\unif \leq \unif'$ and $\unif C = C$.
% By \cref{lemma:constrimply} and by substitution, we have
% $\inferS{C}{\unif\Sv_1}{e_1}{\unif\tau_1}$ and
% $\inferS{C}{\unif\Sv_2}{e_2}{\unif\tau_2}$.
% %
% \TODO{}
% $
% \inferrule[Let]
% { \inferS{C \Cand D}{\E_1}{e_1}{\tau_1} \\
% (C_\schm,\schm) = \generalize{D}{\E}{\tau_1}\\
% \entail{C}{C_\schm} \\
% \inferS{C}{\E;\bvar{x}{\sigma}}{e_2}{\tau_2} \\
% \addlin{\lsplit{C}{\E}{\E_1}{\E_2}}\\
% }
% { \inferS{C}
% {\E}{\letin{x}{e_1}{e_2}}{\tau_2} }$
\mcase{$\ruleIPair$}
By induction, we have
$\inferS{C_1}{\unif_1(\Sv_1;\E^1_\tau)}{e_1}{\tau_1}$, $\unif_1 C_1 = C_1$,
and $\unif_1 \tau_1 = \tau_1$
and
$\inferS{C_2}{\unif_2(\Sv_2;\E^2_\tau)}{e_2}{\tau_2}$, $\unif_2 C_2 = C_2$
and $\unif_2 \tau_2 = \tau_2$.
%
    Wlog, we can rename the type environments $\E^1_\tau$ and $\E^2_\tau$ to be disjoint
and define $\E_\tau = \E^1_\tau \cup \E^2_\tau$.
By normalization, $\entail{C}{D}$, $\unif \leq \unif'$ and $\unif C = C$.
By \cref{lemma:constrimply} and by substitution, we have
$\inferS{C}{\unif\Sv_1}{e_1}{\unif\tau_1}$ and
$\inferS{C}{\unif\Sv_2}{e_2}{\unif\tau_2}$.
We directly have that
$\bsplit{\unif C_s}{\unif \Sv}{\Sv_1}{\unif\Sv_2}$
and by \cref{lemma:constrimply}, $\entail{\unif C}{\unif C_s}$.
%
By rule $Pair$, we obtain
$\inferS{C}
    {\unif(\Sv;\E_\tau)}{\app{e_1}{e_2}}{\unif(\tyPair{\tvar_1}{\tvar_2})}$,
which concludes.
\\
% \mcase{$\ruleIMatch$}
% \mcase{$\ruleIBorrow$}
% We trivially have
% $\bvar{\borrow x}{\borrowty{\unif(k)}{\unif(\tau)}} \in \unif\Sv$.
% By induction, we have $\unif C = C$ and $\unif(\tau) = \tau$.
% Since $\kvar$ is new, $\unif(\kvar) = \kvar$.
% By rule $Borrow$, we obtain
% $\inferS{C}{\unif\Sv}{\borrow{x}}{\borrowty{\kvar}{\tau}}$,
% which concludes.
% \\
% \mcase{$\ruleIRegion$}
\end{proof}
\end{theorem}
\subsection{Completeness}
We now state our algorithm is complete: for any given
syntax-directed typing derivation, our inference algorithm can find
a derivation that gives a type at least as general.
For this, we need first to provide a few additional definitions.
\begin{definition}[More general unifier]
Given a set of variable $U$ and $\unif$, $\unif'$ and $\phi$
substitutions. \\
Then
$\unif \leq^{\phi}_{U} \unif'$ iff $(\phi \circ \unif)|_{U} = \unif'|_U$.
\end{definition}
\begin{definition}[Instance relation]
  Given a constraint $C$ and two schemes
$\schm = \forall \Multi\tvar. \qual{D}{\tau}$ and
$\schm' = \forall \Multi\tvar'. \qual{D'}{\tau'} $.
Then $\entail{C}{\schm \preceq \schm'}$
iff $\entail{C}{\subst{\tvar}{\tau''} D}$
and $\entail{C\Cand D'}{\Cleq{\subst{\tvar}{\tau''}\tau}{\tau'}}$
\end{definition}
We also extend the instance relation to environments $\E$.
We now describe the interactions between splitting and the
various other operations.
\begin{lemma}
\label{split:flat}
Given $\bsplit{C}{\E}{\E_1}{\E_2}$, Then $\Eflat\E = \Eflat\E_1 \cup \Eflat\E_2$.
\begin{proof}
By induction over the splitting derivation.
\end{proof}
\end{lemma}
\begin{lemma}
\label{split:gen}
Given $\lsplit{C}{\E_1}{\E_2}{\E_3}$,
$\bsplit{C'}{\E'_1}{\E'_2}{\E'_3}$
and $\unif$ such that
$\E'_i\subset\E''_i$ and $\entail{}{\unif\E''_i \preceq \E_i}$
for $i\in\{1;2;3\}$.
Then $\entail{C}{\unif C'}$.
\begin{proof}
By induction over the derivation of $\bsplit{C'}{\E'_1}{\E'_2}{\E'_3}$.
\end{proof}
\end{lemma}
We can arbitrarily extend the initial typing environment in an inference
derivation, since
it is not used to check linearity.
\begin{lemma}
\label{infer:extend}
Given $\inferW{\Sv}{(C,\unif)}{\Eflat\E}{e}{\tau}$ and $\E'$ such
that $\E \subseteq \E'$, then $\inferW{\Sv}{(C,\unif)}{\Eflat\E'}{e}{\tau}$
\begin{proof}
By induction over the type inference derivation.
\end{proof}
\end{lemma}
Finally, we present the completeness theorem.
\begin{theorem}[Completeness]
Given $\inferS{C'}{\E'}{e}{\tau'}$ and
$\entail{}{\unif'\E \preceq \E'}$.
Then $$\inferW{\Sv}{(C,\unif)}{\Eflat\E}{e}{\tau}$$
for some environment $\Sv$,
substitution $\unif$, constraint $C$ and type $\tau$ such
that
\begin{align*}
\unif &\leq^{\phi}_{\fv{\E}} \unif'
&\entail{C'&}{\phi C}
&\entail{&}{\phi \schm \preceq \schm'}
&\Sv&\subset\E
% &( C, \schm, \unif) &\leq (C',\schm',\unif')
\end{align*}
where $\schm' = \generalize{C'}{\E'}{\tau'}$
and $\schm = \generalize{C}{\E}{\tau}$
\end{theorem}
\begin{proof}
  Most of the difficulty of this proof comes from proper handling of
  instantiation and generalization for type-schemes.
  This part is already proven
  by \citet{sulzmann1997proofs} in the context
  of \hmx. As before, we only present a few cases
  which highlight the handling of bindings and environments.
  For clarity, we only present the parts of the proof that
  directly relate to the new aspects introduced by \lang.
  %
\\
% \mcase{$\ruleSDVar$}
% $\ruleIVar$
\mcase{$
\inferrule[Abs]
{
\inferS{C'}
{\E'_x;\bvar{x}{\tau'_2}}{e}{\tau'_1} \\
\addlin{\entail{C}{\Cleq{\E'_x}{k}}}
}
{ \inferS{C'}{\E'_x}
{\lam[k]{x}{e}}{\tau'_2\tarr{k}\tau'_1} }
$ and $\entail{}{\unif'\E \preceq \E'}$.}
Let us pick $\tvar$ and $\kvar$ fresh.
Wlog, we choose $\unif'(\tvar) = \tau_2$ and $\unif'(\kvar) = k$
so that $\entail{}{\unif'\E_x \preceq \E'_x}$.
By induction:
\begin{align*}
\inferW{\Sv_x}{(C,\unif)}{\Eflat\E_x;\bvar{x}{\tvar}&}{e}{\tau}
&\unif &\leq^{\phi}_{\fv{\E_x}\cup\{\tvar;\kvar\}} \unif'
\end{align*}
\begin{align*}
\entail{C'&}{\phi C}
&\entail{&}{\phi \schm \preceq \schm'}
&\Sv_x&\subset\E_x;\bvar{x}{\tvar}
\end{align*}
\begin{align*}
\schm' &= \generalize{C'}{\E_x';\bvar{x}{\tau'_2}}{\tau'_1}
&\schm &= \generalize{C}{\E_x;\bvar{x}{\tvar}}{\tau_1}
\end{align*}
  Let $C_a = C\Cand \Cleq{\Sv}{\kvar} \Cand \Weaken_{\bvar{x}{\tvar}}(\Sv_x)$.
  By definition, $\unif_D\Sdel{\alpha;\kvar} \leq^{\phi_D}_{\fv{\E_x}} \unif$
which means we have
$\unif_D\Sdel{\alpha;\kvar} \leq^{\phi\circ\phi_D}_{\fv{\E_x}} \unif'$.
We also have that $\Sv_x\Sdel{x}\subset \E_x$.
Since $\entail{C}{\Cleq{\E'_x}{k}}$, we have $\entail{C}{\unif'\Cleq{\Sv}{\kvar}}$.
If $x\in\Sv_x$, then $\Weaken_{\bvar{x}{\tvar}}(\Sv_x) = \Ctrue$.
Otherwise we can show by induction
that $\entail{C'}{\unif'\Weaken_{\bvar{x}{\tvar}}(\Sv_x)}$.
  We also have $\unif C = C$, which gives us $\entail{C'}{\unif'(C_a)}$.\\
This means $(C',\unif')$ is a normal form of $C_a$, so a principal normal form
exists. Let $(D,\unif_D) = \normalize{C_a}{\unif\Sdel{\alpha;\kvar}}$.
By the property of principal normal forms,
we have $\entail{C'}{\rho D}$ and
$\unif_D \leq^{\rho}_{\fv{\E_x}} \unif'$.
By application of {\sc Abs$_I$}, we have
$\inferW{\Sv_x\Sdel{x}}{(C,\unif_D\Sdel{\tvar,\kvar})}{\Eflat\E_x}
{\lam{x}{e}}{\unif_D(\tvar)\tarr{\unif_D(\kvar)}\tau_1}$.\\
The rest of the proof proceeds as in the original \hmx proof.
\\
% $\ruleIAbs$
\mcase{$
\inferrule[Pair]
{
\inferS{C'}{\E'_1}{e_1}{\tau'_1} \\
\inferS{C'}{\E'_2}{e_2}{\tau'_1} \\
\lsplit{C}{\E'}{\E'_1}{\E'_2}
}
{ \inferS{C'}
{\E'}{\app{e_1}{e_2}}{\tyPair{\tau'_1}{\tau'_2}} }
$\\
and $\entail{}{\unif'\E \preceq \E'}$}
The only new elements compared to \hmx is
the environment splitting.
By induction:
\begin{align*}
\inferW{\Sv_1}{(C_1,\unif_1)}{\Eflat\E_1&}{e}{\tau_1}
&\unif_1 &\leq^{\phi_2}_{\fv{\E}} \unif'_1
&\entail{C'&}{\phi C_1}
&\entail{&}{\phi_2 \schm_1 \preceq \schm'_1}
\end{align*}
\begin{align*}
\Sv_1&\subset\E_1
&\schm'_1 &= \generalize{C'}{\E'_1}{\tau'_1}
&\schm_1 &= \generalize{C}{\E_1}{\tau_1}
\end{align*}
and
\begin{align*}
\inferW{\Sv_2}{(C_2,\unif_2)}{\Eflat\E_2&}{e}{\tau_2}
&\unif_2 &\leq^{\phi_2}_{\fv{\E}} \unif'_2
&\entail{C'&}{\phi C_2}
\end{align*}
\begin{align*}
\entail{&}{\phi_2 \schm_2 \preceq \schm'_2}
&\Sv_2&\subset\E_2
\end{align*}
\begin{align*}
\schm'_2 &= \generalize{C'}{\E'_2}{\tau'_2}
&\schm_2 &= \generalize{C}{\E_2}{\tau_2}
\end{align*}
By \cref{split:flat,infer:extend}, we have
\begin{align*}
\inferW{\Sv_1}{(C_1,\unif_1)}{\Eflat\E&}{e}{\tau_1}
&\inferW{\Sv_2}{(C_2,\unif_2)}{\Eflat\E&}{e}{\tau_2}
\end{align*}
Let $\bsplit{C_s}{\Sv}{\Sv_1}{\Sv_2}$.
We know that $\entail{}{\unif'\E \preceq \E'}$,
$\entail{}{\unif'_i\E_i \preceq \E'_i}$ and $\Sv_i\subset\E_i$.
By \cref{split:gen},
we have $\entail{C}{\unif' C_s}$.
The rest of the proof follows \hmx.
\end{proof}
\begin{corollary}[Principality]
Let $\inferS{\Ctrue}{\E}{e}{\schm}$ a closed typing judgment.\\
Then $\inferW{\Sv}{(C,\unif)}{\Eflat\E}{e}{\tau}$
such that:
\begin{align*}
(\Ctrue,\schm_o) &= \generalize{C}{\unif\E}{\tau}
&\entail{&}{\schm_o \preceq \schm}
% &( C, \schm, \unif) &\leq (C',\schm',\unif')
\end{align*}
\end{corollary}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../main"
%%% End:
% !TeX spellcheck = en_GB
\section{Verification}
In this section we present the tests performed in order to verify that our implementation correctly reflects our model.
\subsection{Deterministic and Simple Cases}
\hspace{\parindent}In these scenarios we test the behaviour of the system in \textbf{simple and completely deterministic cases} in order to verify the model. In all of these tests the behaviour is the one expected, and it has been verified using the graphical environment (Qtenv) and debugging prints.
\subsection{Degeneracy Tests}
In the \textbf{degeneracy tests} we verify the behaviour of our simulator with parameters set to 0. In all tests the simulator works properly. In particular, the following observations can be made:
\begin{itemize}
	\item If the \textbf{number of channels} \textbf{C} is 0, then the simulation stops immediately, because it makes no sense to run a simulation with 0 channels.
	\item If the \textbf{time-slot size} $T_{slot}$ is 0, then the simulation never terminates, because at time 0 time slots are triggered continuously.
	\item If the exponential \textbf{mean inter-arrival time} $\frac{1}{\lambda}$ is 0, then the simulation never terminates, because packets arrive continuously at time 0.
\end{itemize}
\subsection{Consistency Test}
The \textbf{consistency test} verifies that the system reacts consistently to the input. In order to test this we have performed different tests; here is an example with the following parameters:
\paragraph{Test 1}
\begin{itemize}
	\item \textbf{N} (couples tx-rx): \textbf{1}; \textbf{C} (channels): \textbf{500}; \textbf{p} (send probability): \textbf{1}; $\dfrac{1}{\lambda}$ (mean inter-arrival time): \textbf{10s} (deterministic); time slot size: \textbf{5s};
\end{itemize}
\paragraph{Test 2: Two couple TX-RX with half packets arrival rate}
\begin{itemize}
\item \textbf{N: 2}; \textbf{C: 500}; \textbf{p: 1}; $\dfrac{1}{\lambda}$ = \textbf{20s} (deterministic); \textbf{$T_{slot}$: 5s}
\end{itemize}
\noindent We \textbf{expect the results of the two tests to be more or less equal}, because the behaviour of one source transmitting every 10 seconds must be similar to the behaviour of two sources each transmitting every 20 seconds. We set the \textbf{number of channels to 500 in order to make the effect of collisions negligible} (it is still possible for a collision to occur, but in the following tests no collisions were detected).
\noindent The graphs in figure \ref{img: consistencyTest1a} and figure \ref{img: consistencyTest1b} show the results of the tests described above. We can see that the behaviour is very similar in both cases, so the system works as we expect. In fact, the Mean Throughput per slot tends in both cases to \textbf{0.50 packets per slot}.
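\noindent This value agrees with a quick sanity check: with one packet generated on average every $10\,\mathrm{s}$ and a slot duration of $5\,\mathrm{s}$, the expected long-run throughput is $\lambda \cdot T_{slot} = 5\,\mathrm{s}/10\,\mathrm{s} = 0.5$ packets per slot, and likewise $2 \cdot (5\,\mathrm{s}/20\,\mathrm{s}) = 0.5$ packets per slot for the two-source configuration of the second test.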
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/consistencyTest1aWithAxis.png}
\caption{Consistency Test 1}
\label {img: consistencyTest1a}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/consistencyTest1bWithAxis.png}
\caption{Consistency Test 2}
\label {img: consistencyTest1b}
\end{figure}
%\noindent \colorbox{yellow}{The oscillations at the beginning} are due to the fact that the mean inter-arrival time is bigger with respect to the time slot size. In fact we can observe that in the second test there are larger oscillations because the difference between the mean inter-arrival time and the time slot size is bigger than the one of the first test.
%\noindent \colorbox{yellow}{Now let analyse the \textbf{response time}}, we can see its measure in both tests in figure \ref{img: consistencyTest1a_responsetime} and \ref{img: consistencyTest1b_responsetime}.
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.9\textwidth]{img/consistencytest1a_responsetime.png}
% \caption{\colorbox{yellow}{Consistency Test 2} - Response Time}
% \label {img: consistencyTest1a_responsetime}
%\end{figure}
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.9\textwidth]{img/consistencytest1b_responsetime.png}
% \caption{\colorbox{yellow}{Consistency Test 2} - Response Time}
% \label {img: consistencyTest1b_responsetime}
%\end{figure}
%\noindent \colorbox{yellow}{The \textbf{mean response time in both cases is 5 seconds}}, this is a \textbf{lower bound} for the response time because in both tests 5 seconds is the size of the slot time so it's impossible to have a response time lower than the slot size.
%\noindent \colorbox{yellow}{We can see that in the second test we have a less continue plot}, this is due to the fact that the mean inter-arrival time is bigger in the second case and so the packet are more distant in time.
%\noindent \colorbox{yellow}{We can conclude that also for what concerns the response time} the consistency is ensured because it is the same in the case in which we have a source transmitting each 10 seconds and in the case in which we have two sources transmitting each 20 seconds.
\subsection{Continuity Test}
The aim of this test is to show that the output changes only slightly when the input changes slightly. To verify this, several tests were made; as an example, two simulations were carried out with the parameters shown in table \ref{tab: continuity test}. With these parameters we \textbf{change the input slightly}, so we \textbf{expect the outputs not to show significant differences}. Some differences will obviously be present (in particular due to collisions and the increase in the number of transmitters), but they should not affect the results much.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|ll}
\cline{1-3}
{\textbf{Test}} & { \textbf{N (Couples Tx-Rx)}} & { \textbf{C (Channels)}} & & \\ \cline{1-3}
1 & 8 & 20 & & \\ \cline{1-3}
2 & 10 & 20 & & \\ \cline{1-3}
\end{tabular}
\caption{Continuity test parameters}
\label{tab: continuity test}
\end{table}
\noindent The remaining parameters are the same for all tests:
\begin{itemize}
\item Send probability \textbf{p}: \textbf{1}
\item Mean Inter-arrival time $\dfrac{1}{\lambda}$: \textbf{10s} (deterministic)
\item $T_{slot}$: \textbf{5s}
\end{itemize}
\noindent The results of the simulations are shown in figures \ref{img: continuityTest1a} and \ref{img: continuityTest1b} (we measure the mean throughput per slot).
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{img/continuityTest1a.png}
\caption{Continuity Test 1 - Mean Throughput per Slot - Collisions detected: 272}
\label {img: continuityTest1a}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{img/continuityTest1b.png}
\caption{Continuity Test 2 - Mean Throughput per Slot - Collisions detected: 390}
\label {img: continuityTest1b}
\end{figure}
\noindent We can observe that the output changes only slightly between the two cases. In particular, it is possible to infer that if the number of transmitters increases, the throughput increases too (this statement will be supported by the experiments), but this increase is dampened by collisions: with a fixed number of channels, the more transmitters there are, the more collisions occur.
%\noindent \colorbox{yellow}{ we can see that in the first case} the mean throughput is settled to a value about 3.6 and in the second case about 4. So changing slightly the input changes slightly the output.
%\noindent \colorbox{yellow}{If we analyze the response time} we can see that the difference is bigger between the two cases because the response time is sensible to the variation of transmitters and channels, and this is something that must be taken into account. In fact, as we have seen previously, there are a lot of collisions in the second case and this is the main reason for the increasing of response time in the second test w.r.t the first one.
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.8\textwidth]{img/continuityTest1a_responsetime.png}
% \caption{\colorbox{yellow}{Continuity Test 1} - Response Time - Collisions detected: 272}
% \label {img: continuityTest1a_responsetime}
%\end{figure}
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.8\textwidth]{img/continuityTest1b_responsetime.png}
% \caption{\colorbox{yellow}{Continuity Test 2} - Response Time - Collisions detected: 390}
% \label {img: continuityTest1b_responsetime}
%\end{figure}
%\noindent \colorbox{yellow}{It's possible to see the response time} measured in the two tests in the figure \ref{img: continuityTest1a_responsetime} and \ref{img: continuityTest1b_responsetime}. In any case we can see that the continuity test is correct because the system works as we expect: channels fixed, the more the transmitters, the more the collisions, the more the response time.
\paragraph{Monotonicity Assessment:}
\noindent In addition to this, some simulations were run in order to \textbf{assess the monotonicity} of some KPIs by changing some factors; here are some examples:
\begin{itemize}
\item \textbf{Mean Throughput}: by \textbf{increasing N} (the number of tx-rx couples), with a high number of channels to avoid collisions, an \textbf{increase in the mean throughput is expected}. The following results were obtained:
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/continuityTest_Throughput_TXRX_Varying.png}
\caption{Continuity Test - Increasing Number of TX-RX (Main factors: \textbf{N} = 1, 2, 5, 10, 20, 100; \textbf{C} = 20000; $\dfrac{1}{\lambda}$ = 20 ms; $T_{slot}$ = 5ms; $p$ = 1)}
\label {img: continuityTestThTXRX}
\end{figure}
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.9\textwidth]{img/continuityTest_Throughput_IntTimeVarying.png}
% \caption{\colorbox{yellow}{Continuity Test} - Increasing Mean Inter-arrival Time (Main factors: \textbf{N} = 2; \textbf{C} = 20000; $\dfrac{1}{\lambda}$ = 5ms, 10ms, 20ms, 50ms, 100ms, 500ms; $T_{slot}$ = 5ms; $p$ = 1)}
% \label {img: continuityTestThLambda}
%\end{figure}
\item \textbf{Mean Response Time}: by \textbf{increasing N} (the number of tx-rx couples), with a low number of channels, an \textbf{increase in the mean response time is expected}. The following results were obtained:
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/ContinuityTest_ResponseTIme_TXRXVarying}
\caption{Continuity Test Mean Response Time- Increasing Number of TX-RX}
\label {img: continuityTestTXRXResponse}
\end{figure}
%\begin{figure}[H]
% \centering
% \includegraphics[width=0.9\textwidth]{img/ContinuityTest_ResponseTIme_VaryingP.png}
% \caption{\colorbox{yellow}{Continuity Test Mean Response Time} - Increasing Sending Probability P(Main factors: \textbf{N} = 5; \textbf{C} = 4; $\dfrac{1}{\lambda}$ = 200ms; $T_{slot}$ = 5ms; $p$ = 0.2, 0.4, 0.6, 0.8, 1)}
% \label {img: continuityTestResponseLambda}
%\end{figure}
\end{itemize}
\textbf{For the mean response time we also checked that the steady state is reached}, by plotting the corresponding vector and verifying that it stabilises at some point (this was done in general before drawing conclusions with this KPI). Here is an example for the response time: increasing the transmission probability $p$ also increases the mean response time, which indeed converges:
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{img/ContinuityTest_ResponseTime_VectorP.png}
\label {img: responseTimeConvergence}
\end{figure}
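As an illustration of this check, the following minimal Python sketch (ours, not part of the simulator; it assumes the recorded response-time samples have been exported to a list with placeholder values here) computes the running mean of the samples, which should flatten out once the steady state is reached:
\begin{verbatim}
# Running (cumulative) mean of the recorded response-time samples.
# If the simulation has reached steady state, this curve flattens out
# after the initial transient.
response_times = [5.0, 5.2, 6.1, 5.8, 5.9, 6.0, 6.0]  # placeholder values

running_mean = []
total = 0.0
for i, value in enumerate(response_times, start=1):
    total += value
    running_mean.append(total / i)

print(running_mean)  # should stabilise around the steady-state mean
\end{verbatim}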
\subsection{Test Simulations with Binomial Model (1)}
This test simulation has been performed with the following parameters:
\paragraph{Parameters}
\begin{itemize}
\item Number of Couples Tx-Rx (\textbf{N}): 1
\item Number of Channels (\textbf{C}): 1
\item Send probability (\textbf{p}): \$\{0.05, 0.1, 0.15, 0.2, 0.4, 0.5, 0.6, 0.8\}
\item Mean inter-arrival time ($\dfrac{1}{\lambda}$): 1s (deterministic)
\item Time-slot size ($T_{slot}$): 2s
\item Simulation duration ($T_{sim}$): 3600s
\item Repeat: 100
\item Seed-set: \$\{repetition\}
\end{itemize}
In this simplified context there are \textbf{no collisions} (there is only one couple) and the transmitter has, for every slot, at least one packet to send ($\dfrac{1}{\lambda} < T_{slot}$ with deterministic packet inter-arrivals). We can model this particular case as a repeated Bernoulli experiment, in which a success corresponds to a successfully sent packet. In this simplified model we define $X$ as the number of successes in $n$ independent repeated trials, so $X \sim Bin(n, p)$. The PMF is therefore the following:
\begin{equation}
p(i) = P\{X = i\} = \binom{n}{i} p^{i} (1-p)^{n-i}
\end{equation}
where $n$ represents the number of repeated trials and $i$ the number of successes in those trials. With this distribution the mean and the variance are:
\begin{align*}
E[X] = np \qquad Var[X] = np(1-p)
\end{align*}
In our context we can state the following:
\begin{equation}
n = \left \lfloor{\dfrac{T_{sim}}{T_{slot}}}\right \rfloor = 1800
\end{equation}
We would expect in the case of $p = 0.5$:
\begin{equation}
E[X] = np = 900
\end{equation}
After running the 100 test simulations with different seeds, the following result is obtained (with a 95\% CI):
\begin{equation}
\overline{X} \in [893.08, 901.94]
\end{equation}
This is in line with our expectations. The same computation has been repeated for different values of $p$, and the following plot summarises the results:
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{img/plotTheoreticalMeanBinomial.png}
\caption{Test Binomial Model}
\end{figure}
The 95\% confidence intervals are too small to be seen clearly, so they are not shown in the plot.
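For reference, the comparison between the theoretical mean $np$ and the confidence interval on the sample mean can be reproduced with a short Python sketch such as the one below. The list of per-repetition success counts is a placeholder of ours, and the 1.96 quantile assumes a normal approximation (with few repetitions a Student-$t$ quantile should be used instead):
\begin{verbatim}
import math
from statistics import mean, stdev

# Hypothetical per-repetition success counts exported from the simulator.
successes = [903, 895, 900, 889, 912, 898]   # placeholder values
n, p = 1800, 0.5                             # slots per run, send probability

theoretical_mean = n * p                     # E[X] = n * p
sample_mean = mean(successes)
half_width = 1.96 * stdev(successes) / math.sqrt(len(successes))

print(theoretical_mean)
print(sample_mean - half_width, sample_mean + half_width)
\end{verbatim}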
\subsection{Test Simulations with Collisions (2)}
This test simulation has been performed with the following parameters:
\paragraph{Parameters}
\begin{itemize}
\item Number of couple Tx-Rx (\textbf{N}): \$\{2, 5, 10, 30\}
\item Number of Channels (\textbf{C}): 1
\item Send probability (\textbf{p}): \$\{0.05, 0.1, 0.15, 0.2, 0.4, 0.6, 0.8\}
\item Mean inter-arrival time ($\dfrac{1}{\lambda}$): 1s (\textbf{deterministic})
\item Time-slot size ($T_{slot}$): 2s
\item simulation-duration ($T_{sim}$): 3600s
\item Repeat: 40
\item Seed-set: \$\{repetition\}
\item \textbf{No Back-off} in case of collision
\end{itemize}
\noindent The aim of this verification is to assess whether the mean throughput matches the analytical expression derived below, even in the presence of collisions. Also in this simulation we work under the hypothesis that each transmitter has a packet to send in every slot ($\dfrac{1}{\lambda} < T_{slot}$ with deterministic packet inter-arrivals).\\
\noindent The \textbf{probability of a successful transmission in a particular time-slot}, in this case, is the following:
\begin{equation}
P\{" successful\ transmission"\} = P\{"only\ one\ tx\ transmits"\} = N\cdot p \cdot(1-p)^{N-1}
\end{equation}
This probability of successful transmission can be seen as the mean throughput of the system in the single-channel case. In fact, let $N_{p}$ be the number of packets successfully delivered to the corresponding receiver and $N_{t}$ the number of time slots considered in the count:
\begin{align*}
Tp\ (slot) = \frac{N_{p}}{N_{t}} = \frac{N_{t}\cdot P\{"successful\ transmission"\}}{N_{t}} = N\cdot p\cdot (1-p)^{N-1}
\end{align*}
Comparing the above formula with the simulation results yields the following plots (\textbf{95\% CIs too small to be seen}):
\begin{figure}[H]
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{img/SecondVerificationN2.png}
\end{minipage}
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{img/SecondVerificationN5.png}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{img/SecondVerificationN10.png}
\end{minipage}
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{img/SecondVerificationN30.png}
\end{minipage}
\end{figure}
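To make the comparison concrete, the theoretical per-slot throughput $N\cdot p\cdot(1-p)^{N-1}$ used in the plots above can be evaluated with a few lines of Python (a sketch of ours, not part of the simulator); note that the curve is maximised around $p = 1/N$:
\begin{verbatim}
# Theoretical mean throughput per slot (single channel, saturated sources):
# probability that exactly one of the N transmitters sends in a given slot.
def theoretical_throughput(N, p):
    return N * p * (1 - p) ** (N - 1)

probabilities = [0.05, 0.1, 0.15, 0.2, 0.4, 0.6, 0.8]
for N in (2, 5, 10, 30):
    curve = [round(theoretical_throughput(N, p), 3) for p in probabilities]
    print(N, curve)   # the maximum is reached around p = 1/N
\end{verbatim}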
\noindent At this point we can state that a sufficient amount of verification of the model implementation has been carried out to proceed with the simulations and gather some insight. %\colorbox{yellow}{Before doing so, an} observation can be made at this point: with a sizeable number of tx-rx couples, a large sending probability (i.e., greater than 0.5) is pointless for obtaining a high throughput. This result will be considered during the scenario calibration in the next chapter.
"alphanum_fraction": 0.7559303525,
"avg_line_length": 65.1501976285,
"ext": "tex",
"hexsha": "199f5b7151b275d91c1420e5bf798d70beb603d6",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-02-06T09:39:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-02-06T09:39:48.000Z",
"max_forks_repo_head_hexsha": "055c30da1352aa22c128456bc2407c6a7619d4b5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gerti98/PerformanceEvaluationGroupProject",
"max_forks_repo_path": "doc/chapters/verification.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "055c30da1352aa22c128456bc2407c6a7619d4b5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gerti98/PerformanceEvaluationGroupProject",
"max_issues_repo_path": "doc/chapters/verification.tex",
"max_line_length": 572,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "055c30da1352aa22c128456bc2407c6a7619d4b5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gerti98/PerformanceEvaluationGroupProject",
"max_stars_repo_path": "doc/chapters/verification.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4768,
"size": 16483
} |
\section*{Version 0}
\paragraph{0.0.0 - 18.11.2019}
\begin{itemize}
\item Created the git repository
\item Copied the template for the KIT bachelor's degree to be used as the basis of this document
\item Added a first version of the "premise" chapter and "document overview"
\item Added chapter about the system requirements of the unreal engine.
\end{itemize}
\paragraph{0.0.1 - 19.11.2019}
\begin{itemize}
\item Added the chapter about installing unreal engine
\item Added the chapter about the other application
\end{itemize}
Possible improvements in the future:
\begin{itemize}
\item Extend the other application chapter with the list of all the applications that were given in the book
\end{itemize}
\paragraph{0.0.2 - 21.11.2019}
\begin{itemize}
\item Added a draft of the chapter about design choices. The chapter contains:
\begin{itemize}
\item Section about the scope of the project: Rather make the scope smaller, because a VR game is a lot of work
\item Section about "Why VR?" why should you even make a VR project. Could traditional media do the same?
\item Section about the method of locomotion in VR. Why teleportation might be a good idea.
\end{itemize}
\item Began working on the "ressources" chapter, which is supposed to list a few helpful ressources (books mainly) in the pursuit of making a VR representation
\end{itemize}
\paragraph{0.0.3 - 22.11.2019}
\begin{itemize}
\item Wrote the chapter about the book "Unreal for mobile and standalone VR"
\item Removed the appendix section for now, since I don't need it yet.
\item Removed the inclusion of the preamble.tex
\item Removed the inclusion of the list of tables for now
\item Added preliminary sections for the "tutorials" chapter.
\end{itemize}
% What has to be done for the first version of this document
% - Write the tutorials that I wanted to write
% - Fix the missing citations and stuff in the other chapters
\paragraph{0.0.4 - 26.11.2019}
\begin{itemize}
\item Added all the missing citations
\item Decided that for the first version there will only be a sequencer section in the tutorials chapter
\item Started to write the sequencer tutorial
\end{itemize}
% for the next version
% - Finish writing the sequencer tutorial
% - Add images to the sequencer tutorial
%!TEX root = ../PEI.tex
\label{sec:feasibility:apps}
\glsresetall
We consider applications in isolation. DeviceManager could have
applications registered on device events (updates, deletions, etc.)
that would cause more traffic to the data store.
We used Mininet Hi-Fi \cite{Handigol:2012tg}\footnote{Version 2.0 (mininet-2.0.0-113012-amd64-ovf) available at \url{http://mininet.org}}. Unfortunately we had to tweak the box configuration to prevent hosts from sending gratuitous data packets that would trigger traffic to the controller when initializing the switch-controller connection. We set the \texttt{net\.ipv6\.conf\.all\.disable\_ipv6} and \texttt{net\.ipv6\.conf\.default\.disable\_ipv6} fields in the file \texttt{/etc/sysctl\.conf} to true\footnote{This is a known problem https://mailman.stanford.edu/pipermail/mininet-discuss/2010-November/000167.html (contains solution)}.
We do not log the creation and deletion of tables, so those operations will not appear in the workloads. This is not relevant since they usually happen upon the first switch connection or the first host packet processed by an application for that switch.
\section{Learning Switch}
\label{sec:feasibility:ls}
\glsresetall
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\todo{IT IS NOT IP. IT IS A MAC.}
\todo{Learning switch is what Kandoo calls a local app}
\begin{figure}[ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/ls-events-broadcast}
\caption{Broadcast packet.}
\label{fig:ls:interaction:broadcast}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/ls-events-unicast}
\caption{Unicast packet.}
\label{fig:ls:interaction:unicast}
\end{subfigure}
\caption[Learning Switch workloads]{Broadcast packets trigger a write for the source address of the respective packet. Unicast packets have to additionally read the source address port location.}
\label{fig:ls:interaction}
\end{figure}
The Learning Switch application emulates the hardware layer 2 switch
forwarding process. For each switch a different \emph{\texttt{MAC}-to-switch-port}
table is maintained in the data store. Each table is populated using
the source address information (i.e., \texttt{MAC} and port) present in every OpenFlow
\texttt{packet-in} request for the purpose of maintaining the location
of devices. After learning this location, the controller can install
rules in the switches to forward packets from a source to a
destination. Until then, the controller must instruct the switch to
\emph{flood} the packet to every port, with the exception of
the ingress port. Despite being distributed, each switch-table is
only accessed by the controller managing the switch in
question. Even so, we justify the study of this application for two
reasons: (i) it benefits from the fault-tolerance property of
our distribution and (ii) it is commonly used as the
single-controller benchmark application in the literature \cite{Tootoonchian:2012uia}.
\todo{to be common you need more than 1 example}
Figure \ref{fig:ls:interaction} shows the detailed interaction between the
switch, the controller (Learning Switch) and the data store for two possible
cases. First (figure \ref{fig:ls:interaction:broadcast}), the case of broadcast packets, which require
one write operation to store the switch-port of the
source address. Second (figure \ref{fig:ls:interaction:unicast}), the case of unicast
packets, which not only stores the source information but also reads the
switch egress port for the destination address.
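The per-packet logic behind these two cases can be summarised with the following minimal sketch (Python pseudocode of ours; the actual application is a Java Floodlight module, and the names \texttt{tables}, \texttt{flood} and \texttt{install\_rule} are purely illustrative):
\begin{verbatim}
# One MAC-to-port table per switch, replicated in the data store.
tables = {}   # switch_id -> {mac: port}

def on_packet_in(switch_id, src_mac, dst_mac, in_port):
    table = tables.setdefault(switch_id, {})
    table[src_mac] = in_port          # learn/refresh the source location (write)
    out_port = table.get(dst_mac)     # look up the destination (read)
    if out_port is None:
        flood(switch_id, in_port)     # unknown or broadcast: flood (except ingress)
    else:
        install_rule(switch_id, dst_mac, out_port)   # known unicast destination

def flood(switch_id, in_port):
    print("flood on", switch_id, "except port", in_port)

def install_rule(switch_id, dst_mac, out_port):
    print("forward", dst_mac, "via port", out_port, "on", switch_id)
\end{verbatim}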
\note{Nao esta claro que a tabela nesta app e single
reader, single writer. NInguem entendeu...}
Original: Map<IOFSwitch, Map<MacVlanPair,Short>>. Modified: Map<IOFSwitch, KeyValueTable<MacVlanPair,Short>>. Originally instantiated with concurrent hash maps for the main map and for each table map.
CHANGE: We still maintain a concurrent map for the main switch-to-KeyValueTable map. This is required since the map is manipulated concurrently by each switch thread and by the REST API.
There is no problem with this since KeyValueTable is thread safe (section \ref{section.datastore.thread.safety}).
Original: It is critical that the size of each switch table is limited
due to resource usage. Mac addresses have to be recycled as devices enter and leave the network. Or simple as traffic dynamics change. Either way in some networks we can't possibly support all the hosts \ref{sec.learning.switch.study.size} for table. Remember each table can contain all hosts in the network! Traditionally the LearningSwitch uses a LRULinkedHashMap (Least Recently Used Linked Hash Map). This imposes a limit on the number of entries present in the table. The behaviour of this LRULInkedHashMap is to remove the oldest entry in the table whenever the threshold is reached. The default threshold (used in all our experiences unless we state otherwise) is of 1000 hosts per table. (Remember that each table can potentially keep an entry for each host in the all network! ).
Change: We replicate this kind of table in the data store. Which was fairly simple. Off course the behaviour changes in the case we use a cache. As discussed in section \ref{sec.learning.switch.lru.cache}
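The behaviour of such a size-bounded table can be sketched as follows (an illustrative Python sketch of ours, not the Java \texttt{LRULinkedHashMap} used by Floodlight):
\begin{verbatim}
from collections import OrderedDict

# Size-bounded MAC-to-port table: when the threshold is reached,
# the oldest entry is evicted (insertion order, as in LRULinkedHashMap).
class BoundedTable:
    def __init__(self, max_entries=1000):
        self.max_entries = max_entries
        self.entries = OrderedDict()

    def put(self, mac, port):
        if mac not in self.entries and len(self.entries) >= self.max_entries:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[mac] = port               # re-insertion keeps the original order

    def get(self, mac):
        return self.entries.get(mac)
\end{verbatim}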
The controller only replies to one switch. This causes the MAC-broadcasting problem described in \cite{of.cpp} (REVISE EXAMPLE). The path is set up switch by switch.
The application is configured with an idle timeout of 5 seconds and a hard timeout of 0 seconds. As such, the switch deletes a flow entry after 5 seconds of inactivity. This deletion from the switch table triggers an OpenFlow \texttt{FLOW\_REMOVED} message from the switch to the controller. The Learning Switch application processes this message and deletes the corresponding entry in the data store; immediately afterwards, it instructs the switch to remove the reverse entry from its table. The switch, upon processing this message, triggers another \texttt{FLOW\_REMOVED} message to the controller. This recursive process is not infinite: the switch only triggers \texttt{FLOW\_REMOVED} messages when it actually deletes an entry from its table, and if the controller instructs it to remove something that is not there, no message is triggered (sec. 6.4 of \cite{openflow-spec}).
\subsection{Broadcast Packet}
This workload corresponds to the operations performed in the data
store when processing broadcast packets in a OpenFlow
\texttt{packet-in} request. Table \ref{table:lsw0:broadcast} shows that for the
purpose of associating the source address of the packet to the ingress
switch-port where it was received, the Learning Switch application performs one
write operation with a request size of 113 bytes and reply size of 1
byte.
\begin{table}[ht]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Associate source address to ingress port & W & 113 & 1 \\ \bottomrule
\end{tabular}
\caption[Workload lsw-0-broadcast( Broadcast Packet) operations]{Workload lsw-0-broadcast( Broadcast Packet) operations and sizes (in bytes).}
\label{table:lsw0:broadcast}
\end{table}
\subsection{Unicast Packet}
Workload \textbf{lsw1-1} is the result of processing an OpenFlow request
triggered by a unicast packet. Thus, when compared to the previous
workload (\textbf{lsw1-0} covering broadcast packets), an additional
operation is required to query the switch-port location of the
destination address. Table \ref{table:lsw0:unicast} provides a summary all the data
store operations in this workload.
\begin{table}[ht]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Associate source address to ingress port & W & 113 & 1\\\midrule
Read egress port for destination address & R & 36 & 77 \\\bottomrule
\end{tabular}
\caption[Workload lsw-0-unicast( Unicast Packet) operations]{Workload lsw-0-unicast( Unicast Packet) operations and sizes (in bytes).}
\label{table:lsw0:unicast}
\end{table}
\subsection{Optimizations}
Our serialization process has changed since the EWSDN version. In EWSDN we used longer table names and Java serialization. Afterwards we changed our code so that each table name is the concatenation of ``LS'' and the switch identifier (which can be as long as FIXME characters). We also manually convert the information stored in the data store. Notice, for example, that for the read egress port operation the return value (a switch-port identifier) is 12 times larger with Java serialization.
\begin{table}[ht]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Associate source address to ingress port & W & 29 & 1\\\midrule
Read egress port for destination address & R & 27 & 6 \\\bottomrule
\end{tabular}
\caption[Workload lsw-1-unicast( Unicast Packet) operations]{Workload lsw-1-unicast( Unicast Packet) operations and sizes (in bytes).}
\label{table:lsw1:unicast}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{../data/reportGenerator/lsw-0-broadcastlsw-0-unicastlsw-1-broadcastlsw-1-unicasttxLatCmp.pdf}
\caption[Learning Switch workloads performance comparison]{Learning Switch workloads performance comparison (90th percentile). }
\end{figure}
\subsection{Cache}
\label{sec.learning.switch.lru.cache} Discuss cache implications of
least recently used.
We take values from the cache whenever they exist, regardless of how long
they have been there. This is acceptable because the Learning Switch installs
rules with an idle timeout: when rules expire, the switch notifies the
Learning Switch application, which in turn deletes the corresponding entries
in the data store. So we do not need to worry about stale cache values being
invalid (e.g., a host that moved from one port to another); if they are, the
idle-timeout mechanism associated with the rules installed in the data plane
will eventually correct the situation.
Of course, we are no longer strongly consistent.
We do not actually improve on the micro-benchmark measures shown throughout
this chapter. The reason is simply that, with a cache, we do not avoid or
improve any of the individual data store interactions present in table
\ref{table:work:lsw1-1} (which shows the latest Learning Switch workload).
With a cache we only improve in the long run, since we can then avoid the
two types of requests present in that table. First, we can avoid re-writing
the source-address-to-source-port association when we already know it. In the
original Learning Switch this re-write is not costly ($\Omega(1)$) and also
has the functional effect of refreshing the entry timestamp, so that the
least-recently-used table consistently keeps the most recently active hosts
and deletes the inactive ones. In the cache implementation this is no longer
true: active hosts eventually get forgotten as new (unknown) entries are
added to the data store. This is not problematic, since an active host will
benefit from the lower latency for a long time before actually being erased
from the data store due to newly added hosts. This, of course, explicitly
depends on the maximum number of entries per table.
\footnote{\url{http://docs.oracle.com/javase/6/docs/api/java/util/LinkedHashMap.html}}
(Note that insertion order is not affected if a key is re-inserted
into the map. (A key k is reinserted into a map m if m.put(k, v) is
invoked when m.containsKey(k) would return true immediately prior to
the invocation.)). So NOTHING I SAID WAS TRUE! :) BUT IT APPLIES TO
GETS : In access-ordered linked hash maps, merely querying the map
with get is a structural modification.)
FIXME : YOU MUST ACTUALLY CONFIRM ALL THIS. AND ADD SUPPORT IN THE
IMPLEMENTATION OF THE LEARNING SWITCH.
When avoiding this write in the cache implementation, we must make sure that
we only skip writing to the data store when the association is known in the
cache and is actually correct (i.e., the ingress port matches the one of the
packet being processed).
The second avoidable operation is the read that queries the egress port for
the currently processed packet. We do not actually need to read from the
data store if the entry is present in the cache, since the data is not
modified by any other controller: we are the only ones that manipulate our
switch tables.
With the cache we no longer read from the data store: there is no need to,
since every put also updates the cache, so if an entry is not in the cache
it is not in the data store either.
% \begin{figure}
% \centering
% \includegraphics[width=\textwidth]{pic/feasibility/ls-events-unicast}
% \caption[]{\textbf{}}
% \end{figure}
\label{cenas}
% \begin{figure}
% \centering
% \includegraphics[width=\textwidth]{pic/feasibility/ls-events-broadcast}
% \caption[]{\textbf{}}
% \end{figure}
\section{Load Balancer}
\label{sec:feasibility:lb}
\glsresetall
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The Load Balancer application employs a round-robin algorithm to distribute the
requests addressed to a \gls{vip}. In order to
understand its behaviour we begin with the data model currently used. Figure
\ref{fig:lb-model} shows the three different entities used in the Load
Balancer. The VIP entity
represents a virtual endpoint with a specified \gls{ip} address, port and
protocol. Each VIP can be associated with one or more pools of
servers. Given that the distribution algorithm is round-robin, each pool
has a currently assigned server (\texttt{current-member} attribute in the figure). Finally, the third entity --- Member
--- represents a real server. Each of these entities corresponds
to a different table in the data store, indexed by the entity
key attributes represented in the figure (in bold). Moreover, a fourth table is
required to associate \gls{ip} addresses to VIP resources. In light of
this data model, the load balancer logic requires the following
operations from the data store: (i) check if the destination address is
associated with a VIP resource; (ii) if so, read the VIP, Pool and
Member information required to install flows in the switch; and (iii)
update the pool \texttt{current-member} attribute. This description corresponds to the case where OpenFlow
\texttt{packet-in} requests are indeed addressed at a VIP
resource. The respective workload, which \textbf{markedly - replace,
  aly says is wrong TODO} is the heaviest in
the Load Balancer application, is described in detail in the workload
\textbf{lbw1-0} section ahead. Alternatively, workload
\textbf{lbw1-1} captures the workload created by every
packet unrelated to a VIP. Finally, workload \textbf{lbw1-2}
considers the special case of ARP requests asking for the hardware
address of a VIP \texttt{IP}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{./pic/feasibility/lb-model.pdf}
\caption{\small Simplified Load Balancer entities data model. The data
store contains a table for each entity, indexed by their keys (represent as bold attributes). }
\label{fig:lb-model}
\end{figure}
We can see the model. FIXME (take from final.tex)
Initially we also found different index tables: (FIXME How should i
present this kind of stuff? show the model, and then show a table in
the same figure if possible. The table should show the indexes listed
with names, types and simple explanation.)
\begin{itemize}
\item \texttt{vipIpToId}: maps an IP (integer) to a VIP id (string).
\item \texttt{vipIpToMac}: maps a MAC (MacAddress) to a VIP id (string).
\item \texttt{memberIpToId}: maps an IP (integer) to a Member id.
\item \texttt{clientToMember}: maps a client IP to a Member, in order to remember connected clients.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/lb-events-broadcast}
\caption{ARP packet address at a VIP.}
\label{fig:lb:interaction:arp2Vip}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/lb-events-unicast}
\caption{}
\label{fig:lb:interaction:ip2Vip}
\end{subfigure}
\caption[Load Balancer workload events]{On the left, an \gls{arp} request message addressed at a VIP \gls{ip}, which results in a direct \gls{arp} reply. On the right, a normal \gls{ip} packet addressed at a VIP, which must be resolved (which member is responsible) and answered by installing the appropriate rules.}
\label{fig:lb:interaction}
\end{figure}
\subsection{Packets to a VIP}
When the Load Balancer receives a data packet addressed
at a VIP, it triggers the operations seen in table
\ref{table:workload:lbw1-0}.
The first operation fetches a VIP resource associated with the
destination \texttt{IP} address of the packet.
If it succeeds (reply different from 0), then it proceeds to read
the chosen pool for the returned VIP\footnote{The current implementation of this
application always chooses the first existent pool.}.
Afterwards it updates (third operation) the fetched pool, along with the newly modified
\texttt{current-member}.
The fourth and final operation retrieves
the address information for the selected Member.
\note{Discuss the issue of concurrent updates (second and third operations).}
\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Get VIP id for the destination IP & R & 104 & 8\\\midrule
Get VIP Info (pool information) & R & 29 & 509\\\midrule
Get the chosen pool & R & 30 & 369\\\midrule
Conditional replace pool after round-robin & W & 772 & 1\\\midrule
Read the chosen Member & R & 32 & 221 \\\bottomrule
\end{tabular}\caption[Workload lbw-0-ip-to-vip( IP packet to a VIP)
operations]{Workload lbw-0-ip-to-vip( IP packet to a VIP) operations
and sizes (in bytes).}\end{table}
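The round-robin step behind the pool read and conditional replace operations can be sketched as follows (an illustrative Python sketch of ours; the real application is a Java Floodlight module, and the \texttt{datastore.get}/\texttt{datastore.replace} names are hypothetical stand-ins for the real interface):
\begin{verbatim}
# Round-robin selection with an optimistic conditional replace of the pool:
# if another controller updated the pool concurrently, the replace fails
# and the whole step is retried.
def pick_member(datastore, pool_id):
    while True:
        old_pool = datastore.get("pools", pool_id)           # read the chosen pool
        members = old_pool["members"]
        next_idx = (members.index(old_pool["current_member"]) + 1) % len(members)
        new_pool = dict(old_pool, current_member=members[next_idx])
        if datastore.replace("pools", pool_id, old_pool, new_pool):  # conditional
            return new_pool["current_member"]                # selected member id
\end{verbatim}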
\subsection{Normal Packets}
Even when processing a normal packet, not related to a VIP address at
all, the Load Balancer still has to find out whether this is the case. This workload, which only requires one operation (see
table \ref{table:workload:lbw1-1}), sets the minimum amount of work imposed by
the Load Balancer on the controller pipeline.
\subsection{ARP Request}
This workload results from processing an ARP Request addressed at a
VIP address. The data store operations, summarized in Table
\ref{table:workload:lbw1-2}, show that two reads are
required. First, as previously seen, the application queries the data
store to check whether the packet destination address is a VIP (one read
needed). If it is, the controller then retrieves the \texttt{MAC} address for that
VIP (so another read is needed).
\begin{table}[ht]
\small
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Get VIP id for the destination IP & R & 104 & 8\\\midrule
Get VIP info (proxy MAC address) & R & 29 & 509 \\\bottomrule
\end{tabular}\caption[Workload lbw-0-arp-request( Arp Request to a
VIP) operations]{Workload lbw-0-arp-request( Arp Request to a VIP)
operations and sizes (in bytes).}
\end{table}
\subsection{Optimizations}
\begin{table}[H]
\small
\begin{tabular}{llccccc}
Operation & Type & \multicolumn{5}{c}{ (Request, Reply) } \\ \midrule
& & lbw-0 & lbw-1 & lbw-2 & lbw-3 & lbw-4 \\ \toprule
%& & \multicolumn{5}{c}{(Request, Reply)} \\midrule
Get VIP id of destination IP & R & (104,8) &\multirow{2}{*}{(104,509)} & \multirow{2}{*}{(104,513)} &\multirow{2}{*}{\textbf{(62,324)}} & \multirow{2}{*}{-} \\\cmidrule{1-2}
Get VIP info (pool) & R & (29,509) & & & & \\ \midrule
Get the choosen pool & R & (30,369) & - & (30,373) & - & \multirow{3}{*}[-2mm]{\textbf{(11,4)}} \\ \cmidrule{1-2}
Replace pool after round-robin & W & (772,1) & -
&\textbf{(403, 1)} & - \\ \cmidrule{1-2}
Read the chosen Member & R & (32,221) & - & (32,225) & \textbf{(44,4)} & \\\bottomrule
\end{tabular}\caption[Load Balancer IP to VIP workload operations across
different implementations.]{Load Balancer lbw-\textit{X}-ip-to-vip workload
  operations and respective sizes (in bytes) across different
  implementations. Bold sizes represent significant differences
  across implementations. Sizes marked with \texttt{-} are equal to
  the previous one. }
\end{table}
\begin{figure}[ht]
\begin{floatrow}
\ffigbox{%
\includegraphics[scale=0.4]{../data/reportGenerator/lbw-0-ip-to-viplbw-1-ip-to-viplbw-2-ip-to-viplbw-3-ip-to-viplbw-4-ip-to-viptxLatCmp.pdf}
}{\caption{Cenas}%
}
\capbtabbox{%
\small
\begin{tabular}{lll}
Prefix & Data store & Section\\\toprule
lbw-0 & Simple Key-Value & \ref{sec:} \\
lbw-1 & Cross References & \ref{sec:} \\
lbw-2 & Versioned Values & \ref{sec:} \\
lbw-3 & Column Store & \ref{sec:} \\
lbw-4 & Micro Components & \ref{sec:} \\ \bottomrule
& & \\
& & \\
& & \\
& & \\
& & \\
\end{tabular}
}{%
\caption[Name guide to Load Balancer workloads]{Name guide to Load
Balancer workloads.}\
}
\end{floatrow}
\end{figure}
In the ARP request case we improve by using cross reference tables
to join the two requests into one (lbw-1-arp-request). With the column-based
implementation we improve slightly further by reducing the size of the
requests, as seen above.
\textbf{Cross Reference Tables.} Both the ARP request and the packet-addressed-at-a-VIP workloads contain
requests that first obtain the VIP id and only afterwards read the VIP
from the VIP table using the obtained id. We can therefore improve our
workload footprint by directly obtaining the VIP with a single
request, using our cross reference tables mechanism
(sec. \ref{sec.datastore.cross.references}).
\textbf{Versioned Values.} With versioned values we can easily cut the size of the operation that updates
the pool after round-robin in half. Of course, every other get
operation will carry an additional 4 bytes on reads. This is negligible, especially in the case of the Load Balancer
since
\textbf{Columns.}
With columns, many changes had to be made.
All value classes (Member, VIP, Pool) were previously accessed directly
through their fields. We \emph{chose} to change that so that all fields
are private and manipulated through getters and setters, and to have the
Members and VIPs accessed by columns.
For VIPs we do not actually gain much from this transformation (a 513 to
324 byte size reduction) compared with Members (225 to 4 bytes). The reason
is that we actually have to retrieve a lot
of information from the VIP. Since we are caching VIPs and we must
access them in two different types of flow processing (broadcast packet
addressed to a VIP and packet addressed to a VIP), on each VIP
request we should pull the union of the information required by the two
operations. The reason is simple: the flows are not
independent. When processing the first, we can be sure that, unless some
anomaly happens, the host performing the ARP request will subsequently
trigger the second flow processing. So we benefit from having the
information in cache and avoid performing another request just
because a field is missing.
For Members, on the other hand, we only require 4 bytes.
\textbf{Micro Components.} When analyzing the last available workload, we notice that we
cannot avoid going to the data store to pull and update the round-robin
state. The reason is twofold: first, we must be
sure that the round-robin algorithm is consistent; second, even if we
cache the pool, the update will fail because of the timestamp ordering.
Of course we could use some kind of CRDT value in
the data store, which in this case would be pretty simple, but then we
would be advocating the extra complexity of an eventually consistent
plane. So we want to keep our round-robin accurate
while improving the performance of this workload in
general. The ideal solution is to use the cache for
the VIPs, as we already do. This way a lot of the data store interaction
can be alleviated: we do not have to consult the data store on
each request, since we can cache which addresses are VIPs and which are
not. But when an address is a VIP, we must consult the data store in
order to perform the round-robin. Ideally we would like to perform all operations
at once in the data store: retrieve the chosen member and update the
pool. For this we make use of micro components, by installing a
prototype method in the data store that does all of this for us.
\subsubsection{Cache}
We can actually improve on reading the Members and VIPs. We do not
improve on reading the pools because, given the replace operation, caching
could potentially cause even more operations due to the lack of
consistent pool information. So we transform the application to use a
cache on the VIP and Member tables. In the theoretical best-case scenario this
lowers the workload by completely avoiding the first and last
operations. Ideally we should also optimise the common case of IP packets
not addressed at a VIP. For this, our cache must understand what an
empty value means FIXME (use containsInCache; update to insert an empty value
in the cache; then, if containsInCache and get == null, we can be
certain the address is not a VIP), completely avoiding the trip to the
data store.
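The negative-caching idea sketched above can be illustrated as follows (a minimal Python sketch of ours; \texttt{cache} and \texttt{datastore} are placeholder objects, not the actual Floodlight classes):
\begin{verbatim}
# Negative caching for VIP lookups: a data store miss is remembered in the
# cache as an explicit "not a VIP" marker, so later packets for the same
# address never touch the data store again.
NOT_A_VIP = "NOT_A_VIP"

def lookup_vip(address, cache, datastore):
    if address in cache:                        # containsInCache
        hit = cache[address]
        return None if hit == NOT_A_VIP else hit
    vip = datastore.get("vip_index", address)   # single read to the data store
    cache[address] = vip if vip is not None else NOT_A_VIP
    return vip
\end{verbatim}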
FIXME: how to measure, justify the time used in the cache parameter?
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{./../data/reportGenerator//lbw-3-ip-to-notviptxLat.pdf}
\caption[Minimum impact of the Load Balancer on the pipeline.]{Workload
  lbw-3-ip-to-notvip shows the minimal impact the Load Balancer
  application has on the pipeline in our best implementation.}
\end{figure}
\section{Device Manager}
\label{sec:feasibility:dm}
\glsresetall
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The Device Manager application tracks and stores host device
information such as the switch-ports to where devices are
attached to\footnote{The original application is able to track devices as
they move around a network, however our current implementation does
not take this feature into consideration.}. This information ---
that is retrieved from the OpenFlow packets that the controller receives --- is crucial to
Floodlight’s Forwarding application. That is to say, that for each new flow, the Device
Manager has the job of finding a switch-port for both the destination
and source address. Given this information, it is able to pass it to
the Forwarding application, that can later decide on the actions to
take (e.g., the best route). Notice that this arrangement excludes the
Learning Switch as the forwarding application.
Regarding the data store usage, Device Manager requires three
tables. The first table keeps track of the known devices created by the
application. A second table indexes those same devices by their
\texttt{(MAC,VLAN)} pair. Finally, a third table maintains an index
from an \texttt{IP} address to one or more devices. Notice
that devices are uniquely identified by their \texttt{id} or \texttt{(MAC,VLAN)} pair, but not their \texttt{IP}\footnote{\textbf{This is true in the context of
the application. But it does not makes any sense. The only plausible
explanation i see, is that this related with the mobility
tracking capabilities of the original device manager. When considering
mobile hosts, different devices will surely appear with the same
IP.} }.
We consider two distinct workloads for this application, differing in
whether the application already knows the source device information (workload \textbf{dmw1-0})
or not (workload \textbf{dmw1-1}). In the former case, the
application mainly reads information from the data store in order to
obtain location information. In the latter case, the
application must create the device information and update all the
existing tables. Therefore, this workload generates more traffic between
the controller and the data store.
A lot of changes had to be made, starting with the Device class, which is
more than a simple value class and is hardly serializable due to its
static references...
We lost the Device's DeviceDebugEventLogger, which logged events in
the control plane.
We now avoid broadcast-address queries in the data store; before they were
(nearly) free, but not anymore.
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/dm-unknown}
\caption{Packet from an unknown device.}
\label{fig:dm:interaction:unknown}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pic/feasibility/ls-events-unicast}
\caption{Packet from a known device.}
\label{fig:dm:interaction:known}
\end{subfigure}
\caption[Device Manager workload events]{Workloads for this application heavily depend on the state of the data store. Unknown devices trigger several operations for their creation, while known devices only require an update of their ``last seen'' timestamp. In either case, the source and destination devices are retrieved if they exist.}
\label{fig:dm:interaction}
\end{figure}
\subsection{Known Devices}
\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Read the source device key & R & 408 & 8\\\midrule
Read the source device & R & 26 & 1444\\\midrule
Update "last seen" timestamp & W & 2942 & 0\\\midrule
Read the destination device key & R & 408 & 8\\\midrule
Read the destination device & R & 26 & 1369 \\\bottomrule
\end{tabular}
\caption[Workload dm-0-known( Known Devices) operations]{Workload dm-0-known( Known Devices) operations and sizes (in bytes).}
\end{table}
When devices are known to the application, a \texttt{packet-in} request
triggers the operations seen in table \ref{table:workload:dmw1-1}. The
first and final operations read the source and destination device
information in order to make their switch-ports available to the
Forwarding process. Additionally, the second operation
(a write) updates the ``last seen'' timestamp of the source
device. The thorough reader will notice that the request size of this
operation significantly exceeds the size of a timestamp. This is
taken into consideration when we optimize this application (see
section TODO). For now it suffices to say that the lack of a more
sophisticated data model and of appropriate concurrency control in the
data store interface leads to excessive information exchange.
\subsection{Unknown Source}
\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
Operation & Type & Request & Reply \\ \toprule
Read the source device key & R & 408 & 0\\\midrule
Get and increment the device id counter & W & 21 & 4\\\midrule
Put new device in device table & W & 1395 & 1\\\midrule
Put new device in \texttt{(MAC,VLAN)} table & W & 416 & 0\\\midrule
Get devices with source IP & R & 386 & 0\\\midrule
Update devices with source IP & W & 517 & 0\\\midrule
Read the destination device key & R & 408 & 8\\\midrule
Read the destination device & R & 26 & 1378 \\\bottomrule
\end{tabular}
\caption[Workload dm-0-unknown( ARP from Unknown Source)
operations]{Workload dm-0-unknown( ARP from Unknown Source) operations
and sizes (in bytes).}
\end{table}
This workload is triggered in the specific case in which the source device
is unknown and the OpenFlow message carries an ARP request
packet. When both these conditions are true, the application
proceeds with the data store operations described in table
\ref{table:workloads:dmw1-1}. Their purpose is to create the device
information and update the three tables described at the beginning
of this section. The first operation reads the source device. Since
it is not known, this operation fails (notice in the table that
the reply has a size of zero bytes). As a result the application
proceeds with the creation of a device. For this, the
following write (second operation) atomically retrieves
and increments a unique device \texttt{id} counter. Afterwards, the third and fourth operations
update, with the newly created device, the device and MAC/VLAN
tables respectively. Likewise, the fifth and sixth operations update
the \texttt{IP} index table. Given that this index links an \texttt{IP} to
several devices, we are forced to first collect the set of devices in
order to update it\footnote{Again, this can be alleviated with a more
  expressive data model.}. This \emph{read-modify} operation can
fail in case of concurrent updates; in that case, both operations
are repeated until they succeed. At this point, the Device Manager
is done with the creation of the device and can finally move to the
last operation, which reads the destination device information.
\subsection{Optimizations}
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{../data/reportGenerator/dm-0-unknowndm-1-unknowndm-2-unknowndm-3-unknowndm-4-unknowntxLatCmp.pdf}
\caption{}
\label{}
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{../data/reportGenerator/dm-0-knowndm-1-knowndm-2-knowndm-3-knowndm-4-knowntxLatCmp.pdf}
\caption{}
\label{}
\end{subfigure}
\caption[Device Manager performance analysis]{}
\label{fig:dm:performance}
\end{figure}
\begin{table}
\small
\begin{tabular}{lll}
Prefix & Data store & Section\\\toprule
dm-0 & Simple Key-Value & \ref{sec:} \\
dm-1 & Cross References & \ref{sec:} \\
dm-2 & Versioned Values & \ref{sec:} \\
dm-3 & Column Store & \ref{sec:} \\
dm-4 & Micro Components & \ref{sec:} \\ \bottomrule
\end{tabular}
\caption[Name guide to Device Manager workloads]{Name guide to
Device Manager workloads.}
\label{table:names:dm}
\end{table}
\begin{table}[H]
\small
\centering
\begin{threeparttable}
\begin{tabular}{ll ccccc}
Operation & Type & \multicolumn{5}{c}{ (Request, Reply) } \\ \midrule
& & dm-0 & dm-1 & dm-2 & dm-3 & dm-4 \\ \toprule
Get source key & R &(408, 8) & \multirow{2}{*}{(408,1274)} &
\multirow{2}{*}{(408,1278)} & \multirow{2}{*}{(486,1261)} &
\multirow{2}{*}{(28,1414)} \tnote{a} \\ \cmidrule{1-2}
Get source device & R & (26,1444) & & & & \\ \midrule
Update timestamp & W & (2942,0) & (2602,0) & (1316,1) & (667,1) &
(36,0) \\ \cmidrule{1-2}
Get destination key & R & (408,8) & \multirow{2}{*}[-1mm]{(408,1199)} &
\multirow{2}{*}[-1mm]{(408,1203)} & \multirow{2}{*}[-1mm]{(416,474)} &
\multirow{2}{*}[-1mm]{N/A} \\ \cmidrule{1-2}
Get destination device & R & (26,1369) & &
& & \\\bottomrule
\end{tabular}
\caption[Workload dm-0-known( Known Devices) operations]{Workload
dm-0-known( Known Devices) operations and sizes (in bytes).}
\begin{tablenotes}
\item [a)] This operation also fetches the destination device.
\end{tablenotes}
\end{threeparttable}
\end{table}
%TODO - do not use put new device in MAC,VLAN table. This is
%confusing.
\begin{table}[ht]
\small
\centering
\begin{threeparttable}
\begin{tabular}{ll ccccc}
Operation & Type & \multicolumn{5}{c}{ (Request, Reply) } \\ \midrule
& & dm-0 & dm-1 & dm-2 & dm-3 & dm-4 \\ \toprule
Read source key & R & (408,0) & - & - & (486,0) & (28,201)\tnote{a}\\
Increment counter & W & (21,4) & - & - & - & \multirow{5}{*}{(476,8)} \\
Update device table & W & (1395,1) & (1225,1)\tnote{b} & - &
(1183,1) & \\
Update MAC table & W & (416,0) & - & - & -
& \\
Get from IP index & R & (386,0) & - & - & - & \\
Update IP index & W & (517,0) & - & - & - & \\
Get destination key & R & (408,8) &
\multirow{2}{*}{(408,1208)}\tnote{b} & \multirow{2}{*}{(408,1212)} &
\multirow{2}{*}{(416,474)} & \multirow{2}{*}{N/A} \\
Get destination device & R & (26,1378) & & & \\\bottomrule
\end{tabular}
\caption[Workload dm-0-unknown( ARP from Unknown Source)
operations]{Workload dm-0-unknown( ARP from Unknown Source) operations
and sizes (in bytes).}
\begin{tablenotes}
\item [a)] This operation also fetches the destination device.
\item [b)] Differences in sizes caused by a SERIALIZATION improvement
\end{tablenotes}
\end{threeparttable}
\end{table}
%The question that may arise: why not simply put everything inside the
%data store then? Well, in the case of the device manager there are
%actually a few subtleties hidden in our explanation. First, there
%is a dependency on other services: the topology manager.
\subsubsection{Cross References tables}
In practice we have to get the device key from the MAC/VLAN table, and only then can we actually get the device information we want. This is redundant, and we can perform the whole operation in a single request by using cross reference tables, which reduces the
number of messages. In this workload we also tuned the device
serialization process to lower the device message size: we simply
remove the reference to the entityClassifier, since classifiers are
known to the local controller and we actually only need the
entityClassifier name; its instance/class can then be obtained locally.
\subsubsection{Timestamps}
\subsubsection{Cache}
With the cache we fetch known devices in an optimistic-concurrency
manner. If there is no such device in the cache, we then try to obtain it
from the data store, as it might have been created by another
controller. (Really? --- yes, the same device may have different routes that go
through different OpenFlow controllers.)
What we try to do is fetch the device from the cache. At some point in
time in a normal network, we expect all device information to be
known. After that point, devices in the cache should pass the timestamp
update at the first attempt (if they are not updated by concurrent
controllers). If devices are connected to different OpenFlow
islands simultaneously, then this is a bad idea, since we actually
have to perform one more request than in the normal workload
pattern (try to update --- fails, retrieve the new value, update). Of course
this could be mitigated by having the failed update attempt return the
currently stored value and timestamp.
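The optimistic update described above could look roughly like this (an illustrative Python sketch of ours; the \texttt{datastore.get}/\texttt{datastore.replace} calls are hypothetical stand-ins for the real interface):
\begin{verbatim}
# Optimistic "last seen" timestamp update: try to update the cached copy
# first; if the data store rejects it (the device changed concurrently),
# fetch the fresh copy and retry once with the new version.
def touch_device(cache, datastore, device_key, now):
    device = cache.get(device_key) or datastore.get("devices", device_key)
    updated = dict(device, last_seen=now)
    if not datastore.replace("devices", device_key, device, updated):
        device = datastore.get("devices", device_key)   # concurrent update: refresh
        updated = dict(device, last_seen=now)
        datastore.replace("devices", device_key, device, updated)
    cache[device_key] = updated
\end{verbatim}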
\subsubsection{Columns}
This required the biggest changes (following Joshua Bloch's advice; to be fair, we have also
broken this rule every now and then).
Other applications can add and manipulate whichever columns they
want for the devices. For example, in our scenario only the attachment
points of the destination device are needed, but other applications
could require the complete list of MAC addresses known for that
device... They could easily do so by manipulating the DeviceManager
interface to add this requirement to the list of information fetched
from the Device.
\subsubsection{Micro Components}
This is just a proof of concept: we install special methods in the data store.
There are three: createDevice; updateDeviceTimestamp; and getTwoDevices
(which could also be done with transactions), which gets the source
device entirely and also gets the attachment points of the destination
device, since these are required for forwarding.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../PEI"
%%% End:
% Full title as you would like it to appear on the page
\chapter{A New Paradigm For Wide-Field Wavefront Sensing}
\label{chap:new_paradigm}
% Short title that appears in the header of pages within the chapter
\chaptermark{A New Paradigm}
\epigraph{What is the most important theme in computer science? [Class responds.] No. It is \textit{abstraction}.}{John Ousterhout in CS 140: Operating Systems, 2020}
\section{Challenges with Previous Approaches}
Keeping telescopes in focus is an art as old as the instruments themselves. Their widespread use spurred the development of theories of optics and aberrations. In 1678, the famed Dutch physicist Christiaan Huygens proposed that every point that interacts with light may be regarded as the source of a new spherical wave. In 1818, the French physicist Augustin-Jean Fresnel incorporated interference into this model. The resulting Huygens-Fresnel principle is still a widely taught model for wave propagation and provides the physical intuition behind reflection, refraction, and diffraction. The paradigm also introduced the notion of the wavefront.
A wavefront is a two-dimensional surface over which the phase of the wave is constant. We also overload this term to refer to aberrations, or two-dimensional spatial offsets from a reference wavefront. Figure \ref{fig:wavefront} shows example aberrations to planar and spherical reference wavefronts.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figs/new_paradigm/wavefront.png}
\caption[Wavefront and Wavefront Aberrations]{Example aberrated planar and spherical wavefronts. The horizontal red lines show the direction of propagation. The vertical red line shows the reference wavefront and the blue line shows an example aberrated wavefront. Source: \cite{wavefront_fig}. }
\label{fig:wavefront}
\end{figure}
Wavefronts are useful because the wavefront aberrations in different planes, and through time, largely characterize the image quality of an optic. In the case of ground-based telescopes, there are three primary contributions to image quality: the atmosphere, the telescope, and the camera. The field of \textit{adaptive optics} is concerned with controlling deformable mirrors in the optical path to correct for aberrations induced by atmospheric turbulence at 10--100 Hz frequencies. However, for wide-field telescopes with fields of view on the order of a degree, a common correction is not possible for the entire field of view. In this context we are primarily concerned with \textit{active optics}, which strives to correct the aberrations due to the telescope. New methods of wavefront sensing for active optics are the focus of the first part of this thesis.
One way to characterize the optics of a telescope is through path length differences. For every position in the image plane, we can measure the path differences to points in the entrance pupil. An alternative way to mathematically express this is through an aberrated wavefront defined at the entrance pupil for each position in the image plane. Then the imaging properties can be computed with Fourier Optics. We refer readers who would like to understand this relation further to Goodman's classic Fourier Optics text \cite{goodman2005introduction}.
There are two thematic approaches to wide-field wavefront sensing: zonal and modal. In the zonal approach, the entrance pupil is partitioned into an array of subapertures. In each of these zones, the wavefront is characterized by its optical path length, local gradient, or local curvature. The wavefront measurement improves with more partitions. The Shack-Hartmann sensor and shearing interferometer are the two most common zonal approaches.
The modal approach treats the wavefront aberration, at a given point in the image plane, as a sum of low-order polynomials defined over the entrance pupil. This wavefront measurement improves with higher-order polynomial measurements. Modal approaches typically rely on strategically defocused wavefront sensors. The Rubin Observatory focal plane was designed with a modal approach in mind as it does not require lenslets or tweaks to the beam. As described in chapter \ref{chap:aos}, the Rubin focal plane contains four corner wavefront sensors specifically for this purpose.
There are two previously described approaches to wavefront sensing that are relevant for Rubin. The first, by Roodman et al.\ \cite{2014roodman}, is the algorithm used for the Dark Energy Camera on the Blanco Telescope \cite{darkenergycamera}. It is based on the Fraunhofer diffraction integral:
\begin{equation}\label{eqn:fraunhofer}
I(x,y) \propto |\mathcal{F}\{P(\rho, \theta) \exp(2\pi i W(\rho, \theta) / \lambda)\}|^2
\end{equation}
\noindent where $I$ is intensity; $\mathcal{F}$ is the two-dimensional Fourier transform; $P$ is the pupil function; $W$ is the wavefront; $\lambda$ is the wavelength; and $x,y$ and $\rho, \theta$ parameterize the image and pupil planes respectively. The wavefront is modeled as a sum of orthonormal Zernike polynomials:
\begin{equation}\label{eqn:zernike_decomp}
W(\rho, \theta) = \sum_i a_i Z_i(\rho, \theta)
\end{equation}
\noindent Starting from a set of telescope control parameters, the wavefront coefficients can be computed. Then the Fraunhofer diffraction integral can be used to generate a corresponding intensity image. The difference between the donut images and the images from the forward model,
\begin{equation}\label{eqn:loss}
\mathcal{L} = \sum_{\text{donut } d}||I_{d,\text{true}} - I_{d,\text{forward}}||_2^2
\end{equation}
\noindent can be used as a loss function. Then we can estimate the telescope parameters by using an optimizer to find the parameters that minimize this loss function. This works well when there are a few non-degenerate parameters and the problem remains approximately convex.
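As an illustrative sketch of this forward-model-and-optimize approach (not the actual Dark Energy Camera implementation; the padding and normalization choices are assumptions), equations \ref{eqn:fraunhofer} and \ref{eqn:loss} can be written in a few lines of Python:
\begin{verbatim}
import numpy as np

def donut_image(pupil, wavefront, wavelength):
    # Forward model: the donut intensity is the squared modulus of the
    # Fourier transform of the aberrated pupil field.
    field = pupil * np.exp(2j * np.pi * wavefront / wavelength)
    field = np.pad(field, pupil.shape[0])  # pad to control image sampling
    image = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return image / image.sum()

def loss(donuts_true, donuts_forward):
    # Sum of squared pixel differences over all donuts.
    return sum(np.sum((dt - df) ** 2)
               for dt, df in zip(donuts_true, donuts_forward))
\end{verbatim}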
While this approach has been successfully applied to the Blanco Telescope, there are two impediments to applying it to the Rubin Observatory. The first is that Rubin has much higher dimensionality and complete degeneracy between some of the parameters of interest. The Blanco Telescope is a prime focus telescope with a single mirror; the Rubin Observatory is a modified Paul-Baker telescope with three mirrors. The Blanco active optics system controls 8 parameters; the Rubin active optics system strives to control 50 parameters. The Rubin control parameter optimization surface is highly nonconvex. In addition to these theoretical hurdles, the fast beam of the Rubin optics also necessitates much larger Fourier transforms, pushing the runtime of the algorithm beyond the 15 seconds we are targeting (see next chapter for more details).
The second approach is based on the transport of intensity equation (TIE):
\begin{equation}\label{eqn:tie}
\frac{\partial I}{\partial z} = -(\nabla I \cdot \nabla W + I \nabla^2 W)
\end{equation}
\noindent where $z$ is the optical axis. It builds on the curvature wavefront sensing technique developed by Francois, Claude, and Nicolas Roddier \cite{curvaturesensing}. Their key insight was that $\frac{\partial I}{\partial z}$ can be approximated by subtracting two donut images on different sides of focus. Then $W$ in the TIE equation can be solved for in Fourier space or with a Zernike polynomial series expansion. The large central obscuration, fast $f$-number, off-axis distortion and vignetting, and different field positions of intra- and extra-focal donuts are four challenges to using curvature sensing for Rubin. Xin et al.\ \cite{2015Xin} proposed solutions and extended curvature sensing to estimate the Rubin optics wavefront at four field positions. Below we give a high-level summary of the operations this algorithm performs, on a single wavefront sensor, to estimate the wavefront at the center of the corresponding wavefront sensor:
\begin{enumerate}
\item Crop $N$ pairs of intra-focal and extra-focal donuts \\$\{(D_{\text{intra},1},\ D_{\text{extra},1}), \dots, (D_{\text{intra},N},\ D_{\text{extra},N})\}$.
\item Make initial wavefront estimates $W_0 = 0,\ W_1 = \text{guess}$.
\item Set $i = 1$.
\item Repeat until $||W_i - W_{i-1}||_2^2 < \text{tolerance}$:
\begin{enumerate}
\item Apply nonlinear transformation $f$ which migrates donut image from its field position to the center of the wavefront sensor, based on current wavefront estimate $W_i$. We have $\{(D_{\text{intra},1}^\prime,\ D_{\text{extra},1}^\prime), \dots, (D_{\text{intra},N}^\prime,\ D_{\text{extra},N}^\prime)\}$ where $D_{*,k}^\prime = f(D_{*,k}, W_i)$.
\item Apply correction $g$ for intensity and vignetting differences between the pairs of donuts. We have $\{(D_{\text{intra},1}^{\prime\prime},\ D_{\text{extra},1}^{\prime\prime}), \dots, (D_{\text{intra},N}^{\prime\prime},\ D_{\text{extra},N}^{\prime\prime})\}$ where $D_{*,k}^{\prime\prime} = g(D_{*,k}^\prime, W_i)$.
\item Set $(\partial_z I)_k \propto D_{\text{intra},k}^{\prime\prime} - D_{\text{extra},k}^{\prime\prime}$.
\item Solve TIE for $W_{i+1}$ with $(\partial_z I)_k$ by series expansion.
\item Set $i = i + 1$.
\end{enumerate}
\item Return $W_i$ as wavefront for the center of the wavefront sensor.
\end{enumerate}
\noindent After this process the algorithm uses the wavefront estimates at the center of the four corner wavefront sensors to constrain 50 telescope control parameters (these parameters are described in the next chapter).
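For intuition only, the following sketch inverts the TIE in Fourier space under a uniform-illumination approximation; it is not the series-expansion solver used in step 4(d), and the regularization is an ad hoc assumption:
\begin{verbatim}
import numpy as np

def solve_tie_fourier(dI_dz, I0, pixel_scale, eps=1e-6):
    # Invert dI/dz = -I0 * laplacian(W) for W on a uniform grid, i.e.
    # the TIE with the gradient term dropped (nearly uniform intensity I0).
    ny, nx = dI_dz.shape
    fx = np.fft.fftfreq(nx, d=pixel_scale)
    fy = np.fft.fftfreq(ny, d=pixel_scale)
    fx2, fy2 = np.meshgrid(fx ** 2, fy ** 2)
    denom = 4.0 * np.pi ** 2 * I0 * (fx2 + fy2)
    W_hat = np.fft.fft2(dI_dz) / (denom + eps)  # eps regularizes the division
    W_hat[0, 0] = 0.0                           # piston is unconstrained
    return np.real(np.fft.ifft2(W_hat))
\end{verbatim}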
This approach has been developed by a dedicated team over the last decade. Parts of the algorithm, such as step 4 (d) are mature and have been validated on real data \cite{cwfs_comparison}. However, despite significant effort, the code for the full algorithm on Rubin remains incomplete. Thus it is difficult to benchmark or compare to alternatives.
The primary challenge on theoretical grounds is the convergence. There are no convergence guarantees. The wavefront estimate may not improve between the iterations in step 4. Some tests suggest that the algorithm typically converges for small perturbations to the nominal optics configuration. However, for large perturbations the transformations $f$ and $g$ will have more error. A different strategy might be required.
The algorithm is also fairly computationally expensive. It takes approximately 15 seconds to perform 10 iterations of step 4 on 10 pairs of donuts. However, a typical wavefront sensor image contains close to a thousand donut images. A lot of potentially useful donut information is ignored.
There are also a few challenges on practical grounds. The functions $f$ and $g$ used in step 4 depend on the wavefront estimate and are fairly complicated. This makes the algorithm challenging to interpret and debug. Also, since $f$ and $g$ depend on the wavefront estimate, it is difficult to characterize the error in the full algorithm, because each step depends on the preceding steps.
After spending a year supporting efforts to implement this algorithm, I realized the prudence in developing an alternative approach. I sought to develop an algorithm that was simple, transparent, robust, low-latency, and high-throughput. The next two sections develop two key paradigms behind the algorithm.
\section{The Double Zernike Polynomials}
The Zernike polynomials are a sequence of polynomials $\{Z_i\}$ that are widely used to characterize wavefronts in optics \cite{zernike}. They are defined over a unit disk, or annulus, and normalized such that
\begin{equation}\label{eqn:zernike_norm}
\int_{R_{\text{inner}}}^{R_{\text{outer}}} \int_0^{2\pi} Z_i(\rho,\theta)Z_j(\rho,\theta)\,\rho\, d\theta\, d\rho = \delta_{ij}
\end{equation}
\noindent Figure \ref{fig:zernike} shows the first 21 Zernike polynomials. For characterizing a wavefront in the pupil plane we follow the convention of Xin et al.\ \cite{2015Xin} and use $Z_4$--$Z_{21}$. The first three polynomials are ignored because, while they affect the position of the image, they do not impact the image quality. The polynomials beyond 21 are ignored because they are only weakly excited by typical Rubin perturbations.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figs/new_paradigm/zernikes.png}
\caption[Zernike polynomials]{The Zernike polynomials $Z_1$ through $Z_{24}$ with the Noll index scheme. The obscuration of the annular domain is consistent with the obscuration in the Rubin optics.}
\label{fig:zernike}
\end{figure}
For small telescopes, this basis can be used to characterize the wavefront in the entrance pupil. For wide-field telescopes like the Rubin Observatory, the wavefront changes non-trivially across the field of view. Thus the wavefront is a four-dimensional function of both the entrance pupil and focal plane. One way to model this would be to have another polynomial basis $\{P_i\}$ for the focal plane $x,y$. Then we could write the full wavefront as $W(x,y,\rho, \theta) = \sum_i \sum_j \beta_{ij} P_i(x,y) Z_j(\rho, \theta)$. The coefficients $\beta$ are indexed by the image plane index $i$ and pupil plane index $j$. The double Zernike polynomials are the $P_i(x,y) Z_j(\rho, \theta)$ where the field polynomials are circular Zernike polynomials $P_i = Z_i$ \cite{doublezernike}. Thus $\beta_{ij}$ is the mode of circular polynomial $Z_i$ in the focal plane and of annular polynomial $Z_j$ in the pupil plane.
In practice we also truncate the focal plane index. This leaves us with a finite set of coefficients to represent the full wavefront across the pupil and focal planes. We found that for the Rubin Observatory, most of the aberration power is concentrated in the first three focal plane polynomials. We simulated 500 perturbed telescope states and numerically calculated the double Zernike coefficients out to index 36. Then we studied the fraction of the aberrated wavefront contained in the first $k$ focal plane Zernikes. We found that the first three polynomials capture 90\% of the wavefront. These first three polynomials are a constant offset, a tip plane, and a tilt plane; collectively they define a plane. We suspect a similar pattern holds for other wide-field telescopes. It suggests that the 54 double Zernike coefficients $\beta_{ij}$ where $1 \leq i \leq 3$ and $4 \leq j \leq 21$ offer a good characterization of the wavefront.
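A minimal sketch of the quantity plotted in Figure \ref{fig:truncated}, assuming a coefficient matrix whose rows index focal plane polynomials and whose columns index pupil polynomials (the simulation pipeline itself is not reproduced here):
\begin{verbatim}
import numpy as np

def focal_power_fraction(beta, k):
    # Fraction of the aberration power carried by the first k focal-plane
    # Zernikes. With orthonormal bases in both planes, the mean squared
    # wavefront is the sum of the squared double Zernike coefficients.
    return np.sum(beta[:k, :] ** 2) / np.sum(beta ** 2)
\end{verbatim}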
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figs/new_paradigm/truncated.png}
\caption[Aberration Power in Focal Plane Zernikes]{The fraction of the aberration that is present in the first $k$ focal plane Zernike polynomials, where $k$ is the truncation index on the x-axis.}
\label{fig:truncated}
\end{figure}
This separation of the wavefront into pupil and focal plane components is key to this work. Our key insight was as follows. At any point in the image plane, we can use the image to constrain the pupil wavefront at that position - we deem this the \textit{local} wavefront. Then we can interpolate the focal, or \textit{global}, wavefront from all the local estimates. In section \ref{sec:decomposing}, we describe techniques that are very well suited for each of these subproblems. Before then, we summarize recent progress in neural network based phase estimation.
\section{Neural Network Based Phase Estimation}
The potential for neural networks to learn the non-linear mapping between intensity patterns and aberrations in the pupil plane was first recognized in 1990 \cite{1990NatureMLAO}. Shortly afterwards, this potential was realized as neural networks were deployed to detect turbulence induced distortion on the Multiple Mirror Telescope \cite{1991NatureMLAO} and to detect aberrations in the primary mirror of the Hubble Space Telescope \cite{1993HubbleMLAO}. Others expanded this concept to predict more wavefront components \cite{1992SPIE.1706..113J}, incorporate temporal history \cite{1996ESOC...54...95L,1997ApOpt..36..675M}, compare reconstruction methods \cite{2006OExpr..14.6456G}, and better characterize atmospheric turbulence \cite{2008ISTSP...2..624W}.
In the past decade, convolutional neural networks (CNNs) \cite{726791} have re-emerged and spurred dramatic advances in computer vision \cite{10.5555/2999134.2999257, 10.5555/2999792.2999897, 10.1007/s11263-015-0816-y,7780459}. This has created new possibilities for wavefront sensing in astronomy. In \cite{2018OptL...43.1235P}, the authors created a CNN that could estimate the wavefront from a single PSF image. They used these estimates as initial starting points in a gradient-based optimization and showed this was superior to using random samples. \cite{2019OExpr..27..240N} showed wavefront sensing performance could be improved by introducing a preconditioner to broaden the PSF and create more intensity structure for the neural network to exploit. This brings up interesting new design possibilities for wavefront sensors. While conventional Lyot-based low order wavefront sensing methods have a limited dynamic range due to their linear recovery, \cite{2020OExpr..2826267A} showed that a CNN can extend the aberration range over which the wavefront can be estimated by an order of magnitude.
Previous work on machine-learning-based wavefront sensing focuses on sensing the full wavefront aberration. Here we focus on sensing the optics wavefront, across the field of view, in the midst of the dominant atmospheric contribution. This problem presents new challenges, such as how to best aggregate intensity information from throughout the field of view to suppress the spatially correlated error due to the turbulence contribution. Our approach comprises only two steps, is easy to characterize, and can process each donut image in 6 \textit{milliseconds}.
\section{Wavefront Estimation Framework}
\label{sec:decomposing}
The optics wavefront $W_{\text{opt}}$ is a function of two separate planes: the pupil plane parameterized by $(u,v)$ and the focal plane parameterized by $(x,y)$. We use the double Zernike polynomial basis \cite{doublezernike} to represent the optics wavefront,
\begin{equation}W_{\text{opt}}(u,v,x,y) = \sum_{i=1}^k\sum_{j=1}^m\beta_{ij}Z_i(x,y)Z_j(u,v)\end{equation}
\noindent where $\beta_{ij}$ are the coefficients, $Z_i$ are circular Zernike polynomials over the focal plane, and $Z_j$ are annular Zernike polynomials over the pupil. The goal of wavefront sensing is to estimate these coefficients $\beta_{ij}$ from the $n$ donut images $D_i$ positioned across the wavefront sensors (see Figure \ref{fig:focalplane}). Let the position of donut $i$ be $x_i,y_i$ and the defocus offset of the corresponding sensor be $z_i$. The wavefront sensing problem is to find $f$ such that
\begin{equation}\beta = f((D_1,x_1, y_1, z_1), \dots, (D_n, x_n, y_n, z_n))\end{equation}
We break this into two subproblems.
\begin{enumerate}
\item Estimate the local wavefront with a CNN at each donut position.
\item Interpolate the global wavefront from the local wavefront estimates across the focal plane.
\end{enumerate}
\noindent These steps are outlined in Figure \ref{fig:twostep}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm, keepaspectratio]{figs/new_paradigm/twostep.png}
\caption[Two Step Approach To Wavefront Sensing]{In the first step, a convolutional neural network (CNN) processes each donut crop on the four wavefront sensors. In the second step, from each local wavefront coefficient we interpolate three global wavefront coefficients with least squares optimization (LS).}
\label{fig:twostep}
\end{figure}
\subsection{Estimating Local Wavefronts}
In the first subproblem, we estimate the total local wavefront $w_{\text{tot}}(u,v)$ from donut $D_i$ at position $x_i, y_i, z_i$. The intensity in the donut image is related to the total local wavefront by the Fraunhofer diffraction integral (equation \ref{eqn:fraunhofer}). We represent the local wavefront in a basis of annular Zernike polynomials over the pupil, such that the total local wavefront for donut $i$ at position $x_i, y_i$ is
\begin{equation}
w_{\text{tot}}(u,v) = \sum_j \alpha_{ij}Z_j(u,v)
\end{equation}
Convolutional neural networks (CNNs) are particularly well suited for processing images and learning nonlinear mappings. We develop a CNN $\varphi$ to solve the inverse problem of estimating $\alpha_{ij}$ for $j = 1 \dots m$ from $(D_i,x_i,y_i,z_i)$. In chapter \ref{chap:cnn} we describe the implementation of this model in detail.
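The model is specified in detail in chapter \ref{chap:cnn}; purely as an illustrative sketch, with layer counts and sizes that are assumptions rather than the real architecture, a network of this shape maps a donut crop and its metadata to pupil Zernike coefficients:
\begin{verbatim}
import torch
import torch.nn as nn

class DonutWavefrontNet(nn.Module):
    # Illustrative sketch only: layer sizes are assumptions.
    def __init__(self, n_zernikes=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + 3, 128), nn.ReLU(),
            nn.Linear(128, n_zernikes),
        )

    def forward(self, donut, meta):
        # donut: (batch, 1, H, W) cropped image; meta: (batch, 3) holding
        # the field position (x, y) and the defocus offset z of the sensor.
        h = self.features(donut)
        return self.head(torch.cat([h, meta], dim=1))
\end{verbatim}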
\subsection{Interpolating the Optics Wavefront}
In the second subproblem, we aggregate the local estimates from the first subproblem to constrain $\beta$. The total local wavefront at position $x_i,y_i$ is related to the optics wavefront via
\begin{equation}w_{\text{tot}}(u,v) = W_{\text{opt}}(u,v|x_i,y_i) + \epsilon(u,v|x_i,y_i)\end{equation}
where $\epsilon$ represents the atmospheric turbulence contribution to the wavefront. Let $\mathcal{Z}$ be defined such that $\mathcal{Z}_{ij} = Z_j(x_i,y_i)$. Then for $i = 1,\dots,m$ we have
\begin{equation}
\alpha e_i = \mathcal{Z} \beta e_i + \epsilon
\end{equation}
where $e_i$ is the $i$th unit vector. Then combining the $\alpha$ from the previous subproblem, and computing the corresponding $\mathcal{Z}$, allows us to solve for $\beta$,
\begin{equation}
\beta = \text{argmin}_{\beta}\left \{\sum_{i=1}^m \ell(\alpha e_i, \mathcal{Z} \beta e_i)\right\}
\end{equation}
where $\ell$ is a convex loss function. Algorithm \ref{alg:main} shows the pseudocode.
\begin{algorithm}
% \SetKwInOut{Input}{Input}
% \SetKwInOut{Output}{Output}
% \underline{function Estimate Optics Wavefront}\\
% \Input{combined wavefront sensor image $I \in \mathbb{R}^{N\times N}$}
% \Output{optics wavefront $\beta \in \mathbb{R}^{k \times m}$}
\text{given image} $I \in \mathbb{R}^{N \times N}$\\
\text{initialize local wavefront estimate} $\alpha \in \mathbb{R}^{n \times m}$\\
\text{initialize global Zernike basis} $\mathcal{Z} \in \mathbb{R}^{n \times k}$\\
\For{\text{donut} $i$ \text{ in } 1\dots n}
{
$D_i = \text{Crop}(I, x_i, y_i)$\\
$\alpha[i,:] = \varphi(D_i, x_i, y_i, z_i)$\\
\For{\text{global Zernike} $j$ \text{ in } 1\dots k}
{
$\mathcal{Z}[i,j] = Z_j(x_i, y_i)$
}
}
\text{initialize optics wavefront} $\beta \in \mathbb{R}^{k \times m}$\\
\For{\text{local Zernike } $i$ \text{ in } $1 \dots m$}
{
$\beta[:,i] = \text{argmin}_{\beta[:,i]}\ \{\ell(\alpha[:,i], \mathcal{Z}\beta[:,i])\}$\\
}
\text{return } $\beta$\\
\caption{estimates the optics wavefront from donut images.}
\label{alg:main}
\end{algorithm}
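A minimal sketch of this second stage with $k = 3$ focal plane Zernikes and a plain $\ell_2$ loss, assuming the donut positions are normalized to the unit circle and the Noll convention for the first three polynomials:
\begin{verbatim}
import numpy as np

def interpolate_optics_wavefront(alpha, x, y):
    # alpha : (n, m) local pupil-Zernike estimates, one row per donut.
    # x, y  : (n,) donut field positions on the unit disk.
    # Returns beta : (3, m) global double Zernike coefficients.
    # First three Noll Zernikes over the focal plane: piston, tip, tilt.
    Z = np.column_stack([np.ones_like(x), 2.0 * x, 2.0 * y])
    # Ordinary least squares; lstsq solves all pupil Zernikes jointly
    # because it accepts a matrix right-hand side. A robust loss could
    # be substituted for the plain l2 loss.
    beta, *_ = np.linalg.lstsq(Z, alpha, rcond=None)
    return beta
\end{verbatim}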
The dominant source of error is the atmospheric turbulence contribution to the wavefront. This error is correlated on scales of arcminutes. By processing donuts with reasonable separation and between different wavefront sensors we are able to suppress this error by roughly a factor of $1 / \sqrt{n}$ where $n$ is the number of donuts used.
There are two parameters of our algorithm that must be set based on the telescope: the number of Zernike coefficients to use for the pupil $m$, and the number of Zernike coefficients to use for the focal plane $k$. For the Rubin Observatory we use Zernikes $Z_4$ through $Z_{21}$ for the pupil plane. The first three coefficients do not impact image quality, so we exclude them. We truncate the basis at $Z_{21}$, a convention set by \cite{2015Xin}, as the higher order terms have very small coefficients in practice. We use $Z_1$ through $Z_3$ for the focal plane. Our simulations show that 90\% of the optics wavefront is contained in this truncated basis.
There are two benefits to dividing the wavefront estimation problem into these two subproblems that are worth highlighting. The first is the useful intermediate data products. The local wavefront coefficients $\alpha$, which are estimated in the first subproblem, are physically meaningful. Telescope operators can track them during operations and gain further insight into the system. This adds an additional layer of transparency and robustness.
The second benefit is that it makes deep learning approaches feasible. Deep neural networks must be trained on large datasets to avoid overfitting. The input to the original problem is four wavefront sensor images, or up to thousands of donut images. The raytracing necessary to simulate even a single input sample is computationally expensive. In our first subproblem however, the input is only a single donut image. This reduces the computation required to produce a training sample by three orders of magnitude and makes it possible to generate simulated datasets that are sufficient for training deep neural networks. In chapter \ref{chap:cnn}, we highlight the power of these models.
\chapter{Random Search}
\section{Introduction}
This chapter focuses on the performance and characteristics of Random Search. The experimental conditions are first presented before detailing the results with some discussion on aspects of the solutions found. The degree to which wrapping occurs in the successful trials is examined and the impact on the success rate of its removal is also looked at.
One of the key objectives of this chapter is to provide some insight into the relative difficulty of the selected problems. This is based on the assumption that Random Search as a method of sampling the search space provides some indication of the solution density associated with a particular problem.
\section{Search Strategy Options}
For Random Search, individuals of random length and random value are created and evaluated. The size of the sample is fixed at 25000, which is the number of evaluations of the objective function that have been allowed per trial to each of the search methods evaluated in this research.
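A minimal sketch of this sampling loop; the objective function and codon range below are hypothetical placeholders rather than the actual experimental harness:
\begin{verbatim}
import random

def random_search(evaluate, max_length=100, variation=0.1,
                  max_codon=255, evaluations=25000):
    # Sample random genomes (lists of integer codons) of random length and
    # keep the best individual found, assuming higher fitness is better.
    best_genome, best_fitness = None, float("-inf")
    for _ in range(evaluations):
        length = random.randint(int(max_length * (1 - variation)),
                                int(max_length * (1 + variation)))
        genome = [random.randint(0, max_codon) for _ in range(length)]
        fitness = evaluate(genome)
        if fitness > best_fitness:
            best_genome, best_fitness = genome, fitness
    return best_genome, best_fitness
\end{verbatim}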
\section{Experimental Conditions}
Table~\ref{rs_param_table} shows the parameters used to configure Random Search for the trials.
For Symbolic Integration, Santa Fe Trail and Blocks a maximum genome length of 100 was used, while Symbolic Regression used a figure of 200. The selected values for maximum length were influenced by a consideration of the grammar used and the likely nature of the target expression for each of the problems.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Parameter &\multicolumn{4}{l|}{Problems}\\
\cline{2-5}
& Sym Int & Santa Fe & Blocks & Sym Reg \\
\hline
Number of Trials & 1000 & 1000 & 1000 & 1000 \\
Number of Objective & & & & \\
Function Evaluations & 25000 & 25000 & 25000 & 25000 \\
Initial Genome Length & 100 & 100 & 100 & 200 \\
Initial Genome Length & & & & \\
Variation Range & 10\% & 10\% & 10\% & 10\% \\
Wrapping & on & on & on & on \\
\hline
\end{tabular}
\caption{\label{rs_param_table} Parameters used to configure Random Search.}
\end{center}
\end{table}
\section{Results}
The results of the trials are shown in Table~\ref{rs_results_table}. Symbolic Integration proves to be the easiest problem to solve with a success rate of 40\%. The Santa Fe Trail also scores a high level of success with a figure of 30\% while Blocks is solved in 24\% of the attempts. Symbolic Regression was not solved in any of the trials.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Problem & Successful Runs \\
\hline
Symbolic Integration & 40\% \\
Santa Fe Trail & 30\% \\
Blocks & 24\% \\
Symbolic Regression & 0\% \\
\hline
\end{tabular}
\caption{\label{rs_results_table} Results from Trials of Random Search.}
\end{center}
\end{table}
A significant aspect of these results is the insight they provide into the relative difficulty of the problems. The success rates for Symbolic Integration, Santa Fe and Blocks would suggest a high density of solutions within the search space for these problems.
A more detailed examination of Random Search in the case of Symbolic Integration and the Santa Fe Trail indicates that there is no revisiting of previously encountered individuals; that is, each of the 25000 samples is unique. This lack of duplication is maintained even when one reduces the maximum permitted codon value to the minimum possible, that is, the number of productions in the grammar (i.e.\ removal of Genetic Code Degeneracy, see Section~\ref{degeneracy}), for example fixing the maximum codon value to 2 in the case of Santa Fe and 3 in the case of Symbolic Integration.
Another emerging feature from these results is the correlation between average number of expressed codons in a solution and success rates. Symbolic Integration proves to be the easiest of the problems to solve, requiring on average just 14 codons (see Table~\ref{rs_results_analysis_table}) while Santa Fe and Blocks require on average 46 and 69 codons respectively.
\section{Characteristics of Solutions found by Random Search}
Table~\ref{rs_results_analysis_table} shows some of the characteristics of the solutions found by Random Search. Symbolic Integration requires on average only 14 codons, while Santa Fe and Blocks require 46 and 69 respectively. Wrapping occurs in 43\% of Santa Fe solutions and 41\% of Blocks solutions, while none of the Symbolic Integration solutions use wrapping. Wrapping on the Blocks problem often results in multiple wrap events on the same genome, averaging two wrap events per successful solution. Santa Fe averages just one wrap event for successful solutions.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Feature & Sym Int & Santa Fe & Blocks & Sym Reg \\
\hline
Avg Number of Codons & & & & \\
in Solution & 57 & 57 & 58 & n/a \\
Avg Number of expressed & & & & \\
Codons in Solution & 14 & 46 & 69 & n/a \\
Percentage of Solutions & & & & \\
featuring Wrapping & 0\% & 43\% & 41\% & n/a \\
\hline
\end{tabular}
\caption{\label{rs_results_analysis_table} Analysis of Characteristics from Solutions found by Random Search.}
\end{center}
\end{table}
\section{Impact of wrapping}
Wrapping appears prominently in the Santa Fe and Blocks problems with 43\% of the Santa Fe solutions and 41\% of Blocks solutions featuring wrapping.
Symbolic Integration does not use wrapping in any of the solutions found by Random Search. An examination of the average number of expressed codons used in solutions to the Symbolic Integration problem shows it to be quite short, at 14, which contrasts with the figure of 57 for the average number of codons used in a successful genome. This ratio of actual codons provided to actual codons required is greatest for Symbolic Integration. Comparing this same ratio for Santa Fe and Blocks reveals that although both problems use wrapping at similar levels (41\%--43\%), the ratio for Blocks is actually less than one, indicating a much higher number of wrap events. An analysis of the Blocks solutions supports this, revealing up to four wrap events within a single solution on occasion.
Column 2 of Table~\ref{rs_no_wrap_results_table} shows the impact on success rates of removing wrapping. Symbolic Integration, which showed no dependency on wrapping, remains essentially unchanged, while Santa Fe and Blocks drop by 15\% and 7\% respectively.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Problem & Successful & Change in & Avg Number & Avg Number \\
& Runs & Success Rate & of Codons & Codons Used \\
\hline
Symbolic Int. & 41\% & +1\% & 53 & 15 \\
Santa Fe Trail & 15\% & -15\% & 63 & 37 \\
Blocks & 17\% & -7\% & 65 & 39 \\
Symbolic Reg. & 0\% & n/a & n/a & n/a \\
\hline
\end{tabular}
\caption{\label{rs_no_wrap_results_table} Results from 1000 trials of Random Search with Wrapping disabled on the problem set.}
\end{center}
\end{table}
\section{Summary}
In this chapter we have looked at the efforts of Random Search in solving the problem set. Symbolic Integration, Santa Fe Trail and Blocks were solved with surprisingly high success rates, while all trials on Symbolic Regression and Spirals were unsuccessful. Wrapping featured prominently in the results for Santa Fe and Blocks, and removing wrapping causes a drop in the success rate for these two problems. If we regard Random Search as a means of providing insight into the density of solutions in the search space, then we can conclude that Symbolic Integration, requiring an average of just 14 codons, has the highest density of solutions, making it the easiest of the selected problems. Santa Fe and Blocks also show high solution densities, requiring an average of 46 and 69 codons respectively. The results suggest that Symbolic Regression is a much more difficult problem, remaining unsolved by random search.
\section{Introduction}
\label{sec:introduction}
\begin{frame}{Why \glsentryshort{jfnk}?}
\begin{itemize}
\item Robust method for solving general nonlinear equations.
\item $q$-quadratic convergence rate at termination \cite{textbookkelley}.
\item Analytic form of Jacobian not required.
\end{itemize}
\end{frame}
\begin{frame}{\glsentryshort{jfnk} Challenges}
\begin{itemize}
\item \gls{jfnk} methods require residual form.
\item Requires computationally expensive Jacobian-vector product.
\item Typically, an optimized \gls{pi} method is similarly efficient.
\end{itemize}
\end{frame}
\section{Introduction}
%chris's intro stuff
\subsection{Emma Watson and Malware}
On September 10th 2012, McAfee released a report detailing a list of the top 20
celebrities that it believed were the most risky to search for on the internet\cite{mac-watson}.
The report specifies a number of threats from risky web sites, such as malware,
phishing, and spam, and determined that Emma Watson was the most risky celebrity
to search for on the web, with a more than 12.6\% chance of a site being malicious
when searching for her name combined with phrases such as ``free download''.
The hypothesis suggested in the McAfee report is that trending topics have
malicious web pages crafted specifically to target the topic, and this is the
hypothesis that will be investigated in the project.
Before investigating if malware is targeted at trending topics, it is necessary to
investigate exactly what constitutes a trending topic. ``Trending topic'' is a term
that originates from Twitter, where statistics are collected on users tweeting
with a given hashtag and used to determine popular hashtags, presented in the
user interface a ``trending topic''. When discussing trending topics in this
report, other sources of trends and popular subjects will also be considered as
trending topics.
To make it possible to investigate the hypothesis proposed above a framework
will be designed and built to automatically gather trends, search for URLs
relating to the trends, and then attempt to detect whether the URL is hosting
malicious content. The data will then be collected into a database, allowing the
hypothesis to be tested. Results will be reported using a web interface that
will also be capable of monitoring the status of the malware scanning system.
%nafiseh's intro stuff
\input{problem-definition}
\documentclass[a4paper, 11 pt]{amsart}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{aliascnt}
\usepackage{mathtools}
\usepackage{geometry}
\usepackage{fancyhdr}
\usepackage{euscript}
\usepackage{tikz,tkz-euclide,tikz-cd}
\usepackage{float}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{csquotes}
\usetkzobj{all}
%Proper open/closed sub/supset-notation
\newcommand{\osub}{\overset{\scriptscriptstyle{\mathrm{open}}}{\subset}}
\newcommand{\csub}{\overset{\scriptscriptstyle{\mathrm{closed}}}{\subset}}
\newcommand{\osup}{\overset{\scriptscriptstyle{\mathrm{open}}}{\supset}}
\newcommand{\csup}{\overset{\scriptscriptstyle{\mathrm{closed}}}{\supset}}
\newcommand{\cpsub}{\overset{\scriptscriptstyle{\mathrm{compact}}}{\subset}}
\newcommand{\cpsup}{\overset{\scriptscriptstyle{\mathrm{compact}}}{\supset}}
%Vector styles -begin-
\renewcommand{\v}[1]{\boldsymbol{#1}}
\newcommand{\vf}[1]{\boldsymbol{\bar{#1}}}
\newcommand{\ve}[1]{\hat{\boldsymbol{#1}}}
\newcommand{\vb}[1]{\underline{\boldsymbol{#1}}}
%Vector styles -end-
%Custom made symbols (proper ones -begin-
\newcommand*{\defeq}{\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt
\hbox{$\raisebox{-0.200ex}{\scriptsize.}$}\hbox{\scriptsize.}}}
=} %Proper :=
\newcommand*{\eqdef}{=\mathrel{\vcenter{\baselineskip0.45ex \lineskiplimit0pt
\hbox{$\raisebox{0.10ex}{\scriptsize.}$}\hbox{\scriptsize.}}}
} %Proper =:
\newcommand{\co}{\colon\thinspace} %Proper colon
\newcommand{\st}{\vert\thinspace} %Proper "such that"
\newcommand{\sca}{\raisebox{.3ex}{\tiny$\ \bullet\ $}} %Proper scalar product
%Custom made symbols (proper ones) -end-
%Misc symbols -begin-
\renewcommand{\d}{\partial}
\newcommand{\del}{\nabla}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\renewcommand{\P}{\mathbb{P}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\e}{\varepsilon}
\newcommand{\f}{\varphi}
\newcommand{\E}{\mathbb{E}}
\newcommand{\RP}{\mathbb{R}\mathrm{P}}
%Misc symbols -end-
%Custom defined math operators -begin-
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\lk}{lk}
\DeclareMathOperator{\im}{im}
\DeclareMathOperator{\id}{id}
\DeclareMathOperator{\ind}{ind}
\DeclareMathOperator{\codim}{codim}
%Custom defined math operators -end-
%Parenthesis and arrows -begin-
\newcommand{\ekviv}{\ \Leftrightarrow \ }
\newcommand{\imp}{\ \Rightarrow \ }
\newcommand{\paren}[1]{\left( #1 \right)}
\newcommand{\parenb}[1]{\left[ #1 \right]}
\newcommand{\parenm}[1]{\left\{ #1 \right\}}
\newcommand{\ip}[2]{\left\langle #1, #2 \right\rangle}
\newcommand{\absv}[1]{\left\vert #1 \right\vert}
\newcommand{\norm}[1]{\left\Vert #1 \right\Vert}
\newcommand{\hak}[1]{\left \langle #1 \right \rangle}
\newcommand{\slfrac}[2]{\left. #1 \middle / #2 \right .}
%Parenthesis and arrows -end-
\newcommand{\rad}[2]{#1_{#2\raisebox{.1ex}{\tiny$ \bullet$}}} %Row of a matrix notation
\newcommand{\kol}[2]{#1_{\raisebox{.1ex}{\tiny$ \bullet$} #2}} %Column of a matrix notation
\newtheorem{theorem}{Theorem}
\newaliascnt{lemma}{theorem}
\newtheorem{lemma}[lemma]{Lemma}
\aliascntresetthe{lemma}
\providecommand{\lemmaautorefname}{Lemma}
\newaliascnt{proposition}{theorem}
\newtheorem{proposition}[proposition]{Proposition}
\aliascntresetthe{proposition}
\providecommand{\propositionautorefname}{Proposition}
\newaliascnt{corollary}{theorem}
\newtheorem{corollary}[corollary]{Corollary}
\aliascntresetthe{corollary}
\providecommand{\corollaryautorefname}{Corollary}
\theoremstyle{definition}
\newaliascnt{definition}{theorem}
\newtheorem{definition}[definition]{Definition}
\aliascntresetthe{definition}
\providecommand{\definitionautorefname}{Definition}
\newaliascnt{example}{theorem}
\newtheorem{example}[example]{Example}
\aliascntresetthe{example}
\providecommand{\exampleautorefname}{Example}
\newaliascnt{exercise}{theorem}
\newtheorem{exercise}[exercise]{Exercise}
\aliascntresetthe{exercise}
\providecommand{\exerciseautorefname}{Exercise}
\theoremstyle{remark}
\newaliascnt{remark}{theorem}
\newtheorem{remark}[remark]{Remark}
\aliascntresetthe{remark}
\providecommand{\remarkautorefname}{Remark}
\numberwithin{equation}{section}
\makeindex
\pagestyle{fancy}
\lhead{Dan Lilja}
\rhead{Validated Numerics \\
Computer lab 1} % course
\begin{document}
\thispagestyle{fancy}
\hfill
\section*{Further comments}
\subsection*{Exercise 5a}
Using monotonicity of the integrand allows us to find rigorous bounds by simply
computing lower and upper Riemann sums.
Since we haven't yet implemented rigorous versions of the standard functions I
use the ones in \texttt{cmath} for now.
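A minimal sketch of the idea, assuming a monotonically decreasing integrand; the actual computation additionally controls the rounding direction, which plain floating point does not:
\begin{verbatim}
def riemann_bounds(f, a, b, n):
    # For a monotonically decreasing integrand f on [a, b], left endpoints
    # give an upper Riemann sum and right endpoints a lower Riemann sum.
    h = (b - a) / n
    upper = sum(f(a + i * h) * h for i in range(n))
    lower = sum(f(a + (i + 1) * h) * h for i in range(n))
    return lower, upper

# Example with a hypothetical decreasing integrand:
# lower, upper = riemann_bounds(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 10000)
\end{verbatim}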
\subsection*{Exercise 5b}
In solving this exercise I perform two changes of variables. The first change
of variables is $ y=x^{2} $, transforming the integral as
\begin{align*}
\int_{1}^{\infty}\sin(x^{2})x^{-3}dx
& = \dfrac{1}{2}\int_{1}^{\infty}\sin(y)y^{-2}dy \, .
\end{align*}
From here I divide the integral into three parts that I call the head, the main
part and the tail. These are, respectively,
\begin{align*}
\dfrac{1}{2} & \int_{1}^{\frac{\pi}{2}}\sin(y)y^{-2}dy \, , \\
\dfrac{1}{2} & \int_{\frac{\pi}{2}}^{256\frac{3\pi}{2}}\sin(y)y^{-2}dy \, ,
\\
\dfrac{1}{2} & \int_{256\frac{3\pi}{2}}^{\infty}\sin(y)y^{-2}dy \, .
\end{align*}
The head is calculated using the fact that $ \sin(y) $ is increasing and $ y^{-2} $
decreasing, by dividing $ \left[1,\frac{\pi}{2}\right] $ into small pieces and,
on each piece, bounding the value from above by the product of the two functions'
largest values and from below by the product of their smallest values.
For the main part I use the same idea and the fact that $ \sin(y) $ alternates
between increasing and decreasing on the intervals $ \left[
\frac{(1+2k)\pi}{2},\frac{(1+2(k+1))\pi}{2} \right] $. I also perform another
change of variables, $ y=\pi z $, to remove several occurrences of $ \pi $ in
the integrand and the interval boundaries.
Finally, the tail is approximated by simply forgetting the $ \sin $ factor of
the integrand and using
\begin{align*}
\dfrac{1}{2\pi}\int_{256\frac{3}{2}}^{\infty}z^{-2}dz & =
\dfrac{1}{2\pi}\left[ -z^{-1} \right]_{256\frac{3}{2}}^{\infty} \\
& = \dfrac{1}{2\pi\cdot 3\cdot 128} \, .
\end{align*}
Regarding the sharpness of the achieved bounds, just as in my solution of exercise
5a, it depends on the standard implementations of \texttt{sin} and
\texttt{pow} respecting the rounding modes, and there is inherent non-sharpness in
the way I handle the value of $ \pi $.
\appendix
% Bibliographies can be prepared with BibTeX using amsplain,
% amsalpha, or (for "historical" overviews) natbib style.
%\begin{bibdiv}
% \begin{biblist}
%
% \end{biblist}
%\end{bibdiv}
\end{document}
In this section we introduce the general architecture of a game. We then present an example of common timing and synchronization primitives used in DSLs for games, and we show some techniques typically used to implement them. For each technique we list the main drawbacks. Finally, we present our solution to the problem of developing a DSL for games.
\subsection{Preliminaries}
A game engine is usually made up of several interoperating components. All the components use a shared data structure, called the \textit{game state}, for their execution. The two main components of a game are the \textit{logic engine}, which defines how the game state evolves during the game execution, and the \textit{graphics engine}, which draws the scene by reading the updated game state. These two components are executed in lockstep within a function called the \textit{game loop}. The game loop is executed indefinitely, updating the game state by calling the logic engine, and drawing the scene by using the graphics engine. An iteration of the game loop is called a \textit{frame}. Usually a game should run at between 30 and 60 frames per second. This requires both the graphics engine and the logic engine to be high-performance. In this paper we will only take into account the performance of the logic engine, as scripting drives the logic of the game loop. A schematic representation of this architecture can be seen in Figure \ref{fig:game_loop}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{Pictures/game_loop}
\caption{Game loop}
\label{fig:game_loop}
\end{figure}
\subsection{A time and synchronization primitive}
\label{subsec:synchronization}
A common requirement in game DSLs is a statement which allows pausing the execution of a function for a specified amount of time or until a condition is met. We will refer to these statements as \texttt{wait} and \texttt{when}. Such a behaviour can be modelled using different techniques:
\begin{itemize}
\item \textit{Threads} allow solving such synchronization problems, but they are unsuitable for video game development because of memory usage and CPU overhead due to their scheduling.
\item \textit{Finite State Machines} allow representing such concurrent behaviours \cite{CASANOVA2_PAPER} by using a \texttt{switch} control structure to jump to the appropriate state. For instance, in the case of a timed wait, the states are \textit{waiting} and \textit{clear} (when the timer has elapsed). This solution is high-performance, but the logic of the behaviour is lost inside the \texttt{switch} structure.
\item \textit{Strategy pattern} allows representing the instructions of the language as polymorphic data types. Each concurrent structure is implemented by a class which defines the behaviour of the current structure and the next structure to execute. This solution is not high-performance due to virtual calls and the high number of object instantiations.
\item \textit{Monadic DSL} uses a variation of the state monad. It represents scripts with two states: \textit{Done} and \textit{Next}. The bind operator is used to simulate the code interruption. This approach is simple and elegant, but it suffers from the same virtuality problems as the strategy pattern, this time because of the extensive use of lambda expressions.
\item \textit{Compiled DSL} is the most common solution and allows representing the concurrency by using concurrent control structures defined in the language. Compiled DSLs grant high performance and code readability, but they require implementing a compiler or an interpreter for the language.
\end{itemize}
In what follows we show a possible implementation of these statements using the presented techniques. We deliberately ignore the thread-based implementation since, as explained above, threads are not suitable for game development.
\paragraph{Monadic DSL:} A possible approach to building an interpreted DSL is using monads in a functional programming language, as done in \cite{CASANOVA1_PAPER}. A script is a function that takes no input parameters and returns the state of the script after the execution of the current statement. The state can be either \textit{Done}, when the script terminates, or \textit{Next} when the script is still running and we need to pass the rest of the script to be evaluated. We present a definition of the monad in the following code snippet in pseudo-ml:
\begin{lstlisting}
type Script<'a> = Unit -> State<'a>
and State<'a> = Done of 'a | Next of Script<'a>
\end{lstlisting}
\noindent
We can define the \texttt{Return} and \texttt{Bind} operators as follow:
\begin{lstlisting}
let return (x : 'a) : Script<'a> =
fun () -> Done x
let (p : Script<'a>) >>= (k : 'a -> Script<'b>) : Script<'b> =
fun () ->
match p with
| Done x -> k x ()
| Next p' -> Next(p' >>= k)
\end{lstlisting}
\noindent
The \texttt{Return} operator simply returns a script that builds the result of the computation. The \texttt{Bind} operator checks the current state of the script: if the script has terminated then it simply builds the result by executing the last script statement, otherwise it returns the continuation containing the invocation of the \texttt{Bind} on the remaining part of the script. In this framework the \texttt{wait} statement can be implemented as follows (we assume that \texttt{do} is a shortcut for the bind on a function returning \texttt{Script<Unit>}):
\begin{lstlisting}
let yield : Script<Unit> =
fun () -> Next(fun () -> Done ())
let rec waitRecursion (interval : float32,
startingTime : float32) : Script<Unit> =
let t = getTime()
  let dt = (t - startingTime)
if dt < interval then
do yield
    do waitRecursion(interval, startingTime)
else
return ()
let wait (timer : float32) : Script<Unit> =
let t0 = getTime()
do waitRecursion(timer, t0)
let when (predicate : Unit -> bool) : Script<Unit> =
if predicate () then return ()
else
do yield
do when predicate
\end{lstlisting}
\noindent
The \texttt{yield} function simply forces the script to stop for one frame. The \texttt{wait} function recursively checks if the timer has elapsed. If this is not the case, it skips a frame and then re-evaluates; otherwise it returns \texttt{Unit}. \texttt{when} simply skips a frame and keeps re-executing itself until its predicate is met.
\paragraph{Strategy pattern} Implementing \texttt{wait} and \texttt{when} with the strategy pattern requires defining an interface which contains the signature of a method to run the script. Usually scripts need access to the time elapsed between the current frame and the previous one and to a reference to the game state data structure, which are passed as parameters to this method. The method returns the updated script after the current execution. We present the code for the interface in pseudo-C\#:
\begin{lstlisting}
public interface Script
public Script Run(float dt, GameState state);
\end{lstlisting}
We will then model \texttt{wait} and \texttt{when} with two classes implementing such interface. Each of the script commands contains a reference to the next statement to execute.
\begin{lstlisting}
public class Wait : Script
  private Script next;
private float time;
public Script Run(float dt, GameState state)
if (time >= 0)
time -= dt;
return this;
else
return next;
public class When : Script
private Func<bool> predicate;
private Script next;
public Script Run(float dt, GameState state)
if (predicate())
return next;
else
return this;
\end{lstlisting}
\paragraph{Finite state machines:} In this approach we model \texttt{wait} and \texttt{when} as finite state automata. Both statements require to store the state of the automata. \texttt{wait} requires to store a timer whose value is updated at each frame. \texttt{when} stores a predicate to execute and check at every game logic update. \texttt{wait} has three states: (\textit{i}) timer initialization, (\textit{ii}) check timer and update, and (\textit{iii}) timer elapsed. In the first state we initialize the timer and then update it for the first time. The second state is used to check whether the timer has elapsed. The third state is used to continue with the execution after the time has elapsed.
\begin{lstlisting}
//WAIT
public void Update(float dt, GameState gameState)
switch(state)
case -1:
this.t = timer;
state = 0;
goto case 0;
case 0:
      this.t = this.t - dt;
if (this.t <= 0)
state = 1;
        goto case 1;
else
return;
case 1:
//run the code after wait
\end{lstlisting}
\texttt{when} has two states: (\textit{i}) check predicate, (\textit{ii}) predicate satisfied (go on with the execution). The first state checks if the predicate has been satisfied. If that is the case, we jump to the next state, otherwise we pause the execution and we remain in the same state.
\begin{lstlisting}
//WHEN
public void Update(float dt, GameState gameState)
  switch (state)
    case 0:
      if (predicate())
        state = 1;
        goto case 1;
      else
        return;
    case 1:
      //run the code after when
\end{lstlisting}
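To give a feeling for how the \texttt{switch} grows, the following hypothetical sketch (pseudo-C\#, not part of the original code) merges the two machines above into a single script that waits three seconds and then waits for a predicate; every additional synchronization point adds further states to the same \texttt{switch}.
\begin{lstlisting}
//WAIT 3 SECONDS, THEN WAIT FOR A PREDICATE
public void Update(float dt, GameState gameState)
  switch (state)
    case -1:
      this.t = 3.0f;
      state = 0;
      goto case 0;
    case 0:
      this.t = this.t - dt;
      if (this.t <= 0)
        state = 1;
        goto case 1;
      else
        return;
    case 1:
      if (predicate())
        state = 2;
        goto case 2;
      else
        return;
    case 2:
      //run the code after both synchronizations
\end{lstlisting}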
\paragraph{Waiting with a hard-coded compiler:} This approach requires building a compiler, generating the parser with a standard lexer/parser generator. After that, one usually has to define a set of type rules and the operational semantics of the language constructs; these are then implemented in the type checker and the code generator of the compiler.
\subsection{Discussion}
In the previous paragraphs we have seen different techniques employed by developers to implement the timing and synchronization statements commonly used in video games. We now list the advantages and disadvantages of each solution:
\begin{itemize}
\item \textit{Monadic DSL:} this solution is elegant but inefficient, because of the extensive use of lambda expressions in the monad: lambdas are usually implemented as virtual method calls. The code needed to define a monadic DSL is, however, compact.
\item \textit{Strategy Pattern:} this solution is simple but slow, because of the virtual method calls the logic update must perform to run the statements and the high number of object instantiations needed to build the script (one per statement). Moreover, the readability of long scripts is lost in the long chain of nested instantiations. A library supporting scripts implemented with the strategy pattern is, however, compact.
\item \textit{Finite state machines:} this approach is high-performance due to the small overhead of the \texttt{switch} control structure. Unfortunately, for very complex scripts with several nested synchronizations, the logic of the program is lost inside huge \texttt{switch} structures, and the complexity increases drastically with the number of interruptions and synchronizations in the script \cite{AI_GAMES}. Moreover, for each \texttt{wait} we have to maintain a separate timer, and for either of the two control structures we need to store the state of the automaton. Implementing complex synchronizations requires long and complex state machines.
\item \textit{Hard-coded compiler:} this approach is high-performance, as we could generate the state machines presented above during the code generation step, but building, maintaining, and extending a hard-coded compiler is a hard and time-consuming task. The code length increases with the size of the compiler (in terms of modules and lines of code) and the complexity of the language.
\end{itemize}
This situation is summarized in Table \ref{tab:techniques}.
\begin{table}
\small
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Technique & Readability & Performance & Code length \\
\hline
Monadic DSL & \checkmark & \ding{55} & \checkmark \\
\hline
Strategy Pattern & \ding{55} & \ding{55} & \checkmark \\
\hline
Finite state machines & \ding{55} & \checkmark & \ding{55} \\
\hline
Hard-coded compiler & \checkmark & \checkmark & \ding{55} \\
\hline
\end{tabular}
\caption{Pros and cons of script implementation techniques}
\label{tab:techniques}
\end{table}
In this work we propose a different approach to building a game DSL: using a metacompiler, a program which takes as input a language definition and a program written in that language, and generates executable code.
\noindent
Given these considerations, we formulate the following problem statement.
\vspace{0.5cm}
\noindent
\textbf{PROBLEM STATEMENT:}
Given the formal definition of a game DSL, our goal is to use a metacompiler to automate the process of building a compiler for that language in a way that is (\textit{i}) shorter (lines of code), (\textit{ii}) clearer (code readability), and (\textit{iii}) more efficient (execution time) than a hand-made implementation.
"alphanum_fraction": 0.7639674545,
"avg_line_length": 68.6436170213,
"ext": "tex",
"hexsha": "e34b06a54fbda510a5feef671dcd1f7c2e0197e2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "vs-team/Papers",
"max_forks_repo_path": "16. Metacasanova/Sections/problem_statement.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "vs-team/Papers",
"max_issues_repo_path": "16. Metacasanova/Sections/problem_statement.tex",
"max_line_length": 1048,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "vs-team/Papers",
"max_stars_repo_path": "16. Metacasanova/Sections/problem_statement.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z",
"num_tokens": 2980,
"size": 12905
} |
% !TEX root = main.tex
%=====================================================================
\chapter{Bayesian Inference}\label{chap:randomprocesses}
%----------------------------------------------------------------------
%\section{The Bayesian approach}
%----------------------------------------------------------------------
So far we have looked at \emph{frequentist inference}, which assumes that an unknown parameter $\theta$ has a fixed (but unknown) value.
\bit
\it The PMF/PDF of an observation is written as $f(x;\theta)$.
\it The likelihood function is written as $L(\theta;x)$.
\it Estimators such as the MME and MLE claim to estimate the `true' value of $\theta$.
\eit
For \emph{Bayesian inference}, we instead think of an unknown parameter $\theta$ as a \emph{random variable}.
\bit
\it The PMF/PDF of an observation is written as $f(x|\theta)$.
\it The likelihood function is written as $L(\theta|x)$.
\it We seek to estimate the distribution of $\theta$.
\eit
%----------------------------------------------------------------------
\section{Bayes' theorem}
%----------------------------------------------------------------------
% events
If the events $A_1,A_2,\ldots$ form a partition of event $B$, Bayes' theorem states that
\[
\prob(A_j|B)
% = \frac{\prob(B|A_j)\prob(A_j)}{\prob(B)}
= \frac{\prob(B|A_j)\prob(A_j)}{\sum_k \prob(B|A_k)\prob(A_k)}.
\]
% discrete
Let $X$ and $Y$ be discrete random variables taking values in the sets $\{x_1,x_2,\ldots\}$ and $\{y_1,y_2,\ldots\}$ respectively. Because the events $\{Y=y_1\},\{Y=y_2\},\ldots$ form a partition of the event $\{X=x_i\}$, we have % so applying Bayes' theorem we obtain
\[
\prob(Y=y_j|X=x_i)
%= \frac{\prob(X=x_i|Y=y_j)\prob(Y=y_j)}{\prob(X=x_i)}
= \frac{\prob(X=x_i|Y=y_j)\prob(Y=y_j)}{\sum_k \prob(X=x_i|Y=y_k)\prob(Y=y_k)}.
\]
To avoid cluttering the notation with subscripts, we write this as
\[
\prob(Y=y|X=x)
%= \frac{\prob(X=x|Y=y)\prob(Y=y)}{\prob(X=x)}
= \frac{\prob(X=x|Y=y)\prob(Y=y)}{\sum_y \prob(X=x|Y=y)\prob(Y=y)},
\]
where the sum in the denominator is taken over the range of $Y$.
In terms of PMFs, this becomes
\[
f_{Y|X}(y|x)
%= \frac{f_{X|Y}(x|y)f_Y(y)}{f_X(x)}
= \frac{f_{X|Y}(x|y)f_Y(y)}{\sum_y f_{X|Y}(x|y)f_Y(y)}.
\]
% continuous
This extends directly to the case of continuous random variables, the only difference being that the denominator (which is the marginal PDF of $X$) is expressed by an integral:
\[
f_{Y|X}(y|x)
%= \frac{f_{X|Y}(x|y)f_Y(y)}{f_X(x)}
= \frac{f_{X|Y}(x|y)f_Y(y)}{\int f_{X|Y}(x|y)f_Y(y)\,dy}.
\]
% parameter estimation
For Bayesian inference, the unknown parameter $\theta$ takes the role of $Y$ in the above formulation. To simplify the notation we denote the PMF/PDF of $\theta$ by the symbol $\pi$:
\[
\pi(\theta|\mathbf{x}) = \left\{\begin{array}{ll}
\displaystyle\frac{f(\mathbf{x}|\theta)\pi(\theta)}{\sum_{\theta}f(\mathbf{x}|\theta)\pi(\theta)} & \text{\quad ($\theta$ discrete)}, \\[5ex]
\displaystyle\frac{f(\mathbf{x}|\theta)\pi(\theta)}{\int f(\mathbf{x}|\theta)\pi(\theta)\,d\theta} & \text{\quad ($\theta$ continuous)}.
\end{array}\right.
\]
This can also be expressed in terms of likelihood functions,
\[
\pi(\theta|\mathbf{x}) = \left\{\begin{array}{ll}
\displaystyle\frac{L(\theta|\mathbf{x})\pi(\theta)}{\sum_{\theta}L(\theta|\mathbf{x})\pi(\theta)} & \text{\quad ($\theta$ discrete)}, \\[5ex]
\displaystyle\frac{L(\theta|\mathbf{x})\pi(\theta)}{\int L(\theta|\mathbf{x})\pi(\theta)\,d\theta} & \text{\quad ($\theta$ continuous)}.
\end{array}\right.
\]
%----------------------------------------------------------------------
\section{The prior and posterior distributions}
%----------------------------------------------------------------------
%----------------------------------------------------------------------
\subsection*{The prior distribution}
%----------------------------------------------------------------------
% prior
Suppose we have an initial estimate for the distribution of $\theta$, perhaps obtained as a result of some preliminary experiments.
\bit
\it This is called the \emph{prior distribution} of $\theta$, which we denote by $\pi_0(\theta)$.
\eit
An initial point estimate of $\theta$ can be computed from the prior distribution, for example
\bit
\it by the \emph{mean} of the prior distribution: $\hat{\theta} = \expe\big[\pi_0(\theta)\big]$ or
\it by the \emph{mode} of the prior distribution: $\hat{\theta} = \argmax_{\theta}\big[\pi_0(\theta)\big]$.
\eit
If we have no prior knowledge, we should initially consider every value of $\theta$ to be equally likely. For example, if $\theta$ is continuous and all we know is that $\theta$ belongs to some interval $[a,b]$, we should adopt the \emph{uniform} distribution over $[a,b]$ as the prior distribution of $\theta$,
\[
\pi_0(\theta) = \left\{\begin{array}{ll}
1/(b-a) & \text{if } a\leq\theta\leq b, \\
0 & \text{otherwise.}
\end{array}\right.
\]
In this context, the uniform distribution is often called the \emph{na\"{\i}ve} or \emph{non-informative} prior.
%----------------------------------------------------------------------
\subsection*{The posterior distribution}
%----------------------------------------------------------------------
% posterior
Suppose we now obtain some sample data $\mathbf{x}=(x_1,\ldots,x_n)$.
\bit
\it Bayes' theorem can be used to combine the prior distribution with the data.
\it This yields an updated PMF/PDF $\pi_1(\theta)$, called the \emph{posterior distribution} of $\theta$.
\eit
By Bayes' theorem,
%\[
%\pi_1(\theta|\mathbf{x}) = \left\{\begin{array}{ll}
% \displaystyle\frac{f(\mathbf{x}|\theta)\pi_0(\theta)}{\sum_{\theta}f(\mathbf{x}|\theta)\pi_0(\theta)} & \text{\quad ($\theta$ discrete)}, \\[3ex]
% \displaystyle\frac{f(\mathbf{x}|\theta)\pi_0(\theta)}{\int_{\theta}f(\mathbf{x}|\theta)\pi_0(\theta)\,d\theta} & \text{\quad ($\theta$ continuous)}.
%\end{array}\right.
%\]
%
%This can also be expressed in terms of likelihood functions,
%\[
%\pi_1(\theta|\mathbf{x}) = \left\{\begin{array}{ll}
% \displaystyle\frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\sum_{\theta}L(\theta|\mathbf{x})\pi_0(\theta)} & \text{\quad ($\theta$ discrete)}, \\[3ex]
% \displaystyle\frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\int_{\theta}L(\theta|\mathbf{x})\pi_0(\theta)\,d\theta} & \text{\quad ($\theta$ continuous)}.
%\end{array}\right.
%\]
\[
\pi_1(\theta|\mathbf{x})
= \frac{f(\mathbf{x}|\theta)\pi_0(\theta)}{\sum_{\theta}f(\mathbf{x}|\theta)\pi_0(\theta)}
\qquad\text{or}\qquad
\pi_1(\theta|\mathbf{x})
= \frac{f(\mathbf{x}|\theta)\pi_0(\theta)}{\int f(\mathbf{x}|\theta)\pi_0(\theta)\,d\theta}.
\]
In terms of likelihood functions,
\[
\pi_1(\theta|\mathbf{x})
= \frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\sum_{\theta}L(\theta|\mathbf{x})\pi_0(\theta)}
\qquad\text{or}\qquad
\pi_1(\theta|\mathbf{x})
= \frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\int L(\theta|\mathbf{x})\pi_0(\theta)\,d\theta}.
\]
Having obtained the posterior distribution, we can compute an updated estimate of $\theta$, for example
\bit
\it by the mean of the posterior distribution: $\hat{\theta} = \expe\big[\pi_1(\theta)\big]$, or
\it by the mode of the posterior distribution: $\hat{\theta} = \argmax_{\theta}\big[\pi_1(\theta)\big]$.
\eit
% defn: MAP estimator
\begin{definition}
The mode of the posterior is called the \emph{maximum a-posteriori} or \emph{MAP} estimator of $\theta$.
\end{definition}
% remark
\begin{remark}
%The posterior distribution is
%\[
%\pi_1(\theta|\mathbf{x}) =
%\displaystyle\frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\sum_{\theta}L(\theta|\mathbf{x})\pi_0(\theta)}
%\text{\quad or\quad}
%\pi_1(\theta|\mathbf{x}) =
%\displaystyle\frac{L(\theta|\mathbf{x})\pi_0(\theta)}{\int_{\theta}L(\theta|\mathbf{x})\pi_0(\theta)\,d\theta}
%\]
The denominator of the posterior depends only on $\mathbf{x}$, so the posterior is proportional to the likelihood times the prior:
%\begin{align*}
\[
\pi_1(\theta|\mathbf{x}) \propto L(\theta|\mathbf{x})\pi_0(\theta),
%\qquad\text{or}\qquad
%\text{"posterior} \propto \text{likelihood}\times\text{prior"}.
\]
The MAP estimator is the value of $\theta$ that maximizes the numerator $L(\theta|\mathbf{x})\pi_0(\theta)$ of the posterior.
\bit
\it Note that when $\pi_0$ is the uniform distribution, the MAP estimator is just the MLE.
\eit
To compute the mean of $\pi_1(\theta|\mathbf{x})$ we also need to compute the denominator. This is not always easy, which is why the MAP estimator is more widely used in practical applications.
\end{remark}
%% example
%\begin{example}\label{ex:biscuits}
%We have three tins of biscuits. The first tin contains $30$ chocolate and $10$ plain biscuits, the second tin contains $20$ chocolate and $20$ plain biscuits, and the third tin contains $10$ chocolate and $30$ plain biscuits. A tin is chosen at random, and a biscuit is chosen at random from the tin.
%\ben
%\it If a chocolate biscuit is observed, estimate which tin was chosen.
%\een
%The experiment is repeated, but this time two biscuits are chosen at random from the tin.
%\ben\stepcounter{enumi}
%\it If two chocolate biscuits are observed, estimate which tin was chosen.
%\it If one chocolate biscuit and one plain biscuit are observed, estimate which tin was chosen.
%\een
%\end{example}
%
%\begin{solution}
%\ben
%\it % one chocolate
%Let $\Theta = \{\theta_1,\theta_2,\theta_3\}$ where the value $\theta_k$ indicates that tin $k$ was chosen ($k=1,2,3$). Before observing the biscuit, it is reasonable to suppose that each tin is equally likely to be chosen. Thus we adopt the \emph{uniform} prior distribution for $\theta$:
%\[
%\pi_0(\theta) = \frac{1}{3} \quad\text{for all}\quad \theta\in\Theta.
%\]
%Let $A$ be the event that a chocolate biscuit was observed. Then
%\[
%\prob(A|\theta_1)=3/4,\qquad \prob(A|\theta_2)=1/2,\qquad \prob(A|\theta_3)=1/4,
%\]
%or equivalently
%\[
%L(\theta_1| A)=3/4,\qquad L(\theta_2| A)=1/2,\qquad L(\theta_3| A)=1/4.
%\]
%By Bayes' theorem, the posterior distribution is
%\[
%\pi_1(\theta_k|A) = \frac{L(\theta_k| A)\pi_0(\theta_k)}{\sum_{k=1}^3 L(\theta_k| A)\pi_0(\theta_k)}
%\text{\quad\qquad ($k=1,2,3$).}
%\]
%For the first tin,
%\[
%\prob(\theta_1|A) = \frac{3/4\times 1/3}{(3/4\times 1/2) + (1/2\times 1/3) + (1/4\times 1/3)} = \frac{1}{2}.
%\]
%Similar calculations for the second and third tins yield the following posterior:
%\[
%\pi_1(\theta_1) = 1/2,\qquad \pi_1(\theta_2) = 1/3, \qquad \pi_1(\theta_3) = 1/6.
%\]
%The MAP estimate is $\hat{\theta}_{MAP} = \theta_1$, so our best guess is that the first tin was chosen.
%
% % <<
%
%\it % two chocolate
%Let $B$ be the event that two chocolate biscuits are observed,
%\[
%\prob(B|\theta_1)=87/156,\qquad \prob(B|\theta_2)=38/156,\qquad \prob(B|\theta_3)=9/156.
%\]
%If we assume the uniform prior $\pi_0$, we obtain the posterior distribution
%\[
%\pi_1(\theta_1|B) = 87/134,\qquad \pi_1(\theta_2|B) = 38/134, \qquad \pi_1(\theta_3|B) = 9/134.
%\]
%The MAP estimator again suggests that the first tin was chosen.
%
%\it % one chocolate, one plain
%Let $C$ be the event that one chocolate and one plain biscuit are observed,
%\[
%\prob(C|\theta_1)=60/156,\qquad \prob(C|\theta_2)=80/156,\qquad \prob(C|\theta_3)=60/156.
%\]
%If we assume the uniform prior $\pi_0$,
%\[
%\pi_1(\theta_1|C) = 3/10,\qquad \pi_1(\theta_2|C) = 4/10, \qquad \pi_1(\theta_3|C) = 3/10.
%\]
%This time, the MAP estimator leads us to assert that the second tin was chosen.
%\een
%\end{solution}
%----------------------------------------------------------------------
%\section{Example}
%----------------------------------------------------------------------
% example
\begin{example}\label{ex:biscuits}
Suppose we have three tins of biscuits. The first tin contains $30$ chocolate and $10$ plain biscuits, the second tin contains $20$ chocolate and $20$ plain biscuits, and the third tin contains $10$ chocolate and $30$ plain biscuits. A tin is selected at random, and a biscuit is chosen at random from the tin.
\ben
\it If a chocolate biscuit is chosen, estimate which tin was selected.
\een
The biscuit is replaced, then a biscuit is again chosen from the tin.
\ben\stepcounter{enumi}
\it If a chocolate biscuit is chosen, update your estimate regarding which tin was selected.
\it If a plain biscuit is chosen, update your estimate regarding which tin was selected.
\een
\end{example}
\begin{solution}
Let $\Theta = \{\theta_1,\theta_2,\theta_3\}$ where the value $\theta_k$ indicates that tin $k$ was selected ($k=1,2,3$). Before the biscuit is chosen, it is reasonable to suppose that each tin is equally likely to be selected. Thus we adopt the \emph{uniform} prior distribution for $\theta$:
\[
\pi_0(\theta) = \frac{1}{3} \quad\text{for all}\quad \theta\in\Theta.
\]
\ben
\it % one chocolate
Let $A$ be the event that a chocolate biscuit is chosen. Then
%\[
%\prob(A|\theta_1)=3/4,\qquad \prob(A|\theta_2)=1/2,\qquad \prob(A|\theta_3)=1/4,
%\]
%or equivalently
\[
L(\theta_1| A)=3/4,\qquad L(\theta_2| A)=1/2,\qquad L(\theta_3| A)=1/4.
\]
Using Bayes' theorem, the posterior distribution is
\[
\pi_1(\theta_k|A) = \frac{L(\theta_k| A)\pi_0(\theta_k)}{\sum_{k=1}^3 L(\theta_k| A)\pi_0(\theta_k)}
\text{\quad\qquad ($k=1,2,3$).}
\]
For the first tin,
\[
\pi_1(\theta_1|A) = \frac{3/4\times 1/3}{(3/4\times 1/3) + (1/2\times 1/3) + (1/4\times 1/3)} = \frac{1}{2}.
\]
Similar calculations for the second and third tins yield the following posterior distribution:
\[
\pi_1(\theta_1) = 1/2,\qquad \pi_1(\theta_2) = 1/3, \qquad \pi_1(\theta_3) = 1/6.
\]
The MAP estimate is $\hat{\theta}_{MAP} = \theta_1$, so our best guess is that the first tin was selected.
% <<
\it % two chocolate
Let $B$ be the event that a chocolate biscuit was chosen the second time:
\[
L(\theta_1|B)=3/4,\qquad L(\theta_2|B)=1/2,\qquad L(\theta_3|B)=1/4.
\]
Using $\pi_1$ as our prior distribution, we obtain an updated posterior distribution:
\[
\pi_2(\theta_k|B) = \frac{L(\theta_k|B)\pi_1(\theta_k)}{\sum_{k=1}^3 L(\theta_k| B)\pi_1(\theta_k)}
\text{\quad\qquad ($k=1,2,3$).}
\]
For the first tin,
\[
\pi_2(\theta_1|B) = \frac{3/4\times 1/2}{(3/4\times 1/2) + (1/2\times 1/3) + (1/4\times 1/6)} = \frac{9}{14}.
\]
Similar calculations for the second and third tins yield the following posterior:
\[
\pi_2(\theta_1) = 9/14,\qquad \pi_2(\theta_2) = 4/14, \qquad \pi_2(\theta_3) = 1/14.
\]
The MAP estimator again leads us to estimate that the first tin was selected.
\it % one chocolate, one plain
Let $C$ be the event that a plain biscuit was chosen the second time:
\[
L(\theta_1|C)=1/4,\qquad L(\theta_2|C)=1/2,\qquad L(\theta_3|C)=3/4.
\]
Again using $\pi_1$ as our prior distribution, we obtain an updated posterior:
\[
\pi_2(\theta_k|C) = \frac{L(\theta_k|C)\pi_1(\theta_k)}{\sum_{k=1}^3 L(\theta_k|C)\pi_1(\theta_k)}
\text{\quad\qquad ($k=1,2,3$).}
\]
For the first tin,
\[
\pi_2(\theta_1|C) = \frac{1/4\times 1/2}{(1/4\times 1/2) + (1/2\times 1/3) + (3/4\times 1/6)} = \frac{3}{10}.
\]
Similar calculations for the second and third tins yield the following posterior distribution:
\[
\pi_2(\theta_1) = 3/10,\qquad \pi_2(\theta_2) = 4/10, \qquad \pi_2(\theta_3) = 3/10.
\]
This time, the MAP estimator leads us to estimate that the second tin was selected.
\een
\end{solution}
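As a quick sanity check, the answer to part 2 can also be obtained by processing the two observations in a single batch: starting from the uniform prior, the likelihood of observing two chocolate biscuits (with replacement) is $(3/4)^2$, $(1/2)^2$ and $(1/4)^2$ for the three tins, which is proportional to $9$, $4$ and $1$ and hence gives the posterior $(9/14,\,4/14,\,1/14)$, exactly as before. Sequential updating and batch updating agree because the observations are conditionally independent given the tin.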
%%----------------------------------------------------------------------
%\section{The scientific method}
%%----------------------------------------------------------------------
%
% remark
\begin{remark}%[The Scientific Method]
We can think of the biscuit tins in Example~\ref{ex:biscuits} as competing scientific hypotheses:
\bit
\it The probability assigned to each hypothesis indicates its \emph{relative plausibility}.
\it We update the relative plausibility of each competing hypothesis based on \emph{observation}.
\eit
%\vspace*{2ex}
In this way, Bayesian inference embodies the \emph{scientific method}.
%\ben
%\it Start with an initial set of beliefs about the relative plausibility of various hypotheses.
%\it Collect new data by conducting experiments.
%\it Refine the relative plausibility of the various hypotheses in the light of the new data.
%\it Repeat (2) and (3).
%\een
\end{remark}
\begin{exercise}
Suppose we have three coins $A$, $B$ and $C$ which have probabilities $1/4$, $1/2$ and $3/4$ respectively of showing heads. A coin is chosen at random, and tossed three times. If exactly two heads are obtained, use the maximum a-posteriori (MAP) estimator to estimate which coin was chosen.
\begin{answer}
First we define the parameter $\theta\in\{1,2,3\}$ such that $\{\theta=1\}$ is the event that coin $A$ is chosen, $\{\theta=2\}$ is the event that coin $B$ is chosen, and $\{\theta=3\}$ is the event that coin $C$ is chosen. We should initially assume that each coin is equally likely to be chosen, so we choose the uniform prior distribution:
\begin{align*}
\pi_0(1) & = \prob(\theta=1) = 1/3 \\
\pi_0(2) & = \prob(\theta=2) = 1/3 \\
\pi_0(3) & = \prob(\theta=3) = 1/3
\end{align*}
Let $T$ be the event that exactly two heads are obtained. Then
\begin{align*}
\prob(T|\theta=1) & = 3(1/4)^2(3/4) = 9/64 \\
\prob(T|\theta=2) & = 3(1/2)^2(1/2) = 3/8 \\
\prob(T|\theta=3) & = 3(1/4)(3/4)^2 = 27/64
\end{align*}
The denominator of the posterior is the overall probability of obtaining exactly two heads:
\begin{align*}
\prob(T)
& = \prob(T|\theta=1)\prob(\theta=1) + \prob(T|\theta=2)\prob(\theta=2) + \prob(T|\theta=3)\prob(\theta=3) \\
& = 3(1/4)^2(3/4)(1/3) + 3(1/2)^3(1/3) + 3(3/4)^2(1/4)(1/3) \\
& = 3/64 + 8/64 + 9/64 \\
& = 20/64
\end{align*}
Hence the posterior distribution is given by
\[
\pi_1(\theta) = \frac{\prob(T|\theta)\pi_0(\theta)}{\prob(T)},
\]
from which we obtain
\begin{align*}
\pi_1(1) & = 3/20, \\
\pi_1(2) & = 8/20, \\
\pi_1(3) & = 9/20.
\end{align*}
The MAP estimator (mode of the posterior) is $\theta=3$, which corresponds to coin $C$.
\end{answer}
\end{exercise}
%----------------------------------------------------------------------
\section{The binomial model}
%----------------------------------------------------------------------
A suitable model for estimating the distribution of a parameter in the interval $[0,1]$ is provided by the \emph{beta distribution}.
% definition
\begin{definition}\label{def:beta_distribution}
The beta distribution with parameters $\alpha,\beta>0$ is defined by the PDF
\[
f(x;\alpha,\beta) = \begin{cases}
\displaystyle\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)} & \text{if $0\leq x\leq 1$}, \\
0 & \text{otherwise,}
\end{cases}
\]
where $B(\alpha,\beta)$ is the so-called \emph{beta function},
\[
B(\alpha,\beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt,
\]
which is defined for all $\alpha,\beta>0$.
\end{definition}
%\begin{remark}[Special case]
%If $X\sim\text{Beta}(1,1)$, then $X\sim\text{Uniform}(0,1)$.
%\end{remark}
% lemma
\begin{lemma}
Let $X\sim\text{Beta}(\alpha,\beta)$. Then $\expe(X) = \displaystyle\frac{\alpha}{\alpha+\beta}$ and $\mode(X) = \displaystyle\frac{\alpha-1}{\alpha+\beta-2}$ provided that $\alpha,\beta > 1$.
%For $X\sim\text{Beta}(\alpha,\beta)$,
%\[
%\expe(X) = \frac{\alpha}{\alpha+\beta} \text{\quad and\quad} \var(Y) = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}.
%\]
%and if $\alpha,\beta > 1$,
%\[
%\mode(X) = \frac{\alpha-1}{\alpha+\beta-2}.
%\]
\end{lemma}
% proof
\begin{proof}
Exercise.
\end{proof}
% example
\begin{example}
Let $X\sim\text{Binomial}(n,\theta)$ where $n$ is known, but $0<\theta<1$ is unknown.
\ben
\it We conduct a sequence of $n$ independent trials and observe $k$ successes. Find a suitable prior distribution for $\theta$, compute the posterior distribution, and find its mean and mode.
\it We conduct a further sequence of $n$ independent trials, this time observing $k'$ successes. Compute an updated posterior distribution for $\theta$, and find its mean and mode.
\een
\end{example}
% solution
\begin{solution}
\ben
\it % <<< (i)
Let $f(x|\theta)$ be the PMF of the $\text{Binomial}(n,\theta)$ distribution:
\[
f(x|\theta) = \binom{n}{x}\theta^x(1-\theta)^{n-x}.
\]
Initially we should consider every value of $\theta$ to be equally likely. Thus we adopt the uniform prior distribution for $\theta$.
\[
\pi_0(\theta) = \left\{\begin{array}{ll}
1 & \text{if } \theta\in[0,1] \\
0 & \text{otherwise.}
\end{array}\right.
\]
Given $k$ successes in $n$ trials, the likelihood function is
\[
L(\theta|k) = f(k|\theta) = \binom{n}{k}\theta^k(1-\theta)^{n-k}
\]
The posterior distribution combines the observation with the prior distribution:
\[
\pi_1(\theta) = \pi_1(\theta|k)
= \frac{L(\theta|k)\pi_0(\theta)}{\int L(\theta|k)\pi_0(\theta)\,d\theta}
= \frac{\theta^k(1-\theta)^{n-k}}{\int_0^1\theta^k(1-\theta)^{n-k}\,d\theta}.
\]
We recognise $\pi_1(\theta)$ as the PDF of the $\text{Beta}(\alpha,\beta)$ distribution, with parameters
\[
\alpha=k+1 \text{\quad and\quad}\beta=n-k+1.
\]
\bit
\it The mode of $\pi_1(\theta)$ is $k/n$. This is the MAP estimator of $\theta$.
\it Note that this coincides with the MLE of $\theta$.
\eit
\bit
\it The expected value of $\pi_1(\theta)$ is $(k+1)/(n+2)$.
\it This is approximately equal to the MAP estimator when $k$ and $n$ are both large.
\eit
\it % <<< (ii)
Given $k'$ successes in $n$ trials, the likelihood function is
\[
L(\theta|k') = f(k'|\theta) = \binom{n}{k'}\theta^{k'}(1-\theta)^{n-k'}.
\]
Using $\pi_1$ as the new prior distribution for $\theta$, the new posterior distribution $\pi_2$ is
\[
\pi_2(\theta) =
\pi_2(\theta|k,k')
= \frac{L(\theta|k')\pi_1(\theta)}{\int_0^1 L(\theta|k')\pi_1(\theta)\,d\theta}
= \frac{\theta^{k+k'}(1-\theta)^{2n-(k+k')}}{\int_0^1\theta^{k+k'}(1-\theta)^{2n-(k+k')}\,d\theta}.
\]
We recognise $\pi_2(\theta)$ as the PDF of the $\text{Beta}(\alpha,\beta)$ distribution, with parameters
\[
\alpha=k+k'+1 \text{\quad and\quad}\beta=2n-(k+k')+1.
\]
Hence our updated MAP estimate of $\theta$ is
\[
\hat{\theta}_{MAP} = \frac{k+k'}{2n}.
\]
\bit
\it If $k'<k$, the mode shifts to the left (adjusted down).
\it If $k'>k$, the mode shifts to the right (adjusted up).
\eit
\een
\end{solution}
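To make the update concrete, here is a small numerical illustration (the numbers are invented purely for illustration). Suppose $n=10$ and we observe $k=7$ successes: the posterior is $\text{Beta}(8,4)$, so $\hat{\theta}_{MAP}=7/10=0.7$ while the posterior mean is $8/12\approx 0.67$. If a second run of $n=10$ trials yields $k'=3$ successes, the updated posterior is $\text{Beta}(11,11)$, and the MAP estimate moves down to $(7+3)/20=0.5$.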
%----------------------------------------------------------------------
\section{The exponential model}
%----------------------------------------------------------------------
A suitable model for estimating the distribution of a non-negative parameter $\theta\geq 0$ is provided by the \emph{gamma distribution}.
% definition
\begin{definition}\label{def:gamma_distribution}
The \emph{gamma distribution} with parameters $\alpha,\beta>0$ is defined by the PDF
\[
f(x;\alpha,\beta) = \begin{cases}
\displaystyle\frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x} & \text{for $x>0$}, \\
0 & \text{otherwise.}
\end{cases}
\]
where $\Gamma(\alpha)$ is the so-called \emph{gamma function},
\[
\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1}e^{-t}\,dt,
\quad\text{which is defined for all $\alpha>0$.}
\]
\end{definition}
%% remark
%\begin{remark}[Special cases]
%\bit
%\it If $X\sim\text{Exponential}(\lambda)$, where $\lambda$ is a rate parameter, then $X\sim\text{Gamma}(1,\lambda)$.
%\it If $X\sim\text{Chi-squared}(k)$ then $X\sim\text{Gamma}(k/2,2)$
%\eit
%\end{remark}
% lemma
\begin{lemma}
Let $X\sim\text{Gamma}(\alpha,\beta)$. Then $\expe(X) = \displaystyle\frac{\alpha}{\beta}$ and $\mode(X) = \displaystyle\frac{\alpha-1}{\beta}$ provided that $\alpha > 1$.
%\quad\text{and}\quad
%(2)\quad \mode(X) = \frac{\alpha-1}{\beta}
%\text{ provided that $\alpha > 1$}.
%\]
\end{lemma}
% proof
\begin{proof}
Exercise.
\end{proof}
% example: exponential
\begin{example}
Let $X\sim\text{Exponential}(\lambda)$, where $\lambda>0$ is an unknown rate parameter. Let $X_1,X_2,\ldots,X_n$ be a random sample of observations from the distribution of $X$, and suppose that we adopt the $\text{Gamma}(\alpha,\beta)$ distribution as a prior distribution for $\lambda$, where $\alpha,\beta>0$ are fixed values (perhaps estimated in some preliminary experiments).
\ben
\it Find the mean and mode of the prior distribution.
\it Show that the posterior of $\lambda$ is the $\text{Gamma}(\alpha+n,\beta+\sum_{i=1}^n x_i)$ distribution.
\it Find the mean and mode of the posterior distribution.
\een
\end{example}
% solution
\begin{solution}
Let $f(x|\lambda)$ be the PDF of the $\text{Exponential}(\lambda)$ distribution:
\[
f(x|\lambda) = \left\{\begin{array}{ll}
\lambda\exp(-\lambda x) & \text{for $x>0$}, \\
0 & \text{otherwise.}
\end{array}\right.
\]
The PDF of the prior distribution is
\[
\pi_0(\lambda) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\lambda^{\alpha-1}\exp(-\beta\lambda),
\]
which has mean $\alpha/\beta$ and mode $(\alpha-1)/\beta$.
Let $\boldx$ be a realisation of the sample. The likelihood function is
\[
L(\lambda|\mathbf{x})
= \prod_{i=1}^n f(x_i|\lambda)
= \prod_{i=1}^n \lambda\exp(-\lambda x_i)
= \lambda^n \exp\Big(-\lambda\textstyle\sum_{i=1}^n x_i\Big).
\]
The PDF of the posterior distribution is
\[
\pi_1(\lambda|\mathbf{x})
= \frac{L(\lambda|\mathbf{x})\pi_0(\lambda)}{\displaystyle\int_0^{\infty} L(\lambda|\mathbf{x})\pi_0(\lambda)\,d\lambda}.
\]
The numerator is the product of the likelihood $L(\lambda|\mathbf{x})$ and the prior $\pi_0(\lambda)$:
\begin{align*}
L(\lambda|\mathbf{x})\pi_0(\lambda)
& = \Big[\lambda^n \exp\big(-\lambda\textstyle\sum_{i=1}^n x_i\big)\Big]\left[\displaystyle\frac{\beta^{\alpha}}{\Gamma(\alpha)}\lambda^{\alpha-1}\exp(-\beta\lambda)\right] \\[1ex]
& = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha+n-1}\exp\Big[-\big(\beta+\textstyle\sum_{i=1}^n x_i\big)\lambda\Big]
\end{align*}
%Notice that this resembles the PDF of the $\text{Gamma}\Big(\alpha+n,\beta+\sum_{i=1}^n x_i\Big)$ distribution.
The PDF of the posterior distribution becomes
\[
\pi_1(\lambda|\mathbf{x})
= \frac{\lambda^{\alpha+n-1}\exp\Big[-\big(\beta+\sum_{i=1}^n x_i\big)\lambda\Big]}
{\displaystyle\int_0^{\infty} \lambda^{\alpha+n-1}\exp\Big[-\big(\beta+\sum_{i=1}^n x_i\big)\lambda\Big]\,d\lambda}
\]
To compute the denominator, change the variable of integration to $t = \big(\beta+\sum_{i=1}^n x_i\big)\lambda$. This yields
\[
\int_0^{\infty} \lambda^{\alpha+n-1}\exp\Big[-\big(\beta+\textstyle\sum_{i=1}^n x_i\big)\lambda\Big]\,d\lambda
= \displaystyle\frac{\Gamma(\alpha+n)}{\big(\beta+\sum_{i=1}^n x_i\big)^{\alpha+n}}
\]
Thus the PDF of the posterior distribution is
\[
\pi_1(\lambda|\mathbf{x})
= \frac{\big(\beta+\sum_{i=1}^n x_i\big)^{\alpha+n}}{\Gamma(\alpha+n)}
\lambda^{\alpha+n-1}\exp\Big[-\big(\beta+\sum_{i=1}^n x_i\big)\lambda\Big]
\]
which is the PDF of the $\text{Gamma}\Big(\alpha+n,\beta+\sum_{i=1}^n x_i\Big)$ distribution.
% <<
The mean and mode of $\lambda\sim\text{Gamma}\Big(\alpha+n,\beta+\sum_{i=1}^n x_i\Big)$ are
\[
\expe(\lambda) = \frac{\alpha+n}{\beta+\sum_{i=1}^n x_i}
\text{\quad and\quad}
\text{Mode}(\lambda) = \frac{\alpha+n-1}{\beta+\sum_{i=1}^n x_i}
\text{\quad respectively.}
\]
Hence the MAP estimator of $\lambda$ is
\[
\hat{\lambda}_{MAP} = \frac{\alpha+n-1}{\beta+\sum_{i=1}^n x_i}
\]
\bit
\it When $n=0$, this is simply the mode of the prior distribution, $\text{Gamma}(\alpha,\beta)$.
\it As $n$ increases, the influence of the prior distribution decreases.
\it If we write $\hat{\lambda}_{MAP}$ as
\[
\hat{\lambda}_{MAP}
= \frac{1 + \left(\frac{\alpha-1}{n}\right)}{\frac{1}{n}\sum_{i=1}^n x_i + \left(\frac{\beta}{n}\right)},
\]
we see that $\hat{\lambda}_{MAP}\to\bar{X}^{-1}$ as $n\to\infty$.
\it This is the method-of-moments estimator (MME) of $\lambda$, which is based entirely on the data and takes no account of the prior distribution.
\eit
\end{solution}
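For a small numerical illustration (with invented numbers), take the prior $\text{Gamma}(2,1)$, whose mode is $1$, and suppose $n=5$ observations with $\sum_{i=1}^5 x_i=10$. The posterior is then $\text{Gamma}(7,11)$, so $\hat{\lambda}_{MAP}=6/11\approx 0.55$, which lies between the prior mode and the data-only estimate $\bar{x}^{-1}=0.5$, already much closer to the latter.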
%----------------------------------------
\begin{exercise}
\begin{questions}
\question
Let $X\sim\text{Geometric}(\theta)$ where $0<\theta<1$ is unknown.
\begin{parts}
\part % << (i)
A single experiment yields the observation $k$. Find a suitable prior distribution for $\theta$, compute the corresponding posterior distribution, and find the MAP estimator of $\theta$ for this posterior.
\begin{answer}
Let $f(x|\theta)$ be the PMF of the $\text{Geometric}(\theta)$ distribution:
\[
f(x|\theta) = \theta(1-\theta)^{x-1} \quad\text{for } x=1,2,\ldots
\]
Without any information about $\theta$, we should choose the \emph{na\"{\i}ve} prior:
\[
\pi_0(\theta) = \begin{cases}
1 & \text{if $0\leq\theta\leq 1$,} \\
0 & \text{otherwise.}
\end{cases}
\]
For the observation $X=k$, the likelihood function is
\[
L(\theta|k) = f(k|\theta) = \theta(1-\theta)^{k-1}
\]
The posterior distribution is:
\[
\pi_1(\theta|k)
= \frac{L(\theta|k)\pi_0(\theta)}{\int L(\theta|k)\pi_0(\theta)\,d\theta}
= \frac{\theta(1-\theta)^{k-1}}{\int_0^1\theta(1-\theta)^{k-1}\,d\theta}.
\]
We recognise this as the PDF of the $\text{Beta}(\alpha,\beta)$ distribution, with parameters $\alpha=2$ and $\beta=k$. The mode of the $\text{Beta}(\alpha,\beta)$ distribution is $(\alpha-1)/(\alpha+\beta-2)$, so the MAP estimator is
\[
\hat{\theta}_{MAP} = \frac{1}{k}.
\]
\end{answer}
\part % << (ii)
A second experiment yields the observation $k'$. Compute an updated posterior distribution for $\theta$, and find a new MAP estimator for $\theta$.
\begin{answer}
For the observation $X=k'$, the likelihood function is
\[
L(\theta|k') = f(k'|\theta) = \theta(1-\theta)^{k'-1}
\]
Using $\pi_1$ as the new prior distribution for $\theta$, the new posterior distribution $\pi_2$ is
\[
\pi_2(\theta|k,k')
= \frac{L(\theta|k')\pi_1(\theta)}{\int_0^1 L(\theta|k')\pi_1(\theta)\,d\theta}
= \frac{\theta^2(1-\theta)^{k+k'-2}}{\int_0^1\theta^2(1-\theta)^{k+k'-2}\,d\theta},
\]
which we recognise as the PDF of the $\text{Beta}(\alpha,\beta)$ distribution, with parameters $\alpha=3$ and $\beta=k+k'-1$. Hence the new MAP estimator is
\[
\hat{\theta}_{MAP} = \frac{2}{k+k'}.
\]
\end{answer}
\end{parts}
\question
Let $X\sim\text{Poisson}(\lambda)$, where $\lambda>0$ is unknown, and let $X_1,X_2,\ldots,X_n$ be a random sample of observations from the distribution of $X$. Suppose we adopt the $\text{Gamma}(\alpha,\beta)$ distribution as a prior distribution for $\lambda$, where $\alpha,\beta>0$ are fixed values.
\begin{parts}
\part % << (i)
Show that the MAP estimator of $\lambda$ is given by
\[
\hat{\lambda}_{MAP} = \frac{\alpha-1+\sum_{i=1}^n X_i}{n+\beta}
\]
\begin{answer}
Let $f(x|\lambda)$ be the PMF of the $\text{Poisson}(\lambda)$ distribution:
\[
f(x|\lambda) = \begin{cases}
\frac{\lambda^{x}e^{-\lambda}}{x!} & \text{for $x=0,1,2,\ldots$}, \\
0 & \text{otherwise.}
\end{cases}
\]
The PDF of the prior distribution is
\[
\pi_0(\lambda) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\lambda^{\alpha-1}\exp(-\beta\lambda),
\]
which has mean $\alpha/\beta$ and mode $(\alpha-1)/\beta$.
Let $\boldx=(x_1,x_2,\ldots,x_n)$ be a realisation of the sample. The likelihood function is
\[
L(\lambda|\mathbf{x})
= \prod_{i=1}^n f(x_i|\lambda)
= \prod_{i=1}^n \frac{\lambda^{x_i}e^{-\lambda}}{x_i!}
= \lambda^{\sum_{i=1}^n x_i}\, e^{-n\lambda} \prod_{i=1}^n \frac{1}{x_i!}
\]
The PDF of the posterior distribution is
\[
\pi_1(\lambda|\mathbf{x})
= \frac{L(\lambda|\mathbf{x})\pi_0(\lambda)}{\int_0^{\infty} L(\lambda|\mathbf{x})\pi_0(\lambda)\,d\lambda}.
\]
To find the MAP estimator, we need to find the value of $\lambda$ that maximises the numerator:
\[
L(\lambda|\mathbf{x})\pi_0(\lambda)
= c\lambda^{(\alpha-1+\sum x_i)} e^{-(n+\beta)\lambda}
\quad\text{where}\quad
c = \left(\prod_{i=1}^n \frac{1}{x_i!}\right) \frac{\beta^\alpha}{\Gamma(\alpha)}
\]
Let $g(\lambda)=\lambda^{\alpha-1+\sum x_i} e^{-(n+\beta)\lambda}$. Then
\[
g'(\lambda) = \lambda^{(\alpha-2+\sum x_i)} e^{-(n+\beta)\lambda}\left[(\alpha-1+\sum _{i=1}^n x_i) - \lambda(n+\beta)\right]
\]
Setting $g'(\lambda)$ to zero and solving for $\lambda$, we obtain the MAP estimator
\[
\hat{\lambda}_{MAP} = \frac{\alpha-1+\sum_{i=1}^n X_i}{n+\beta}
\]
as required.
\end{answer}
\part % << (ii)
Comment on the limiting cases (i) $n=0$ and (ii) $n\to\infty$.
\begin{answer}
\bit
\it When $n=0$, $\hat{\lambda}_{MAP}=(\alpha-1)/\beta$ is the mode of the prior distribution.
\it When $n\to\infty$,
\[
\hat{\lambda}_{MAP}
= \frac{\frac{\alpha-1}{n}+\frac{1}{n}\sum_{i=1}^n X_i}{1+\frac{\beta}{n}}
\to \frac{1}{n}\sum_{i=1}^n X_i
\]
As the sample size increases, the effect of the prior decreases, and the MAP estimator approaches the sample mean in the limit as $n\to\infty$.
\eit
\end{answer}
\end{parts}
\question
Let $X_1,\ldots,X_n$ be a random sample from the $N(\mu,\sigma^2)$ distribution, where the mean $\mu$ is unknown but the variance $\sigma^2$ is known. Suppose we adopt the $N(\mu_0,\sigma_0^2)$ distribution as a prior for the unknown mean $\mu$ (where $\mu_0$ and $\sigma_0^2$ are known constants). Compute the maximum a-posteriori (MAP) estimator of $\mu$.
\begin{answer}
Let $\pi_0(\mu)$ denote the prior density function of $\mu$:
\[
\pi_0(\mu) = \frac{1}{\sigma_0\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{\mu-\mu_0}{\sigma_0}\right)^2\right].
\]
For the observed sequence $\mathbf{x}=(x_1,x_2,\ldots,x_n)$, the likelihood function is
\begin{align*}
L(\mu\,|\,\mathbf{x})
& = \prod_{i=1}^n \left(\frac{1}{\sigma\sqrt{2\pi}}\right)\exp\left[-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^2\right] \\
& = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}\exp\left[-\frac{1}{2}\sum_{i=1}^n\left(\frac{x_i-\mu}{\sigma}\right)^2\right].
\end{align*}
The posterior density function of $\mu$ combines the data and the prior:
\begin{align*}
\pi_1(\mu)
& = \frac{L(\mu\,|\,\mathbf{x})\pi_0(\mu)}{\int L(\mu|\mathbf{x})\pi_0(\mu)\,d\mu}.
\end{align*}
The MAP estimator of $\mu$ is the value that maximises the posterior $\pi_1(\mu)$. Since the denominator in the above expression for $\pi_1$ is a constant, it is sufficient to find the value of $\mu$ that maximises the numerator,
\begin{align*}
L(\mu|\mathbf{x})\pi_0(\mu)
= \left(\frac{1}{\sigma_0\sqrt{2\pi}}\right)\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n}
\exp\left[-\frac{1}{2}\sum_{i=1}^n\left(\frac{x_i-\mu}{\sigma}\right)^2 -\frac{1}{2}\left(\frac{\mu-\mu_0}{\sigma_0}\right)^2\right].
\end{align*}
Let
\[
g(\mu) = \exp\left[-\frac{1}{2}\sum_{i=1}^n\left(\frac{x_i-\mu}{\sigma}\right)^2 -\frac{1}{2}\left(\frac{\mu-\mu_0}{\sigma_0}\right)^2\right].
\]
The value of $\mu$ that maximizes $L(\mu|\mathbf{x})\pi_0(\mu)$ also maximizes $g(\mu)$. The first derivative of $g$ with respect to $\mu$ is
\[
g'(\mu) = \left[\frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\mu) - \frac{1}{\sigma_0^2} (\mu-\mu_0)\right]g(\mu).
\]
Setting this equal to zero,
\[
\frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\mu) = \frac{1}{\sigma_0^2}(\mu-\mu_0),
\]
and solving for $\mu$, we obtain the MAP estimator,
\[
\hat{\mu}_{MAP} = \left(\frac{\sigma^2\sigma_0^2}{\sigma^2+n\sigma_0^2}\right)\left(\frac{1}{\sigma^2}\sum_{i=1}^n x_i + \frac{\mu_0}{\sigma_0^2}\right).
\]
\bit
\it This expression can be rearranged to give
\[
\hat{\mu}_{MAP}\left(1 + \frac{n\sigma_0^2}{\sigma^2}\right) = \frac{\sigma_0^2}{\sigma^2}\sum_{i=1}^n x_i + \mu_0.
\]
This shows that $\hat{\mu}_{MAP}=\mu_0$ when $n=0$: with no data, the MAP estimator is equal to the mean of the prior distribution.
\it The expression can also be rearranged to give
\[
\hat{\mu}_{MAP}\left(1 + \frac{\sigma^2}{n\sigma_0^2}\right) = \frac{1}{n}\sum_{i=1}^n x_i + \left(\frac{\sigma^2}{n\sigma_0^2}\right)\mu_0.
\]
This shows that $\hat{\mu}_{MAP} \to \bar{X}$ as $n\to\infty$ (which is independent of the prior).
\eit
\end{answer}
%----------------------------------------
\end{questions}
\end{exercise}
%----------------------------------------------------------------------
In this report we have introduced Markov Chain Monte Carlo algorithms, with a particular focus on Langevin Monte Carlo methods, and tested their performance across a number of metrics and a broad range of parameters. Our main contribution is extending the work of \cite{Brosse18tULA} and producing a \textsc{Python} package that other researchers can use to test their own methods against the algorithms presented here. Besides supporting user-defined methods, the package is written in a way that allows easy extension to different potentials and distributions. We have also extended the visualisation library of Rogozhnikov \cite{rogozhnikov2016hmc} to include LMC methods, giving students and newcomers to the field an intuitive view of how these algorithms work and how they differ.
We have shown that taming is a viable method of preventing divergence of Langevin-based algorithms and gives insight into a distribution when Metropolised algorithms are unable to, particularly in ill-conditioned problems. It is interesting to note the efficiency of the \texttt{LM} algorithm in particular, despite its relative simplicity. It does, however, suffer from the same divergence problems as \texttt{ULA}.
There is no simple answer as to which algorithm is `best'. Depending on the application, and the prior information given on the distribution, one could make the case for any of the algorithms presented here. If the general shape of the distribution is known and computational power is not a restriction, a simple Random Walk Metropolis algorithm may well be the best option, even in the superlinear case.
If very little is known about the maxima of the distribution, the taming method is the best choice. It is able to quickly find the modes of the distribution without divergence; however, once at the mode it often overestimates the width of the potential well. This has provoked some investigation into `switching' methods, where the chain is initially run using \texttt{tULA} for a fixed time to find the mode, before switching to the \texttt{RWM} algorithm to better explore the well (a sketch is given below). This is the rationale behind the \texttt{HPH} algorithm in the package, but it has been omitted from the report due to a lack of theoretical justification. It may well turn out to be similar to existing adaptive time-stepping methods, or tempering methods.
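As a purely illustrative sketch of this switching idea (this is not the \texttt{HPH} code from the package; the double-well potential, step sizes and switch point below are invented for the example), the scheme can be written in a few lines of \textsc{Python}:
\begin{verbatim}
import numpy as np

def U(x):
    # Illustrative double-well potential: U(x) = (x^2 - 1)^2 / 4
    return 0.25 * (x**2 - 1)**2

def grad_U(x):
    # Gradient of the double-well potential
    return x * (x**2 - 1)

def tula_step(x, gamma, rng):
    # Tamed ULA step: the drift is divided by (1 + gamma*|grad U|)
    # so that the chain cannot blow up for superlinear gradients
    g = grad_U(x)
    drift = g / (1.0 + gamma * abs(g))
    return x - gamma * drift + np.sqrt(2.0 * gamma) * rng.standard_normal()

def rwm_step(x, sigma, rng):
    # Random Walk Metropolis step: Gaussian proposal, accept/reject on U
    prop = x + sigma * rng.standard_normal()
    if np.log(rng.uniform()) < U(x) - U(prop):
        return prop
    return x

def switching_chain(x0, n_tula=500, n_rwm=5000, gamma=0.01, sigma=0.5, seed=0):
    # Run tULA for a fixed number of steps to locate a mode,
    # then switch to RWM to explore the well around it
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_tula):
        x = tula_step(x, gamma, rng)
    samples = np.empty(n_rwm)
    for i in range(n_rwm):
        x = rwm_step(x, sigma, rng)
        samples[i] = x
    return samples

samples = switching_chain(x0=10.0)
\end{verbatim}
In this sketch the switch point is fixed in advance; choosing it adaptively is precisely the part that currently lacks theoretical justification.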
Higher order methods seem to come at too great a computational cost, especially in very high dimension, although the theory supporting them suggests that more work is to be done on the numerical implementation of such methods.
If this report has highlighted anything, it is that Metropolisation is not the final word in Langevin Monte Carlo, and other methods of approximating Langevin dynamics should be exploited and explored. Non-asymptotic bounds are of great importance in this area, as it is known that \texttt{MALA} converges in the limit but this is of little practical use.
\subsection{Future Work}\label{subsec:future}
There are many possible ways in which the work presented here could be extended, both from a research and a personal perspective. First, there is great scope for improving our program. It is possible to add new analytic distributions to sample from, including those with non-smooth potentials as in \cite{durmus2018efficient}. Another obvious avenue would be to apply all the methods and analysis here to real (large) datasets, where the gradient is not known analytically. This would give the end user a much clearer impression of which algorithm to use in their given case. It would also slow down all gradient-based methods, as an unbiased estimator for the gradient would have to be calculated at every iteration. It may also be possible to speed up the higher-order methods (\texttt{HOLA,tHOLA,tHOLAc}) by implementing a parallelised version, breaking up the complex iteration into smaller, easier-to-manage sections; this is in general highly non-trivial due to the inherent dependence on the previous step in a Markov chain.

Another important consideration is that of burn-in time. For the majority of our tests we started from a minimiser of the potential well; however, our code is certainly not limited to this case. Starting there effectively removes any burn-in time, as the chains begin in approximate stationarity. When started far away from the mode of a distribution, initial tests suggest taming is highly preferable to Metropolised algorithms: the tamed chains take many more steps towards the mode, while Metropolised algorithms waste a lot of time rejecting moves. In practice, this would greatly reduce the burn-in time -- a key feature of an effective MCMC method.
It is also important that we test the accuracy of our measures. This is difficult, especially in high dimensions as no methods exist for numerically calculating the 2-Wasserstein metric that many of the theoretical bounds use. Furthermore, using kernel density estimation or histograms with few bins introduces an error that is difficult to quantify and reduce. Here we have only qualitative comparisons between metrics. This is a well known problem in numerical optimal transport and inherent in dealing with high dimensional datasets and problems.
Many other methods exist in MCMC that remain to be tested against the tamed algorithms, or incorporated into our program. These include Hamiltonian Monte Carlo (HMC), manifold MALA (mMALA), underdamped Langevin Monte Carlo and stochastic gradient Langevin dynamics (SGLD) \cite{betancourt2017conceptual, Girolami2011,cheng2018,pitfalls}. For dealing with stiff problems, specific methods also exist that could be used as benchmarks for taming algorithms \cite{abdulle2013weak}. In the next section, SGLD will be expanded on due to its popularity in machine learning and high-dimensional problems.
For the wider field, it is important either that `user-friendly' non-asymptotic error bounds are developed in terms of a numerically implementable metric, or that methods are developed to calculate the 2-Wasserstein metric accurately. Even this will not fully solve the problem of approximating a high-dimensional distribution from samples; it is simply infeasible to generate enough samples to obtain a good representation.
% !TEX root = cs1textbook.tex
\chapter{Reference Variables}
\label{chapter:reference-variables}
\minitoc
\section{Built-In Data Types}
The Java language provides a few ways to represent data. In Chapter \ref{chapter:info-numbers} we learned a couple of the ways Java allows us to store numbers. Truth values were discussed in Chapter \ref{chapter:if}. Then, in Chapter \ref{chapter:char}, we learned how to store characters and strings. But, as they said on Sesame Street, one of these things is not like the others.
\subsection{Primitive Data Types}
In Section \ref{section:string}, it was noted that the data types \texttt{int}, \texttt{double}, \texttt{boolean}, and \texttt{char} are \textbf{primitive data types}\index{Primitive Data Types}, without discussing what that means. For now, we will define a variable of a primitive data type as one that stores a value directly, but this won't mean much until we explore the other kind of data type.
\subsection{Classes Are Also Data Types}
We also learned in Section \ref{section:string} that the \texttt{String} data type is part of the Java language. \texttt{String} is also the name of a Java class. A \textbf{class} serves as a data type in Java, and in this chapter we will explore how to create our own data types.
Java programmers use the word \textit{class} to describe a group of objects with similar characteristics or uses; for instance, acoustic 12-string guitars are one class of musical instruments, and wide receivers are one class of football players. When we define a class, we specify what objects of that class will be like, and we explain what sort of tasks these objects will be able to perform.
\begin{defn}{Class}
A \textbf{class}\index{Class!Definition} is a Java data type. We programmers can create our own data types by defining our own classes. A typical Java class defines the \textit{properties} and \textit{behaviors} of the variables of this type that will be created.
\end{defn}
\section{Primitive Variables and Reference Variables}
Before starting to discuss how to create our own Java classes, we should examine what happens in memory when we create variables of those classes. In Section \ref{section:string} we learned that a variable of type \texttt{String} must be \textit{instantiated} using code like:
\begin{minted}{java}
String s1 = new String( "Hello world!" );
\end{minted}
The way this \texttt{String} is stored, however, is quite different from how an integer is stored. The value is not directly stored in the variable. Consider this code:
\begin{minted}{java}
int x1 = 15;
String s1 = new String( "Hello world!" );
\end{minted}
Figure \ref{fig:reference-variable} shows how the values are stored.
\begin{figure}[ht]
\begin{center}
\sffamily
\begin{subfigure}{0.2\textwidth}
\begin{center}
\begin{tikzpicture}[var/.style={draw=nccblue,fill=nccorange,text=white,minimum width = width("Hello world!"), minimum height=1.25pt,inner sep=2pt,outer sep=2pt}]
\node (x1) [var] {15};
\node [above = 2mm of x1] {x1};
\end{tikzpicture}
\end{center}
\caption{\mintinline{java}{int x1 = 15;}}
\end{subfigure}%
\begin{subfigure}{0.4\textwidth}
\begin{center}
\begin{tikzpicture}[var/.style={draw=nccblue,fill=nccorange,text=white,minimum height=2pt,inner sep=2pt,outer sep=2pt},data/.style={draw=nccblue,fill=white,text=nccblue,inner sep=2pt,outer sep=2pt}]
\node (s1) [var] {\textcolor{nccorange}{X}\textcolor{nccblue}{\raisebox{0.25ex}{$\bullet$}}\textcolor{nccorange}{X}};
\node (hw) [data,right = 1cm of s1] {Hello world!};
\node [above = 2mm of s1] {s1};
\path [->,draw,thick,nccblue] (s1.center) -- (hw);
\end{tikzpicture}
\end{center}
\caption{\mintinline{java}{String s1 = new String( "Hello world!" );}}
\end{subfigure}
\end{center}
\caption{How Integers and Strings Are Stored in Memory}
\label{fig:reference-variable}
\end{figure}
The actual string is stored some place in memory away from the variable \texttt{s1}, and what \texttt{s1} stores is the numeric location of this place in memory. We say that \texttt{s1} is a \textbf{reference variable} because it contains a \textit{reference} to the string ``Hello world!''.
\begin{defn}{Reference Variable}
A \textbf{reference variable}\index{Reference Variable} is a variable that stores a \textit{reference} to, or the memory address of, a piece of data stored somewhere else in memory.
\end{defn}
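To see the practical difference, consider the following short sketch (the variable names are chosen only for this illustration). Copying a primitive copies the value itself, while copying a reference variable copies only the reference, so both variables end up referring to the same object in memory:
\begin{minted}{java}
int x1 = 15;
int x2 = x1;           // x2 receives its own copy of the value 15
x2 = 99;               // changing x2 has no effect on x1

String s1 = new String( "Hello world!" );
String s2 = s1;        // s2 stores the same reference as s1
String s3 = new String( "Hello world!" );

System.out.println( x1 );        // prints 15
System.out.println( s1 == s2 );  // true:  both refer to the same object
System.out.println( s1 == s3 );  // false: equal text, different objects
\end{minted}
The \texttt{==} operator compares what the variables actually store -- values for primitive variables, references for reference variables -- which is why the last two comparisons differ.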
Throughout this Part of the book, we will be developing an application as a case study. Specifically, we will be designing a program that children can use to draw geometric shapes on the screen, and learn about some of these shapes' important characteristics like area, perimeter, volume, etc.
%mainfile: ../master.tex
\section{Criteria for Language Design}\label{analsum}
This section will, in light of the two preceding parts, \cref{part:analysis,part:analysis2}, settle on some criteria for TLDR. The criteria are inspired by those found in \cite{sebesta2015concepts}. However, only those characteristics which differentiate TLDR will be discussed, even though other characteristics will be present in the language.
\subsection{Simplicity, Orthogonality and Syntax Design}
In order to accommodate a user group with programming as a secondary skill set, TLDR should be simple. This would result in strict and straightforward rules regarding interactions. The language should try to keep simple rules regarding orthogonality as well, but need not be highly orthogonal for that reason. The syntax design should, for these reasons, cater towards a mathematical perspective rather than a computer science one.
\subsection{Data Types}
Since the users of TLDR will typically have a background in mathematics, or require the use of some mathematics in order to assess the value of any given result of a simulation, the language should strive towards having data types which allow for the representation of traditional mathematical numbers.
\subsection{Type Checking and Exception Handling}
Due to the language targeting high-performance computation, the language should try to avoid run-time errors and catch problems as early as possible. This suggests a strongly typed language, which will be elaborated further in \cref{typesys}. For the same reasons, the language will not prioritise exception handling, since edge cases should be fully encompassed in a simulation. However, it is worth noting that the actor model could require exception handling, if communication problems, caused by either race conditions or network conditions, should prove frequent.
\subsection{Readability over Writability}
TLDR values readability over writability, since the problems which the language aims to support are usually large, complex mathematical problems, which can quickly confuse programmers if they are not able to understand the code. The size of the problems will also likely popularise the use of external libraries, which will likewise benefit from readability, since an understanding of the functionality supplied by a given library is useful when exploring which library best solves the problem.
\documentclass{article}
\begin{document}
\section{This is a normal section}
\section{
This is a really long section title which is hard-wrapped
after 80 characters or so to keep the source code readable
}
\end{document}
\documentclass[final]{article}
% if you need to pass options to natbib, use, e.g.:
% \PassOptionsToPackage{numbers, compress}{natbib}
% before loading nips_2017
%
% to avoid loading the natbib package, add option nonatbib:
% \usepackage[nonatbib]{nips_2017}
\usepackage{nips_2017}
% to compile a camera-ready version, add the [final] option, e.g.:
% \usepackage[final]{nips_2017}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{booktabs}
\DeclareMathOperator*{\Motimes}{\text{\raisebox{0.25ex}{\scalebox{0.8}{$\bigotimes$}}}}
% \usepackage{lmodern}
\allowdisplaybreaks
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{tikz}
\usepackage{float}
\usetikzlibrary{shapes, arrows}
\title{Predicting Stock Trends with Fourier Analysis}
% The \author macro works with any number of authors. There are two
% commands used to separate the names and addresses of multiple
% authors: \And and \AND.
%
% Using \And between authors leaves it to LaTeX to determine where to
% break the lines. Using \AND forces a line break at that point. So,
% if LaTeX puts 3 of 4 authors names on the first line, and the last
% on the second line, try using \AND instead of \And before the third
% author name.
\author{
Forest Kobayashi \\
Department of Mathematics\\
Harvey Mudd College\\
Claremont, CA 91711 \\
\texttt{[email protected]} \\
%% examples of more authors
\And
Jacky Lee \\
Department of Mathematics\\
Department of Computer Science \\
Harvey Mudd College \\
Claremont, CA 91711 \\
\texttt{[email protected]}
}
\begin{document}
% \nipsfinalcopy is no longer used
\maketitle
\begin{abstract}
Stock data is very difficult to analyze using classical methods, in
large part due to heavy involvement of humans in stock pricing. In
this paper, we examine a new approach to analyzing time-series stock
data, particularly in the context of grouping correlated stocks.
\end{abstract}
\section{Background and Motivation}
There are many reasons why predicting stock prices is difficult, but
for our model today, we will focus on just two:
\begin{enumerate}
\item The behavior of a given stock is influenced by hundreds of
hidden variables --- e.g., the current state of geopolitics,
governmental fiscal policies (for instance, the US Treasury
interest rate), the performance of stocks in various other
markets, etc. Hence, the price of a stock at any given moment is
an emergent phenomenon, influenced by many small movements of
markets around the world. Further, these connections themselves
are in a constant state of flux --- as the market evolves,
technology improves, and policies are rewritten, the weight each
of these variables has in influencing pricing will wax and wane.
\item With the exception of some forms of algo trading, most trading
strategies are being created and implemented by humans. That is,
often humans are the ones who perform analysis of market
information, and make decisions based off the findings. But humans
are not naturally predisposed to rigorous quantitative analysis,
and hence markets do not always behave rationally. As an example,
consider \texttt{Bitcoin}.\footnote{Citation: Gary Evans} Thus,
an accurate financial model will need to have some method of
predicting human psychology.
\end{enumerate}
Using modern machine learning techniques, it is possible to control
for some effects of (2) by using previous stock data. However,
less work has been done to model the effects of (1). So, for today's
model, we will focus on methods of preprocessing time series stock
data so as to control for some of the effects of (1).
% Stock prices can be influenced by many factors such as other stock
% prices, anticipation of capital gains or losses, global politics,
% earnings reports, and etc. These factors may each have a different
% weighting on the stock depending on time and which stock is being
% analyzed in question. For example, earnings reports might have
% significantly more impact on stock prices immediately after release as
% opposed to two months after the report comes out. Another example
% would be that global politics might influence the stock price of
% \textbf{LMT} (Lockheed Martin), an aerospace and defense company, more
% than the stock price of \textbf{BKC} (Burger King), a fast food
% restaurant chain.
% Ultimately, humans are the ones who choose to buy, sell, or short
% these stocks based on some analysis, usually not completely
% comprehensive, of the factors they observe. Since humans are often
% unaware or unable to analyze all these competing factors at once,
% their decisions are often unpredictable or at least very difficult to
% predict. As such, predicting the price of stocks by modeling these
% interactions between stock price factors and humans is difficult and
% unwise. Therefore, we have decided to perform stock analysis by only
% analyzing past stock trends. This approach, although loses out on a
% lot of information, is computationally less demanding and easier to
% find data for performing analysis.
\section{The Model}
\subsection{Goals}
Our ultimate goal is to use past stock data to train our model. Then,
given trend data for some stock, the model should predict the upcoming
trends and return a vector of decisions. An
example of the output is shown below.
\[
\mathbf{y} = \Bigl<P_{sell}(\mathbf{x}), P_{nothing}(\mathbf{x}),
P_{buy}(\mathbf{x})\Bigr>
\]
Here, $P_{buy}(\mathbf{x})$ represents how confident the model is in
recommending the user to buy the stock. Similarly,
$P_{sell}(\mathbf{x})$ represents selling the stock and
$P_{nothing}(\mathbf{x})$ represents doing nothing. These three
probabilities should sum to 1.
In order to reach this sort of conclusion, we want our model to
compensate for the effects of (1) in a clever way. In traditional
stock trading practices, there are a few guidelines that are used for
similar determinations:
\begin{enumerate}
\item Sometimes industries follow trends together. For instance, if
we see a price drop in one oil stock, we might expect to see drops
in other oil stocks.
\item When interest rates go up, yield-bearing financial assets
increase in value, while the value of equity stocks might
decrease.
\item If an earnings report reveals that a stock overestimated its
expected profits, the value of the stock will decrease.
\item In times of high uncertainty, money might be pulled out of
equity assets and moved to yield-bearing financial assets. Hence,
stock prices might go down.
\item Bad PR can result in a temporary dip in stock pricing.
\end{enumerate}
These are external factors that cannot be determined purely from the
stock data itself. Often, most of this information comes from news
outlets, press releases, and other media-sharing forums. So, we need
our model to perform the following:
\begin{enumerate}
\item Scrape real-time text data from news websites, financial
websites, and (possibly) academic journals in the case that we're
looking at something like a technology stock.
\item Perform some sort of Natural Language Processing and Sentiment
Analysis to determine how this information might affect the market
if / when it becomes widely publicized.
\item Somehow combine this information with time-series stock data
to make a prediction of how the market might behave in the short
and long term.
\item Make a trading decision based on this information.
\end{enumerate}
We propose the following prototype model.
\subsection{Prototype Architecture}
Before we get into it, we want to stress that this is a heavily
compartmentalized model we built to try and capture how we want
information to flow and be processed. In reality, we'd want to
collapse some of the nodes into a single process, and remove some of
the overall complexity to make training easier. We are not saying this
would be easy to implement and / or train without significant
computing power, just that this model could accomplish what we desire.
\begin{figure}[H]
\centering
\begin{tikzpicture}[node distance = 2cm, auto]
\tikzstyle{data} = [
diamond,
draw,
fill=orange!40,
text width = 5em,
text badly centered,
node distance = 3cm,
inner sep = 0pt
]
\tikzstyle{code} = [
rectangle,
draw,
fill=blue!20,
text width=7em,
text centered,
minimum height=4em
]
\tikzstyle{network} = [
rectangle,
draw,
dashed,
fill=green!20,
text width = 6em,
text centered,
rounded corners,
minimum height = 4em
]
\tikzstyle{feedforward} = [
rectangle,
draw,
dashed,
fill=red!20,
text width = 6em,
text centered,
rounded corners,
minimum height = 4em
]
\tikzstyle{filter} = [
draw,
rectangle,
fill=red!20,
text width = 6em,
text centered,
minimum height=3em,
rounded corners
]
\tikzstyle{decision} = [
draw,
rectangle,
fill=yellow!20,
text width = 6em,
text centered,
minimum height=3em,
rounded corners
]
\tikzstyle{otimes} = [
draw,
circle,
fill=gray!20,
text centered,
inner sep = -2pt
]
\node[code] (webscraper) at (0,6) {Webscraper};
\node[data] (time series) at (-4,3) {Stock data};
\node[code] (parser) at (4,6) {Separate metadata from content};
\node[data] (text) at (4,3) {Content};
\node[code] (processer) at (4,0) {Sentiment Analysis + NLP};
\node[filter] (filter) at (-2,0) {Filter};
\node[network] (lstm1) at (-2,-2) {LSTM (1)};
\node[network] (lstm2) at (-6,-2) {LSTM (2)};
\node[feedforward] (feedforward) at (-4,-4) {FFN (2)};
\node[decision] (decision) at (-4,-6) {Decision};
\node[data] (metadata) at (1,3) {Metadata};
\node[draw, rectangle, fill=red!20, text width = 3em, text
centered, minimum height = 2em, rounded corners, dashed]
(ffilter) at (1,1) {FFN (1)};
\node[otimes] (otimes) at (1,0) {$\bigotimes$};
\draw[-latex] (webscraper) -| (time series) node[midway, above]
{Light processing};
\draw[-latex] (webscraper) -- (parser);
\draw[-latex] (parser) -- (text);
\draw[-latex] (text) -- (processer) node[midway, right] {};
\draw (3.8,5.27) -- (3.8,4.5) -- (1,4.5);
\draw[-latex] (1, 4.5) -- (metadata);
\draw[-latex] (time series) -| (filter);
\draw[-latex] (filter) -- (lstm1);
\draw[-latex] (processer) -- (otimes);
\draw[-latex] (otimes) -- (filter);
\draw[-latex] (ffilter) -- (otimes);
\draw[-latex] (metadata) -- (ffilter);
\draw[-latex] (time series) -| (lstm2);
\draw[-latex] (lstm2) |- (feedforward);
\draw[-latex] (lstm1) |- (feedforward);
\draw[-latex] (feedforward) -- (decision);
\draw (decision) -- (0,-6) -- (0,-.25) node[midway, right] {feed
current trading position into filtering module};
\draw[-latex] (0,-.25) -- (-.82,-.25) node[midway, right] {};
\end{tikzpicture}
\caption{Model overview}
\label{fig:goal}
\end{figure}
Of course, as we just said, this is not the optimal architecture. But
let's walk through the rationale before discounting it entirely:
\begin{enumerate}
\item We scrape two types of data: time series stock data, as well
as news / financial / academic articles.
\item To preprocess the stock data, we extract the time series and
convert it into an array, abstracting the data in some manner so
that we can make all inputs the same size. We embed the stock
symbol itself with a word vector, so that the network can learn to
differentiate behaviors of different stocks. This will also be
employed in the subsequent filtering stage.
\item To preprocess the text data, we first separate out the
metadata from the content, and clean things like html tags out of
the body (depending on how we're doing the scraping). We perform
Natural Language Processing methods on the input data (e.g.,
sentiment analysis), to try and quantify whether the article
represents good news or bad news for some stock.
\item We feed the metadata into a feedforward network that is
trained to determine the bias of the source from which each datum
was obtained. The $\tanh()$ function will be used, because we
want to be able to ``flip'' the input from some organization if
it consistently predicts the opposite of what actually happens.
\item At the $\otimes$ junction, we do elementwise multiplication of
the output from FFN (1) with the output from our sentiment
analysis of our articles. These will then be sorted by the stocks
they apply to, and we will take the harmonic mean of these values
to obtain a single predictive value for the behavior of the stock
in the near future.
\item Next, we feed this into a filter, which will choose a few
families of stocks for LSTM (1) to focus on. The filter might
include some neural architecture itself, and will incorporate
    information about the network's current assets.
\item We take the most significant stocks output by the filter,
along with some sentiment data about it, and feed this heavily
into LSTM (1), which will be a relatively deep network.
\item Simultaneously, we feed the unprocessed stock data (without
selecting a noteworthy subset) into LSTM (2), which will be a much
shallower network, the idea being that LSTM (2) might pick up on
noteworthy trends that the news articles missed.
\item Finally, this is input into a small feedforward network that
aggregates the processed and unprocessed inputs, and returns
decisions to buy, sell, short, or do nothing with respect to some
    particular stock. The process is then repeated, and ideally the
network would turn a profit.
\end{enumerate}
For today's paper, we focused on the filtering stage. In particular,
we wanted to identify ways of classifying families of noteworthy
stocks that LSTM (1) could make predictions on, and identify abstract
relationships between pricing of various stocks. Our rationale was
this: most of the other components of the diagram are famous problems
that others have already performed lots of work on. That is, using
LSTMs to predict stock pricing is, for the most part, a solved
problem. Implementations and hyperparameters may differ, of course,
but underneath, the structure of the model is largely the same. We saw
the filtering process as the component we could perhaps make
advancements on.
\section{Methods}
\subsection{Data Scraping}
We gathered intraday data for three randomly-selected S\&P 500 stocks
using the Alpha Vantage API
(\url{https://github.com/RomelTorres/alpha_vantage}). A portion of the
data for one stock is shown below.
\begin{verbatim}
1. open 2. high 3. low 4. close 5. volume
date
2018-05-08 09:30:00 1058.54 1058.54 1055.00 1055.1600 26407.0
2018-05-08 09:31:00 1054.99 1058.16 1054.99 1056.9248 6384.0
2018-05-08 09:32:00 1057.41 1058.95 1057.41 1058.5700 7160.0
2018-05-08 09:33:00 1058.30 1058.64 1056.93 1058.6400 6448.0
2018-05-08 09:34:00 1058.80 1060.00 1058.50 1059.9600 5548.0
\end{verbatim}
For each stock, we computed the arithmetic mean of the high and low as
a heuristic for summarizing the price of the stock during that minute.
For our minimal working prototype, we decided to ignore the data for
the volume of trading, the open price, and the close price, since
stock prices vary quite a bit and we felt that the high and low prices
would be sufficient in giving us an estimate of how the stock was
performing at that moment in time. However, when composing a more
polished model in the future, we will be sure to include this
information.
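A minimal sketch of this preprocessing step is shown below; it assumes the intraday data has been loaded into a pandas DataFrame \texttt{df} with the column names from the listing above (the function name is ours, not from the original code).
\begin{verbatim}
import pandas as pd

def minute_prices(df: pd.DataFrame) -> pd.Series:
    """Summarise each minute by the arithmetic mean of its high and low."""
    return (df["2. high"] + df["3. low"]) / 2.0
\end{verbatim}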
\subsection{Data Windowing}
We then analyzed the data by running a sliding window of 20 data points (20
minutes) across the prices, with each window having a large overlap with the
previous (to make the data smoother). Specifically, each consecutive window was
only shifted by 2 minutes. This windowed data was analyzed in two different ways.
The windows were fitted with polynomials and the coefficients were then
extracted for Fourier analysis. The frequency and amplitude information gained
from the Fourier analysis was then used for classification via a k-means
clustering algorithm.
Linear regression was also performed on the windows and the slopes were
extracted for analysis using a Markov model. Recent slope data was used to
predict upcoming slopes in the hopes of predicting upcoming stock trends.
\subsection{Clustering with Fourier Coefficients}
\subsubsection{Polynomial Fitting on Windowed Data}
With the windowed data, we fit a quintic polynomial to each window. Three such
polynomials fitted to three consecutive windows are shown below.
\begin{figure}[H]
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{img/sliding1}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{img/sliding2}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{img/sliding3}
\end{subfigure}
\caption{Sliding windows of data fitted with polynomials}
\label{fig:sliding}
\end{figure}
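The windowing and fitting steps can be sketched as follows; \texttt{prices} is assumed to be the per-minute price series described above, and the window length, shift, and polynomial degree match the values stated in the text (this helper is illustrative, not the original implementation).
\begin{verbatim}
import numpy as np

def window_coefficients(prices, width=20, shift=2, degree=5):
    """Fit a quintic to each overlapping window of the price series and
    return one coefficient vector (degree + 1 entries) per window."""
    prices = np.asarray(prices, dtype=float)
    t = np.arange(width)
    coeffs = []
    for start in range(0, len(prices) - width + 1, shift):
        coeffs.append(np.polyfit(t, prices[start:start + width], degree))
    return np.array(coeffs)  # shape: (num_windows, 6)
\end{verbatim}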
% TODO: justify a quintic
\subsubsection{Visualizing the Coefficients}
After performing this sliding window analysis on each of the stocks,
we then plotted the coefficients of our polynomial fits in
$\mathbb{R}^3$ to visualize our data. We were hoping that we'd gain
insight by examining the structure of these trajectories through time.
Some examples are shown below.
\begin{figure}[H]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/coeff1}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/coeff2}
\end{subfigure}
\caption{Visualization of the lower order coefficients in $\mathbb{R}^3$}
\label{fig:coeff12}
\end{figure}
The distribution of the coefficients in three dimensional space seems
to be a multivariate Gaussian distribution. Furthermore, it seems to
be very flat, appearing to be a line if viewed from the correct angle.
We experimented with different window sizes and different amounts of
shifting between consecutive windows. We found that in general,
varying the window sizes did not affect the shape of our graphs much
but the amounts of shifting affected the smoothness and trajectories
of our graphs. If the amount of shifting was large, such as if it were
half the size of a window, then the trajectory would be very rough and
edgy. If this value were to increase even more such that there was no
overlap between windows, the trajectory would be random, as shown
below.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{img/coeff3}
\caption{Visualization of the lower order coefficients in $\mathbb{R}^3$
without overlapping windows}
\label{fig:coeff3}
\end{figure}
Although it is hard to see on the diagrams above, we noticed that the
trajectory of coefficients seemed to be rotating about the center of
the distribution. Thus, we wanted to examine the following questions
about this representation of the data:
\begin{enumerate}
\item Why is the distribution so flat? Are the coefficients maybe
not as independent from one another as we believed?
\item Why might we be seeing this periodic behavior? And can we use
Fourier analysis to extract meaningful information about it?
\end{enumerate}
For the first, we have a few theories: first, for a degree $N$
polynomial, the coefficients are really only controlled by $N-1$
variables: the roots. Hence, our solutions might be constrained to
some hyperplane. Second, it could be that regularization is
constraining the coefficients to the plane, preventing any one term
from getting too large. So, satisfied with this answer for the time
being, we moved on to Fourier analysis.
\subsubsection{Discrete Fourier Transform}
We would like to be able to compare stocks and be able to say how
similar two stocks are based on trends. To do this, we need some
quantitative measure of the stock trends. The vectors of coefficients
that we have are insufficient for several reasons. For one, we could
have two stocks whose behavior is almost exactly the same except one
is a scaled and shifted version of the other. To remedy this
situation, we could do some sort of scaling or shifting of our
coefficients but that may be difficult and impractical since the two
stocks may in fact be different and this sort of scaling and shifting
would be computationally intensive and pointless. Another issue that
we may run across is that two stocks may resemble each other in two or
more different disjoint periods of time but differ elsewhere. This
resemblance is something we want to consider since this can help us in
predicting the future behavior of our stocks. If we attempt to perform
this scaling and shifting operation on the coefficients, we may at
best have one of these similar regions overlap. We could try scaling
and shifting subsets of our coefficients but that would be even more
computationally expensive. To remedy the situation, we have to find a
new metric by which we compare two different stocks and their trends.
We decided that analyzing the periodic behavior of our stocks would
capture this information better. We hypothesize that two similar
stocks will have similar periodic behavior. To extract information
about the periodicity of our data, we performed a discrete fast
Fourier transform on each of our coefficient vectors, i.e. we
performed the Fourier transform on all the constant terms in each
window to extract information about the periodicity of the constant
terms, then another transform on all the linear terms, etc. We decided
to perform the Fourier transform on the coefficients instead of the
actual stock prices because we wanted to see if this abstraction would
yield any improvements in classification.
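The transform step can be sketched as follows, with \texttt{coeffs} standing for the (num\_windows $\times$ 6) coefficient array produced by the windowed fitting; the DFT is taken along the window axis, separately for each coefficient degree (names are illustrative).
\begin{verbatim}
import numpy as np

def coefficient_spectra(coeffs):
    """coeffs: (num_windows, 6) array of fitted polynomial coefficients.
    Returns (spectra, freqs), where spectra[i] is the DFT of the i-th
    coefficient's trajectory across windows."""
    spectra = np.fft.fft(coeffs, axis=0).T    # shape: (6, num_windows)
    freqs = np.fft.fftfreq(coeffs.shape[0])   # matching frequency bins
    return spectra, freqs
\end{verbatim}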
% TODO: fix the captions
\begin{figure}[H]
\centering
\includegraphics[width=.55\linewidth]{img/fourier1}
  \caption{Magnitudes of the discrete Fourier transform of the windowed
    polynomial coefficients.}
\label{fig:fourier1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.55\linewidth]{img/fourier2}
  \caption{Magnitudes of the discrete Fourier transform of the windowed
    polynomial coefficients.}
\label{fig:fourier2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.55\linewidth]{img/fourier3}
  \caption{Magnitudes of the discrete Fourier transform of the windowed
    polynomial coefficients.}
\label{fig:fourier3}
\end{figure}
\subsubsection{K-means Clustering}
In this section, we'll give an overview of the techniques that we
applied to get results for the final presentation, then move into the
optimizations and corrections that we applied in the following days.
We decided to apply a K-means algorithm to our Fourier coefficients,
to try and get some clustering behavior. Because each stock was fitted
with a quintic polynomial, our Fourier data for each stock looked
something like
\begin{align*}
\mathcal{F}(\texttt{GOOGL})
&=
\begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,n} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,n} \\
\vdots & \vdots & \ddots & \vdots \\
    a_{5,0} & a_{5,1} & \cdots & a_{5,n}
\end{bmatrix}
\end{align*}
where $a_{i,j}$ is the complex amplitude for the $j^{\text{th}}$
frequency in the discrete Fourier transform of the coefficient for the
term of degree $i$. Here, $n$ represents the number of total
frequencies in our DFT, as documented in \texttt{np.fft.fftfreq}.
Since we were analyzing a total of 299 stocks, we really have a large,
$299 \times 6 \times n$ array of transformed data. Since our
\texttt{k\_means} function we wrote for homework takes data matrices
as input, we initially thought that we could just flatten each of the
$6 \times n$ Fourier sub-arrays, and feed them into the
\texttt{k\_means} classifier. However, when we attempted to do so, we
saw significant performance dropoffs in the initialization of the
clusters. This was confusing, since we were able to initialize
clusters for individual coefficients' Fourier vectors quite rapidly,
and the flattened versions were of the same order of magnitude (only 6
times larger). We suspect that either \texttt{np.ndarray.std()} or
\texttt{np.random.multivariate\_normal()} simply doesn't scale well if
one of the axes gets particularly elongated, but we'll look into it
further.
Anyway, for each stock, we applied \texttt{np.abs()} to the Fourier
matrix, so that we wouldn't have to rewrite \texttt{k\_means} to deal
with complex vectors. This is something we'd like to improve in the
future. Then, we applied \texttt{k\_means} clustering along each of
the rows. That is, for each stock, we did \texttt{k\_means} clustering
for the frequency vectors for each of the coefficients, iterating over
40 different values of $k$ to find an optimal number of clusters.
Then, for each stock, we applied a one-hot encoding to store
information about which cluster each of its coefficients was from.
After flattening this collection of one-hot vectors into a single
vector, we once again applied \texttt{k\_means} (iterating through 500
different values of $k$), and output the results.
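A rough sketch of the two-stage clustering is given below. It uses scikit-learn's \texttt{KMeans} as a stand-in for our homework \texttt{k\_means} routine, fixes the number of clusters at each stage, and omits the search over $k$; \texttt{spectra} is assumed to be the $299 \times 6 \times n$ array of Fourier magnitudes.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def two_stage_clusters(spectra, k_coeff=8, k_stock=20):
    """spectra: (num_stocks, 6, n) array of |DFT| values.
    Stage 1 clusters each coefficient's frequency vectors across stocks;
    stage 2 clusters the flattened one-hot membership vectors."""
    num_stocks, num_coeffs, _ = spectra.shape
    onehots = []
    for i in range(num_coeffs):
        labels = KMeans(n_clusters=k_coeff).fit_predict(spectra[:, i, :])
        onehots.append(np.eye(k_coeff)[labels])   # (num_stocks, k_coeff)
    membership = np.hstack(onehots)               # (num_stocks, 6 * k_coeff)
    return KMeans(n_clusters=k_stock).fit_predict(membership)
\end{verbatim}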
One might wonder why we chose to use $500$ different $k$-values, as
we had only $299$ stocks. The reasoning was this: we knew our data lay
in a low-dimensional hyperplane of the original space, but the starter
code for \texttt{k\_means} initialized the clusters following a simple
multivariate Gaussian distribution. Hence, many of the centroids
would be far away from the data's hyperplane, and so for small $k$, we
were seeing stocks classified in just one or two clusters. We figured
that if we upped the number of clusters, we'd have a higher chance of
some of the randomly-initialized centroids being closer to different
regions of the data.
Admittedly, this algorithm is quite crude, but as a minimum viable
product, we weren't too concerned with performance issues, and are
willing to revisit accuracy concerns in the future.
\subsection{Markov Chain Prediction}
\subsubsection{Slope Estimation}
For each sliding window, the difference in height between the last point and
the first point was taken as an approximation for the slope of the trend of
that window. We were interested in the slope because it is an indicator for
whether the stock price is falling, stable, or rising.
\subsubsection{Data Binning}
We then decided to bin our data since the slopes were real numbers and binning
the data was necessary to be able to use a Markov model. Otherwise, it would be
extremely likely that our model would not have seen the slope data of the stock
that we were predicting.
We binned our slopes according to the following scheme.
% TODO: fix spacing
\begin{verbatim}
        7.5 < x          ->  3
        4.5 < x <  7.5   ->  2
        1.5 < x <  4.5   ->  1
       -1.5 < x <  1.5   ->  0
       -4.5 < x < -1.5   -> -1
       -7.5 < x < -4.5   -> -2
              x < -7.5   -> -3
\end{verbatim}
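A small sketch of the slope heuristic and the binning above (\texttt{windows} denotes the same overlapping windows used earlier; the helper is illustrative):
\begin{verbatim}
import numpy as np

def binned_slopes(windows):
    """Approximate each window's slope by its last value minus its first
    value, then bin it according to the scheme above (bins -3 ... 3)."""
    slopes = np.array([w[-1] - w[0] for w in windows])
    edges = [-7.5, -4.5, -1.5, 1.5, 4.5, 7.5]
    return np.digitize(slopes, edges) - 3  # indices 0..6 shifted to -3..3
\end{verbatim}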
The following graphs show the results of the slope extraction and data binning
for four of our stocks.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{img/slopes}
\caption{Graphs of slopes and binned slopes for four different stocks}
\label{fig:slopes}
\end{figure}
\subsubsection{Markov Model}
We then fed all the slope data into a Markov model. Given any two consecutive
binned slopes, our model would provide a list of all the succeeding binned
slopes it had encountered in the training data immediately after seeing the two
provided consecutive binned slopes.
\subsubsection{Prediction}
Given the binned slopes of the two most recent sliding windows for a stock, our
Markov model would be able to provide a vector of suggested choices in the form
\[
\mathbf{y} = \Bigl<P_{sell}(\mathbf{x}), P_{nothing}(\mathbf{x}),
P_{buy}(\mathbf{x})\Bigr>
\]
It does this by taking the list of upcoming slopes from the Markov model and
collapsing them into counts by putting the binned slopes into three different
bins according to the following scheme.
\begin{verbatim}
-3 -> sell
-2 -> sell
-1 -> do nothing
0 -> do nothing
1 -> do nothing
2 -> buy
3 -> buy
\end{verbatim}
The counts are then normalized to make it a probability distribution.
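The Markov step can be sketched as follows; the transition table is keyed by the two most recent binned slopes, and the stored successors are collapsed into (sell, do nothing, buy) counts and normalised. The names below are illustrative and not taken from the original code.
\begin{verbatim}
from collections import defaultdict
import numpy as np

ACTION = {-3: 0, -2: 0, -1: 1, 0: 1, 1: 1, 2: 2, 3: 2}  # sell / nothing / buy

def train_markov(binned_sequences):
    """For each pair of consecutive binned slopes, record the bins that
    were observed immediately afterwards in the training data."""
    model = defaultdict(list)
    for seq in binned_sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            model[(a, b)].append(c)
    return model

def predict(model, last_two):
    """Return <P_sell, P_nothing, P_buy> given the two most recent bins."""
    counts = np.zeros(3)
    for nxt in model.get(tuple(last_two), []):
        counts[ACTION[nxt]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts
\end{verbatim}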
\section{Results}
\subsection{K-means}
\subsubsection{Performance}
Although quite unoptimized, our algorithm completed analysis of $\sim
2,100,000$ data points in just $\sim 4$ minutes. The process of
windowed fitting for all 299 stocks took roughly $40.6$ seconds, while
the two iterations of \texttt{k\_means} took about $223.1$ seconds on
my macbook (including the hyperparameter search for optimal $k$, which
comprised the bulk of the computation time).
This is very promising, as it indicates that our algorithm would
perform fast enough to be used in real-time, given a decently-powerful
server. First, note that we'd only need to apply fitting to one window
at a time as new data is imported, which would take fractions of a
second. Second, note that the DFT has excellent performance for the
typical array lengths we're feeding it. Third, note that if we were
applying this sort of algorithm in real-time, we wouldn't expect the
optimal values of $k$ to change much between runs (especially if we
find a cleverer way to initialize centroids). Hence, we'd only have to
scan over a few values of $k$, a process that could be parallelized
extremely easily. Thus, we'd be able to perform a full run of our
algorithm in just a few seconds. Since the alphavantage API updates
every minute, this is more than fast enough to process the incoming
data.
\subsubsection{Output clusters}
In this section, we analyze a partial list of some of the output
clusters for one run of the data. First, we look at the only
singleton:
\begin{table}[H]
\centering
\caption{Cluster 1}
\label{c1}
\begin{tabular}{@{}cl@{}}
\toprule
Count & Industry \\ \midrule
1 & Cybersecurity \\
\bottomrule
\end{tabular}
\end{table}
This cluster corresponded to Symantec. In almost all of our runs, it
remained in a cluster by itself. We were curious whether there was
anything immediately obvious that could explain this, so we googled
the company, and found that Symantec is in fact currently experiencing
an internal audit, which caused stock prices to take a massive drop
last week.
\begin{table}[H]
\centering
\caption{Cluster 2}
\label{c2}
\begin{tabular}{@{}cl@{}}
\toprule
Count & Industry \\ \midrule
7 & pharmaceutical + biotech research \\
3 & healthcare providers \\
1 & health insurance \\
1 & chemical supplier \\
1 & scientific research \\
& \\
2 & construction / motor supplies \\
1 & automobile company \\
& \\
1 & natural gas / petroleum \\
1 & energy holdings \\
1 & energy engineering / analysis firm \\
& \\
3 & airline \\
1 & travel booking company \\
1 & travel software company \\
& \\
1 & shipping company \\
& \\
3 & public utility providers \\
& \\
4 & financial firms \\
2 & real estate investment firms \\
& \\
2 & food companies \\
1 & grocery store real estate firm \\
1 & mall real estate firm \\
1 & office building / street real estate firm \\
1 & casino \\
& \\
1 & media / news company \\
& \\
1 & telecom company \\
& \\
1 & storage company \\
1 & xerox \\
& \\
1 & semiconductor manufacturer\\
\bottomrule
\end{tabular}
\end{table}
In this cluster, we saw a lot of exciting groupings. First, note the
concentration of companies tied to the healthcare industry --- we have
7 pharmaceutical / biotech companies, and 3 healthcare provider
stocks. Next, note that we also see a group of transportation /
shipping and energy-related companies, as well as a grouping of
grocery / retail related real estate firms, which was interesting.
In the cluster on the next page, we see a lot of clustering of more
industrial and manufacturing firms.
\begin{table}[H]
\centering
\caption{Cluster 3}
\label{c3}
\begin{tabular}{@{}cl@{}}
\toprule
Count & Industry \\ \midrule
5 & oil / petroleum companies \\
1 & energy engineering firm \\
& \\
2 & automotive parts manufacturers \\
1 & tire / rubber manufacturer \\
1 & motorcycle company \\
& \\
1 & heavy machinery rental business \\
1 & freight transportation company \\
1 & cruise company \\
1 & airline \\
1 & travel booking company \\
& \\
2 & semiconductor manufacturers \\
1 & electronics manufacturer \\
1 & applied materials science firm \\
1 & low-level computer systems firm \\
1 & network infrastructure firm \\
1 & data storage company \\
1 & data analytics firm \\
& \\
1 & home appliance manufacturer \\
& \\
1 & office / business supplier \\
1 & clothes supplier \\
1 & cosmetics company \\
& \\
1 & life insurance \\
& \\
1 & biotech firm \\
1 & pharmaceutical firm \\
3 & health care firms \\
& \\
1 & computer systems / hardware supplier for hospitals \\
1 & computer systems for biotech analysis \\
2 & medical device manufacturers \\
1 & implant / joint replacement manufacturers \\
& \\
1 & medical cannabis company \\
& \\
2 & paper companies \\
& \\
1 & telecom company \\
& \\
2 & financial firms \\
1 & mutual fund \\
& \\
1 & entertainment / media company \\
\bottomrule
\end{tabular}
\end{table}
% 5:
% 2 household product manufacturer
% 1 glass / can manufacturers
% 1 bubble wrap / vacuum packing
% 1 adhesive / etc. manufacturer
% 1 home improvement retailer
% 1 department store
% 1 walmart
% 1 fashion
% 1 management consulting service
% 1 investment management company
% 1 investment consulting + risk assessment
% 1 risk management consutling
% 2 video game company
% 1 productivity software (adobe)
% 1 google
% 1 amazon
% 1 HP (hewlett-packard)
% 6 broadcast / network infrastructure
% 2 content delivery / cloud computing
% 2 integrated circuits / pcb manufacturer
% 1 credit card processer
% 1 radio / transmitter manufacturers
% 1 defense contractor (specializes in radio things)
% 1 military aerospace provider
% 1 auto parts manufacturer
% 1 heavy duty marine, aviation, automotive part manufacturer
% 1 shipbuilding
% 1 cruise line stock
% 1 airline
% 3 natural gas / oil
% 8 energy companies (power utility providers)
% 1 water utility
% 2 biopharmaceutical
% 1 lab instrument manufacturer
% 1 medical device manufacturer
% 1 vaccination provider (for pets + livestock)
% 1 tobacco
% 1 timberlands real estate
% 1 senior care real estate investor
% 2 healthcare real estate
% 1 hotel real estate
% 1 real estate \& supply chain
% 1 shopping center real estate
% 6:
% 1 consumer electronics manufacturer (gpus)
% 2 telecom
% 1 software company
% 1 hard drive company
% 1 health insurance
% 1 biotech
% 1 petroleum / natural gas
% 1 lab instrument manufacturer
% 1 dentistry machinery manufacturer
% 3 east-coast real estate trust companies
% 1 risk management company
% 2 fertilizer stocks
% 1 animal / livestock product / service company
% 1 tractor supply (+ agricultural / livestock services / home improvement)
% 1 home construction comapny
% 2 clothes company
% 1 consumer retailer
% 1 media company
\subsection{Markov Analysis}
Testing was performed on 10 stocks from the S\&P 500 (runtime grew too
long when using the whole dataset). We used
80\% of the stocks to train our Markov model and the remaining 20\%
was used to test our model. We found a success rate of 84.29\%.
% Unfortunately, we were not able to perform extensive analysis on this
% data before the deadline. However, the current results look promising.
% In the future, we want to try and apply clustering algorithms to the
% frequency data, apply a windowing function to compensate for leakage
% (notice the broad peaks), and maybe apply a low-pass filter to try and
% extract low-frequency information, which should correspond to larger
% trends. And if we really manage to get results efficiently, we might
% try feeding them into an LSTM. Stay tuned for the final project!
\section{Future Directions}
\subsection{Fourier + K-means}
In the future, we'd like to implement a version of the
\texttt{k\_means} algorithm that does not require taking the magnitude
of the Fourier vectors. We would also like to improve the manner in
which cluster centroids are initialized. We know that our data
occupies some low-dimensional hyperplane of the total space, and we'd
like to be able to make sure that we're only initializing cluster
centroids within that hyperplane.
Additionally, we would like to have a method for interpreting the
Fourier data that does not require clustering. Clustering is a good
first method for analyzing the data, but really, we aren't as
interested in classifying industries as we are in identifying exactly
how certain stocks are related to other ones. Thus we would like to
have some sort of metric that allows us to speak directly about the
``connectedness'' of two distinct stocks. An obvious first choice
would be the standard metric on $\mathbb{C}^n$, but we'd be excited to
examine ones that consider stocks that are ``overtones'' of one
another to be close.
\subsection{Markov Model}
For the slope estimation, the current method is a crude estimate that
was chosen for speed and simplicity. A more accurate measurement of
the slope of the window is to perform a linear regression and extract
the slope. However, this may make the program run slightly slower.
In terms of the Markov model, many things could be done to improve it.
Currently, we are only looking at two windows and predicting the next
window. This means we can only predict the trend for the next minute
given the previous two minutes. This is not very useful. We can make
this better by simply having the Markov model remember more past
information and then predict more future information.
Another thing that can be done is to use the actual slopes instead of
completely binning all the slopes. For example, the keys in the Markov
dictionary can remain binned slopes, while the value lists store the
actual slope values.
Furthermore, windows can be labelled with stocks and be treated
differently depending on which stock the window belongs to. We
currently have clustering data, which could be incorporated into the
Markov model.
Also, in our calculation for the success rate of our Markov model, we
defined a success to be a prediction of the correct bin for the next
slope. This means that if a stock's slope actually belonged in the $-2$
bin, we would treat predictions of $-1$ and $3$ the same.
However, this seems intuitively wrong since -1 is a much better
prediction than 3. Thus, in the future we could have a better metric
for deciding the performance of our model.
\end{document}
%! Author = sbbfti
%! Date = 10/06/2020
% Document
\section{Introduction}
Example using glossary entries \gls{pmv} \gls{ti}
Years \gls{pmv} have \cite{Choi2017} winged moveth. Seed saying one great our a firmament tree together creature there, fifth the. Whose. Their. Midst all seasons place may shall blessed void image replenish so doesn't. Cattle it creeping land. To. Years wherein.
They're can't. Light male wherein great our. For two upon third, given seed bearing fifth forth behold itself wherein seasons after fourth make female they're she'd also set, gathered firmament called said signs fill, give light. Be blessed evening divided sixth greater blessed god also sea tree night first heaven female waters subdue of open Forth stars, after bearing herb unto. Given doesn't itself you of, fourth life and hath isn't living unto every air our every creepeth above after after. Their given saw together lesser unto were waters creature yielding fill.
Heaven night good tree our gathering waters male female, won't form. Dry tree Fowl gathered two, beast of. One blessed female is life third over all brought them shall you hath fowl made. After. Upon second creeping greater, life sixth. Day their fruit them. Gathering spirit multiply. Gathered and itself second fly stars divide let seasons You saw kind Evening them moving, subdue have. Seed fruit unto don't darkness us.
% \documentclass[draft,11pt]{article}
\chapter{Some Basic Optimization, Convex Geometry, and
Linear Algebra}
\label{sec:cvopt}
%%% for this lecture
%\begin{document}
\sloppy
%\lecture{2 --- Wednesday, February 26}
%{Spring 2020}{Rasmus Kyng}{Some Basic Optimization, Convex Geometry, and
%Linear Algebra}
\section{Overview}
In this chapter, we will
\begin{enumerate}
\item Start with an overview (i.e. this list).
\item Learn some basic terminology and facts about optimization.
\item Recall our definition of convex functions and see how
convex functions can also be understood in terms of a
characterization based on first derivatives.
\item See how the first derivatives of a convex function can certify
that we are at a global minimum.
\end{enumerate}
\section{Optimization Problems}
Focusing for now on optimization over $\xx \in \R^n$, we usually write optimization problems as:
\begin{align*}
\min_{\xx \in \R^n} \ ( \textrm{or} \ \max) & \ f(\xx)\\
  \textrm{s.t. } & \ g_1(\xx) \leq b_1 \\
   & \ \ \vdots \\
   & \ g_m(\xx) \leq b_m \label{cond1}
%
\end{align*}
%
where $\{g_{i}(\xx)\}_{i=1}^{m}$ encode the constraints. For example, in the following optimization problem from the previous chapter
\begin{align*}
\min_{\ff \in \R^E} & \sum_e \rr(e) \ff(e)^2 \\
\textrm{s.t. } & \BB \ff = \dd
\end{align*}
we have the constraint $\BB \ff = \dd$. Notice that we can rewrite this constraint as $\BB \ff \le \dd$ and $-\BB \ff \le -\dd$ to match the above setting.\ The set of points which respect the constraints is called the \emph{feasible set}.
\boxdef{For a given optimization problem the set $\mathcal{F}=\{\xx
\in \R^n \ : \ g_{i}(\xx) \leq b_i, \forall i \in [m]\}$ is called
the \textbf{feasible set}. A point $\xx \in \mathcal{F}$ is called
a \textbf{feasible point}, and a point $\xx' \notin \mathcal{F}$ is called an \textbf{infeasible point}.}
Ideally, we would like to find optimal solutions for the optimization problems we consider. Let's define what we mean exactly.
\boxdef{
For a \emph{maximization} problem $\xx^\star$ is called an \textbf{optimal solution } if $f(\xx^\star) \geq f(\xx)$, $\forall \xx \in \mathcal{F}$. Similarly, for a \emph{minimization} problem $\xx^\star$ is an optimal solution if $f(\xx^\star) \leq f(\xx)$, $\forall \xx \in \mathcal{F}$.
}
What happens if there are \emph{no feasible points}?
In this case, an optimal solution cannot exist, and we say the problem
is infeasible.
\boxdef{
If $\mathcal{F} = \emptyset$ we say that the optimization problem is \textbf{infeasible}. If $\mathcal{F} \neq \emptyset$ we say the optimization problem is \textbf{feasible}.
}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig/lect1_attaining-min.png}
% \caption{
% \todo{!}
% }
  \caption{Three examples of feasible sets: $[a,b]$, $[a,b)$, and $[a,\infty)$.}
\label{fig:regions}
\end{figure}
Consider the three examples depicted in Figure~\ref{fig:regions}:
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{F} = [a,b]$
\item $\mathcal{F} = [a,b)$
\item $\mathcal{F} = [a,\infty)$
\end{enumerate}
In the first example, the minimum of the function is attained at $b$.
In the second case the region is open, and therefore there is
no minimum function value: for every point we choose,
there is always another point with a smaller function value.
Lastly, in the third example, the region is unbounded and the function
is decreasing, so again
there is always another point with a smaller function value.
\paragraph{Sufficient Condition for Optimality.}
The following theorem, which is a fundamental theorem in real analysis, gives us a sufficient (though not necessary) condition for optimality.
\begin{theorem*}[Extreme Value Theorem]
Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuous function and $\mathcal{F} \subseteq \mathbb{R}^n$ be nonempty, bounded, and closed. Then, the optimization problem $\min f(\xx) : \xx \in \mathcal{F}$ has an optimal solution.
\end{theorem*}
\section{A Characterization of Convex Functions}
Recall the definitions of convex sets and convex functions that we
introduced in Chapter~\ref{}:
\boxdef{
A set $S \subseteq \R^n$ is called a \textbf{convex set} if any two points in $S$ contain their line, i.e. for any $\xx,\yy \in S$ we have that $\theta \xx + (1-\theta)\yy \in S$ for any $\theta \in [0,1]$.
}
\boxdef{
For a convex set $S \subseteq \R^n$, we say that a function $f:S \to \mathbb{R}$ is \textbf{convex on $S$} if for any two points $\xx,\yy \in S$ and any $\theta \in [0,1]$ we have that:
%
$$ f\left (\theta \xx + (1-\theta)\yy \right ) \leq \theta f (\xx) + \Big (1-\theta\Big )f(\yy).$$
%
}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{fig/lecture2_plot3d-nonjointconvex.png}
  \caption{This plot shows the function $f(x,y) = xy$. For any fixed
    $y_0$, the function $h(x) = f(x,y_0) = xy_0$ is linear in $x$, and
    so is a convex function in $x$. But is $f$ convex?}
\label{fig:nonjointconvex}
\end{figure}
We will first give an important characterization of convex functions. To do so, we need to characterize multivariate functions via their Taylor expansion.
%\subsection{First Order Taylor Expansion}
\paragraph{Notation for this section.}
In the rest of this section, we frequently consider a multivariate function $f$
whose domain is a set $S \subseteq \R^n$, which we will require to
be open.
When we additionally require that $S$ is convex, we will specify this.
Note that $S = \R^n$ is both open and convex and it suffices to keep
this case in mind.
Things sometimes get more complicated if $S$ is not open, e.g. when the
domain of $f$ has a boundary.
We will leave those complications for another time.
\subsection{First-order Taylor Approximation}
\boxdef{
%Let $S \subseteq \R^n$.
The \textbf{gradient} of a function $f:S \to \R$ at a point $\xx
\in S$, denoted $\grad f(\xx)$, is:
\begin{displaymath}
\grad f(\xx) = \left[\frac{\partial f(\xx)}{\partial \xx(1)},\ldots, \frac{\partial f(\xx)}{\partial \xx(n)}\right]^\trp
\end{displaymath}
}
%
\paragraph{First-order Taylor expansion.}
For a function $f:\R\rightarrow\R$ of a single variable,
differentiable at $x \in \R$
\begin{displaymath}
f(x+\delta) = f(x) + f'(x) \delta + o(\abs{\delta})
\end{displaymath}
where by definition:
\begin{displaymath}
\lim_{\delta\rightarrow 0} \frac{o(\abs{\delta})}{\abs{\delta}} = 0
.
\end{displaymath}
Similarly, a multivariate function $f:S \rightarrow \R$
%, where $S \subseteq\R^n$,
is said to
be
\emph{(Fr\'{e}chet) differentiable} at $\xx \in S$ when there exists
$\grad f(\xx) \in \R^n$ s.t.
\begin{displaymath}
\lim_{\ddelta \to \veczero}
\frac{\norm{f(\xx+\ddelta) - f(\xx) -\grad f(\xx)^{\trp} \ddelta }_2}{
\norm{\ddelta}_2
} = 0
.
\end{displaymath}
Note that this is equivalent to saying that
$f(\xx+\ddelta) = f(\xx) + \grad f(\xx)^\trp\ddelta +
o(\norm{\ddelta}_2)$.
We say that $f$ is \emph{continuously differentiable} on a set $S \subseteq \R^n $ if it is
differentiable and in addition the gradient is continuous on $S$.
A differentiable convex function whose domain is an
open convex set $S \subseteq \R^n $ is always continuously
differentiable\footnote{See p. 248, Corollary 25.5.1 in \emph{Convex
Analysis} by Rockafellar (my version is the Second print,
  1972). Rockafellar's corollary concerns finite convex functions, because he
otherwise allows convex functions that may take on the values $\pm \infty$. }.
\begin{remark*}
In this course, we will generally err on the side of being informal
about functional analysis when we can afford to, and we will not
  worry too much about the details of different notions of differentiability
(e.g. Fr\'{e}chet and Gateaux differentiability), except when
it turns out to be important.
\end{remark*}
\begin{theorem}[Taylor's Theorem, multivariate first-order remainder form]
If $f:S \rightarrow \R$
%, where $S \subseteq \R^n$,
is continuously differentiable over $[\xx, \yy]$, then for
some $\zz \in [\xx, \yy]$,
\begin{displaymath}
f(\yy) = f(\xx) + \grad f(\zz)^\trp (\yy-\xx).
\end{displaymath}
\end{theorem}
This theorem is useful for showing that the function $f$ can be
approximated by the affine function ${\yy \to f(\xx) + \grad f(\xx)^\trp (\yy-\xx)}$
when $\yy$ is ``close to'' $\xx$ in some sense.
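As a quick illustration (added here, not part of the original notes), take $f(\xx) = \norm{\xx}_2^2$. Expanding directly,
\begin{displaymath}
  f(\xx + \ddelta) = \norm{\xx}_2^2 + 2\xx^\trp \ddelta + \norm{\ddelta}_2^2,
\end{displaymath}
so $\grad f(\xx) = 2\xx$ and the remainder $\norm{\ddelta}_2^2$ is indeed $o(\norm{\ddelta}_2)$.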
% \begin{figure}[!tbp]
% \centering
% \begin{minipage}[b]{0.45\textwidth}
% \includegraphics[trim = 0mm 0mm 0mm 0mm, height=60mm]{fig/lec8_taylor1.pdf}
% \end{minipage}
% \hfill
% \begin{minipage}[b]{0.45\textwidth}
% \includegraphics[trim = 0mm 0mm 0mm 0mm, height=60mm]{fig/lec8_taylor2.pdf}
% \end{minipage}
% \caption{Depiction of first-order Taylor expansion.
% \todo{improve this plot}
% }\label{fig:taylor}
% \end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{fig/lecture2_taylor-1st-order-remainder.png}
  \caption{Depiction of the first-order Taylor approximation: near $\xx$,
    $f(\yy)$ is approximated by the affine function
    ${f(\xx) + \grad f(\xx)^\trp (\yy-\xx)}$.}
  \label{fig:taylor-firstorder}
\end{figure}
% LATEXIT NOTES
% f(\boldsymbol{x}) + \mathbf{\nabla} f(\boldsymbol{x})^{\top} (\boldsymbol{y}-\boldsymbol{x})
%
\subsection{Directional Derivatives}
\boxdef{
Let $f:S \rightarrow \R$
%where $S \subseteq \R^n$,
be a function differentiable at $\xx\in S$ and let us consider $\dd\in\R^n$. We define the \textbf{derivative of $f$ at $\xx$ in direction $\dd$} as:
\begin{displaymath}
    \D{f(\xx)}{\dd} = \lim_{\lambda\rightarrow 0}\frac{f(\xx+\lambda \dd)-f(\xx)}{\lambda}
\end{displaymath}
}
\begin{proposition}\label{prp:directional}
$\D{f(\xx)}{\dd} = \grad f (\xx)^\trp \dd$.
\end{proposition}
\begin{proof}
Using the first order expansion of $f$ at $\xx$:
\begin{displaymath}
f(\xx+\lambda \dd) = f(\xx) + \grad f(\xx)^\trp (\lambda \dd) +o(\norm{\lambda \dd}_2)
\end{displaymath}
  hence, dividing by $\lambda \neq 0$ (and noticing that $\|\lambda \dd\|_2 = \abs{\lambda} \norm{ \dd}_2$):
  \begin{displaymath}
    \frac{f(\xx+\lambda \dd)-f(\xx)}{\lambda} = \grad f(\xx)^\trp \dd + \frac{o(\abs{\lambda} \norm{ \dd}_2)}{\lambda}
\end{displaymath}
letting $\lambda$ go to $0$ concludes the proof.
\end{proof}
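For illustration (not in the original notes), consider $f(x,y) = xy$ from earlier. Its gradient is $\grad f(x,y) = (y, x)^\trp$, so at the point $(1,2)$ and in the direction $\dd = (1,1)^\trp$,
\begin{displaymath}
  \D{f(1,2)}{\dd} = \grad f(1,2)^\trp \dd = 2 \cdot 1 + 1 \cdot 1 = 3.
\end{displaymath}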
%\textbf{Example} What is the direction in which $f$ changes the most rapidly? We want to find $d^*\in\arg\max_{\|d\| = 1} f'(x,d) = \grad f(x)^\trp d$.
%From the Cauchy-Schwarz inequality we get $\grad f(x)^\trp d\leq \|\grad f(x)\|\|d\|$ with equality when $d = \lambda \grad f(x),\;\lambda\in\R$. Since $\|d\|=1$ this gives:
%\begin{displaymath}
% d = \pm\frac{\grad f(x)}{\|\grad f(x)\|}
%\end{displaymath}
\subsection{Lower Bounding Convex Functions with Affine Functions}
In order to prove the characterization of convex functions in the next
section we will need the following lemma. This lemma says that any
differentiable convex function can be lower bounded by an affine function.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{fig/lecture2_convex-linear-lb.png}
\caption{The convex function $f(\yy)$ sits above the linear function
in $\yy$ given by
${f(\xx) + \grad f(\xx)^\trp (\yy-\xx)}$.}
  \label{fig:convex-linear-lb}
\end{figure}
% LATEXIT NOTES
% f(\boldsymbol{x}) + \mathbf{\nabla} f(\boldsymbol{x})^{\top} (\boldsymbol{y}-\boldsymbol{x})
%
\begin{theorem}\label{thm:convexlb}
Let $S$ be an open convex subset of $\mathbb{R}^n$, and
let $f:S\to \mathbb{R}$ be a differentiable function.
Then, $f$ is convex if and only if for any $\xx,\yy \in S$ we have that $f(\yy) \geq f(\xx) + \grad f(\xx)^\trp (\yy-\xx)$.
\end{theorem}
\begin{proof}
[$\implies$] Assume $f$ is convex, then for all $\mathbf{x,y} \in S$
and $\theta \in [0,1]$, if we let $\mathbf z = \theta \yy +
(1-\theta)\xx$, we have that
\[
f(\mathbf z) = f((1-\theta)\xx + \theta \yy) \leq
(1-\theta)f(\xx) + \theta f(\yy)
\]
and therefore by subtracting $f(\xx)$ from both sides we get:
\begin{align*}
f\left (\xx + \theta (\yy -\xx) \right) - f(\xx)
& \leq \theta f(\yy) + (1-\theta)f(\xx) - f(\xx) \\
& = \theta f(\yy) - \theta f(\xx).
\end{align*}
%
% \begin{align*}
% f\left (\xx + \theta (\yy -\xx) \right) - f(\xx)
% & = f\left (\theta \yy + (1-\theta)\xx \right) - f(\xx) \\
% & = f(\mathbf z) - f(\xx) \\
% & \leq \theta f(\yy) + (1-\theta)f(\xx) - f(\xx) \\
% & = \theta f(\yy) - \theta f(\xx).
% \end{align*}
%
Thus we get that (for $\theta > 0$):
$$\frac{f\left (\xx + \theta (\yy -\xx) \right) - f(\xx)}{\theta} \leq f(\yy) - f(\xx)$$
Applying Proposition~\ref{prp:directional} with $\dd = \yy - \xx$ we have that: %$\grad f(\xx)^\trp \mathbf{d} = \lim_{\theta \to 0^{+}} \frac{f(\xx + \theta \mathbf{d}) - f(\xx)}{\theta}$ and therefore:
%
$$\grad f(\xx)^\trp (\yy-\xx) = \lim_{\theta \to 0^{+}} \frac{f(\xx + \theta (\yy - \xx)) - f(\xx)}{\theta} \leq f(\yy) - f(\xx).$$
%
[$\impliedby$] Assume that $f(\yy) \geq f(\xx) + \grad f(\xx)^\trp (\yy-\xx)$ for all $\mathbf{x,y} \in S$; we show that $f$ is convex. Let $\mathbf{x,y} \in S$, let $\theta \in [0,1]$, and let $\mathbf z = \theta \yy+(1-\theta)\xx$. By our assumption we have that:
%
\begin{align}
f(\yy) \geq f(\zz) + \grad f(\zz)^\trp (\yy - \mathbf z) \label{eq:y} \\
f(\xx) \geq f(\zz) + \grad f(\zz)^\trp (\xx - \mathbf z) \label{eq:x}
\end{align}
%
Observe that $\yy-\zz = (1-\theta)(\yy-\xx)$ and $\xx-\zz = -\theta
(\yy-\xx)$.
Thus adding
$\theta$ times \eqref{eq:y} to $(1-\theta)$ times \eqref{eq:x} gives
cancellation of the vectors multiplying the gradient, yielding
\begin{align*}
\theta f(\yy) + (1-\theta)f(\xx) \nonumber
& \geq f(\zz) + \grad f(\zz)^\trp \veczero
\\
& =f(\theta \yy+(1-\theta)\xx)
\end{align*}
This is exactly the definition of convexity.
\end{proof}
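As a sanity check, for $f(x) = x^2$ on $\R$ the inequality of Theorem~\ref{thm:convexlb} reads
\begin{displaymath}
y^2 \geq x^2 + 2x(y-x),
\end{displaymath}
which rearranges to $(y-x)^2 \geq 0$ and therefore always holds.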
\section{Conditions for Optimality}
We now want to find necessary and sufficient conditions for local
optimality.
\boxdef{
Consider a differentiable function $f: S \to \R$.
A point $\xx \in S $ at which $\grad f(\xx) = \veczero$ is called a \textbf{stationary point}.
}
\begin{proposition}
\label{prp:gradatmin}
If $\xx$ is a local extremum of a differentiable function
$f:S\rightarrow \R$
then $\grad f(\xx) = \veczero$.
\end{proposition}
\begin{proof}
Let us assume that $\xx$ is a local minimum for $f$. Then for all $\dd\in\R^n$, $f(\xx)\leq f(\xx+\lambda \dd)$ for $\lambda$ small enough. Hence:
\begin{displaymath}
0\leq f(\xx+\lambda \dd)-f(\xx) = \lambda \grad f(\xx)^\trp \dd +
o(\|\lambda \dd\|)
\end{displaymath}
dividing by $\lambda>0$ and letting $\lambda\rightarrow 0^+$, we
obtain $0\leq \grad f(\xx)^\trp \dd$.
But, taking $\dd = - \grad f(\xx) $, we get $0\leq -\norm{\grad f(\xx)}_2^2$.
This implies that $\grad f(\xx) = \veczero$.
The case where $\xx$ is a local maximum can be dealt with similarly.
\end{proof}
\begin{remark}
For this proposition to hold, it is important that $S$ is
open.
\end{remark}
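The converse of Proposition~\ref{prp:gradatmin} does not hold: for example, $f(x_1,x_2) = x_1^2 - x_2^2$ satisfies $\grad f(\veczero) = \veczero$, yet $\veczero$ is neither a local minimum nor a local maximum (it is a saddle point).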
% The above proposition states that a a stationary point can either be a
% minimum, maximum, or a saddle point of the function, and we will see
% in the following section that the Hessian of the function can be used
% to indicate which one exactly.
For convex functions, however, it turns out that every stationary point is a
global minimum.
Together with the proposition above, this shows that for a differentiable convex
function on $\R^n$ a point is optimal if and only if it is stationary.
\begin{proposition}
Let $S \subseteq \mathbb{R}^n$ be an open convex set and let $f: S \to \R$ be a
differentiable and convex function. If $\xx$ is a stationary point then $\xx$ is a global minimum.
\end{proposition}
\begin{proof}
From Theorem~\ref{thm:convexlb} we know that for all $\xx,\yy \in S$: $f(\yy) \geq f(\xx) + \grad f(\xx)^\trp(\yy - \xx)$.
Since $\grad f(\xx) = \veczero$ this implies that $f(\yy) \geq
f(\xx)$. As this holds for any $\yy \in S$, $\xx$ is a global
minimum.
%
%Using the Taylor expansion of $f$ we know that for any $\yy\in S$ there is some point $\zz \in [\bar{\xx},\yy]$ s.t.:
%$$
%f(\yy) = f(\bar{\xx}) + \grad f(\bar{\xx})(\yy - \bar{\xx}) + \frac{1}{2}(\yy - \bar{\xx}) ^TH_{f}(\zz)(\yy - \bar{\xx}).
%$$
%Since $f$ is convex, the Hessian must be positive semi-definite, and since $\bar{\xx}$ is a stationary point, then $ \grad f(\bar{\xx})(\yy - \bar{\xx}) = 0$. This leaves us with:
%$$f(\yy) \geq f(\bar{\xx}).$$
%Since this holds for all $\yy \in S$, we have that $\bar{\xx}$ is a global minimum.
\end{proof}
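For example, the least squares objective $f(\xx) = \frac{1}{2}\norm{A\xx - \mathbf{b}}_2^2$ is differentiable and convex, with $\grad f(\xx) = A^\trp (A\xx - \mathbf{b})$. Hence any $\xx$ satisfying the normal equations $A^\trp A \xx = A^\trp \mathbf{b}$ is a stationary point and therefore a global minimum of $f$.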
% \FloatBarrier
% \newpage
% \section*{Exercises}
% \begin{itemize}
% \item sufficient conditions for optimality (see below)
% \item taylor theorem remainder form
% \item \url{https://en.wikipedia.org/wiki/Min-max_theorem}
% \item \todo{maybe} pointwise max of convex gives convex
% \item \todo{maybe} joint convex, min over one coord, gives another
% convex fn
% \item norm equals eigenvalue claim plus counter example. Claim~\ref{clm:symnorm}
% \end{itemize}
% BONUS, GRADED:
% \begin{enumerate}
% \item For each of the following functions answer these questions:
% \begin{itemize}
% \item Is the function convex?
% \item \todo{add strongly convex?}
% \end{itemize}
% \begin{enumerate}
% \item $f(x) = \abs{x}^6$ on $x \in \R$
% \item $f(x) = \abs{x}^{1.5}$ on $x \in \R$
% \item $f(x) = \exp(x)$ on $x \in \R$
% \item $f(x) = \exp(x)$ on $x \in (-1,1)$
% \item $f(x,y) = \sqrt{x+y}$ on $ (x,y) \in (0,1) \times (0,1)$.
% \item $f(x,y) = \sqrt{x+y}$ on $ (x,y) \in (1/2,1) \times (1/2,1)$.
% \item $f(x,y) = \sqrt{x^2+y^2}$ on $ (x,y) \in \R^2$.
% \item $f(x) = \log(x)$ on $x \in (0,\infty)$
% \item $f(x) = -\log(x)$ on $x \in (0,\infty)$
% \item $f(x,y) = \log(\exp(x) + \exp(y))$ on $ (x,y) \in \R^2$.
% \end{enumerate}
% \end{enumerate}
% \newpage
% \section*{Background}
% \todo{terminology \& citations}
% \begin{itemize}
% \item \url{https://francisbach.com/chebyshev-polynomials/}
% \item
% \url{https://sachdevasushant.github.io/courses/15s-cpsc665/notes/linsys.pdf}
% \begin{itemize}
% \item starts off well but doesn't get to polynomials
% \end{itemize}
% \item
% \url{https://sachdevasushant.github.io/pubs/fast-algos-via-approx-theory.pdf}
% \item spielman
% \begin{itemize}
% \item \url{http://www.cs.yale.edu/homes/spielman/561/2012/lect17-12.pdf}
% \item
% \url{http://www.cs.yale.edu/homes/spielman/561/2012/lect18-12.pdf}
% \item let's use these plus...? maybe sushant on the polynomial?
% \item but we could also use Sushant's monograph?
% \end{itemize}
% \end{itemize}
% \subsection{Sufficient Condition for Optimality}
% \todo{turn this into an exercise}
% The following theorem, which is a fundamental theorem in real analysis, gives us a sufficient (though not necessary) condition for optimality. Before stating the theorem, let's first recall the Bolzano-Weierstrass theorem from real analysis (which you will prove in section this week).
% \boxthm{(Bolzano-Weierstrass)
% Every bounded sequence in $\mathbb{R}^n$ has a convergent subsequence.
% }
% Secondly, we recall the boundedness theorem:
% \boxthm{(Boundedness Theorem)
% Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuous function and $\mathcal{F} \subseteq \mathbb{R}^n$ be nonempty, bounded, and closed.
% Then $f$ is bounded on $\mathcal{F}$.
% }
% %
% \begin{theorem*}[Extreme Value Theorem]
% Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuous function and $\mathcal{F} \subseteq \mathbb{R}^n$ be nonempty, bounded, and closed. Then, the optimization problem $\min f(\xx) : \xx \in \mathcal{F}$ has an optimal solution.
% \end{theorem*}
% % \begin{theorem*}[Weierstrass]
% % Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuous function and $\mathcal{F} \subseteq \mathbb{R}^n$ be nonempty, bounded, and closed. Then, the optimization problem $\min f(\xx) : \xx \in \mathcal{F}$ has an optimal solution.
% % \end{theorem*}
% \begin{proof}
% Let $\alpha$ be the infimum of $f$ over $\mathcal{F}$ (i.e. the largest value for which any point $\xx \in \mathcal{F}$ respects $f(\xx)\geq \alpha$);
% by the Boundedness Theorem, such a value exists, as $f$ is lower-bounded, and the set of lower bounds has a greatest lower bound, $\alpha$.
% Let
% %
% $$\mathcal{F}_k := \{ \xx \in \mathcal{F}: \alpha \leq f(\xx) \leq \alpha + 2^{-k}\}.$$
% %
% $\mathcal{F}_k$ cannot be empty, since if it were, then $\alpha + 2^{-k}$ would be a strictly greater lower bound on $f$ than $\alpha$.
% For each $k$, let $\xx_k$ be some $\xx \in \mathcal{F}_k$.
% $\setof{\xx_k}_{k = 1}^{\infty}$ is a bounded sequence as
% $\mathcal{F}_k \subseteq \mathcal{F}$, so the Bolzano-Weierstrass
% theorem we know that there is a convergent subsequence,
% $\setof{\yy_k}_{k = 1}^{\infty}$, with limit $\bar{\yy}$.
% Because the set is closed, $\bar{\yy} \in \mathcal{F}$.
% By continuity $f(\bar{\yy}) = \lim_{k \to \infty} f(\yy_k)$, while by
% construction, $\lim_{k \to \infty} f(\yy_k) = \alpha$.
% Thus, the optimal solution is $\bar{\yy}$.
% \end{proof}
% \newpage
%\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "agao21_script"
%%% TeX-engine: luatex
%%% End:
\subsection{Absolute risk aversion}
Given a utility function \(u\), we can calculate the (Arrow--Pratt) coefficient of absolute risk aversion:
\(A(x)=-\dfrac{u''(x)}{u'(x)}\)
Constant Absolute Risk Aversion (CARA) means:
\(A(x)=c\)
A standard example is the exponential utility \(u(x)=1-e^{-\alpha x}\) with \(\alpha>0\).
Hyperbolic Absolute Risk Aversion (HARA) means:
\(A(x)=\dfrac{1}{ax+b}\)
Increasing and Decreasing Absolute Risk Aversion (IARA and DARA):
Risk aversion increases or decreases in \(x\), respectively.
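As a quick check for the exponential utility: with \(u(x)=1-e^{-\alpha x}\) and \(\alpha>0\) we have
\(u'(x)=\alpha e^{-\alpha x}\) and \(u''(x)=-\alpha^{2} e^{-\alpha x}\), so
\(A(x)=-\dfrac{u''(x)}{u'(x)}=\alpha\),
which is constant in \(x\), as CARA requires.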
Prominent tools for indexing and searching datasets and text collections are:
\begin{itemize}
\item Lucene and Solr \\
\url{http://lucene.apache.org/} \\
\url{http://lucene.apache.org/solr/}
\item Terrier \\
\url{http://terrier.org/}
\end{itemize}
Building question answering systems is a complex task; it thus helps to exploit high-level tools for component integration as well as existing architectures for question answering systems:
\begin{itemize}
\item Apache UIMA \\
\url{http://uima.apache.org}
\item Open Advancement of Question Answering Systems (OAQA) \\
\url{http://oaqa.github.io}
\item Open Knowledgebase and Question Answering (OKBQA) \\
\url{http://www.okbqa.org} \\
\url{https://github.com/okbqa}
\item openQA\\
\url{http://aksw.org/Projects/openQA.html}
\end{itemize}
In the remainder of the section we provide a list of resources and tools that can be exploited especially for the linguistic analysis of a question and the matching of natural language expressions with vocabulary elements from a dataset.
\subsubsection*{Lexical resources}
\begin{itemize}
\item WordNet \\
\url{http://wordnet.princeton.edu/}
\item Wiktionary \\
\url{http://www.wiktionary.org/} \\
API: \url{https://www.mediawiki.org/wiki/API:Main_page}
\item FrameNet \\
\url{https://framenet.icsi.berkeley.edu/fndrupal/}
\item English lexicon for DBpedia 3.8 (in \emph{lemon}\footnote{\url{http://lemon-model.net}} format) \\
\url{http://lemon-model.net/lexica/dbpedia_en/}
\item PATTY (collection of semantically-typed relational patterns) \\
\url{http://www.mpi-inf.mpg.de/yago-naga/patty/}
\end{itemize}
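For example, lexical resources like WordNet can be queried programmatically. The sketch below is illustrative and assumes a Python environment with NLTK~3 and its WordNet corpus data installed:
\begin{verbatim}
# Illustrative sketch: collect WordNet synonyms for a question keyword.
from nltk.corpus import wordnet as wn

def synonyms(word):
    names = set()
    for synset in wn.synsets(word):       # all senses of the word
        for lemma in synset.lemmas():     # lemmas belonging to each sense
            names.add(lemma.name())
    return names

print(synonyms("capital"))
\end{verbatim}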
\subsubsection*{Text processing}
\begin{itemize}
\item GATE (General Architecture for Text Engineering) \\
\url{http://gate.ac.uk/}
\item NLTK (Natural Language Toolkit) \\
\url{http://nltk.org/}
\item Stanford NLP \\
\url{http://www-nlp.stanford.edu/software/index.shtml}
\item LingPipe \\
\url{http://alias-i.com/lingpipe/index.html}
\end{itemize}
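As an illustration of a typical first analysis step (assuming NLTK~3 with its standard tokeniser and tagger models downloaded; any of the toolkits above could be used analogously), a question can be tokenised and part-of-speech tagged before further processing:
\begin{verbatim}
import nltk

question = "Which river flows through Bucharest?"
tokens = nltk.word_tokenize(question)   # split the question into tokens
tagged = nltk.pos_tag(tokens)           # attach part-of-speech tags
print(tagged)
\end{verbatim}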
\emph{Romanian:}
\begin{itemize}
\item
\url{http://tutankhamon.racai.ro/webservices/TextProcessing.aspx}
\end{itemize}
\emph{Dependency parser:}
\begin{itemize}
\item MALT \\
\url{http://www.maltparser.org/} \\
Languages (pre-trained): English, French, Swedish
\item Stanford parser \\
\url{http://nlp.stanford.edu/software/lex-parser.shtml} \\
Languages: English, German, Chinese, and others
\item CHAOS \\
\url{http://art.uniroma2.it/external/chaosproject/} \\
Languages: English, Italian
\end{itemize}
\subsubsection*{Named Entity Recognition}
\begin{itemize}
\item DBpedia Spotlight \\
\url{http://spotlight.dbpedia.org}
\item FOX (Federated Knowledge Extraction Framework) \\
\url{http://fox.aksw.org}
\item NERD (Named Entity Recognition and Disambiguation) \\
\url{http://nerd.eurecom.fr/}
\item Stanford Named Entity Recognizer \\
\url{http://nlp.stanford.edu/software/CRF-NER.shtml}
\end{itemize}
\subsubsection*{String similarity and semantic relatedness}
\begin{itemize}
\item Wikipedia Miner \\
\url{http://wikipedia-miner.cms.waikato.ac.nz/}
\item WS4J (Java API for several semantic relatedness algorithms) \\
\url{https://code.google.com/p/ws4j/}
\item SecondString (string matching) \\
\url{http://secondstring.sourceforge.net}
\end{itemize}
\subsubsection*{Textual Entailment}
\begin{itemize}
\item DIRT \\
Paraphrase Collection: \url{http://aclweb.org/aclwiki/index.php?title=DIRT_Paraphrase_Collection} \\
Demo: \url{http://demo.patrickpantel.com/demos/lexsem/paraphrase.htm}
\item PPDB (The Paraphrase Database) \\
\url{http://www.cis.upenn.edu/~ccb/ppdb/}
\end{itemize}
\subsubsection*{Translation systems}
\begin{itemize}
\item English $\leftrightarrow$ \{Romanian,German,Spanish\} \\
\url{http://www.racai.ro/tools/translation/racai-translation-system/}
\end{itemize}
\subsubsection*{Language-specific resources and tools}
\emph{Romanian:}
\begin{itemize}
\item \url{http://nlptools.info.uaic.ro/Resources.jsp}
\end{itemize}
\subsubsection*{Anything missing?}
If you know of a cool resource or tool that we forgot to include (especially for the challenge languages other than English),
please drop us a note!
% Default to the notebook output style
% Inherit from the specified cell style.
\definecolor{orange}{cmyk}{0,0.4,0.8,0.2}
\definecolor{darkorange}{rgb}{.71,0.21,0.01}
\definecolor{darkgreen}{rgb}{.12,.54,.11}
\definecolor{myteal}{rgb}{.26, .44, .56}
\definecolor{gray}{gray}{0.45}
\definecolor{lightgray}{gray}{.95}
\definecolor{mediumgray}{gray}{.8}
\definecolor{inputbackground}{rgb}{.95, .95, .85}
\definecolor{outputbackground}{rgb}{.95, .95, .95}
\definecolor{traceback}{rgb}{1, .95, .95}
% ansi colors
\definecolor{red}{rgb}{.6,0,0}
\definecolor{green}{rgb}{0,.65,0}
\definecolor{brown}{rgb}{0.6,0.6,0}
\definecolor{blue}{rgb}{0,.145,.698}
\definecolor{purple}{rgb}{.698,.145,.698}
\definecolor{cyan}{rgb}{0,.698,.698}
\definecolor{lightgray}{gray}{0.5}
% bright ansi colors
\definecolor{darkgray}{gray}{0.25}
\definecolor{lightred}{rgb}{1.0,0.39,0.28}
\definecolor{lightgreen}{rgb}{0.48,0.99,0.0}
\definecolor{lightblue}{rgb}{0.53,0.81,0.92}
\definecolor{lightpurple}{rgb}{0.87,0.63,0.87}
\definecolor{lightcyan}{rgb}{0.5,1.0,0.83}
% commands and environments needed by pandoc snippets
% extracted from the output of `pandoc -s`
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\newenvironment{Shaded}{}{}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}}
\newcommand{\RegionMarkerTok}[1]{{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}}
\newcommand{\NormalTok}[1]{{#1}}
% Define a nice break command that doesn't care if a line doesn't already
% exist.
\def\br{\hspace*{\fill} \\* }
% Math Jax compatability definitions
\def\gt{>}
\def\lt{<}
% Document parameters
\title{}
% Pygments definitions
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
\let\PY@ul=\relax \let\PY@tc=\relax%
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else%
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}}
\expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf}
\expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}}
\expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit}
\expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}}
\expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
\expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}}
\expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}}
\expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}}
\expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}}
\expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}}
\expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}}
\expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}}
\expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}}
\expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}}
\expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}}
\expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}}
\expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
\expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}}
\expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}}
\expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}}
\expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}}
\expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZam{\char`\&}
\def\PYZlt{\char`\<}
\def\PYZgt{\char`\>}
\def\PYZsh{\char`\#}
\def\PYZpc{\char`\%}
\def\PYZdl{\char`\$}
\def\PYZhy{\char`\-}
\def\PYZsq{\char`\'}
\def\PYZdq{\char`\"}
\def\PYZti{\char`\~}
% for compatibility with earlier versions
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
% Exact colors from NB
\definecolor{incolor}{rgb}{0.0, 0.0, 0.5}
\definecolor{outcolor}{rgb}{0.545, 0.0, 0.0}
% Prevent overflowing lines due to hard-to-break entities
\sloppy
% Setup hyperref package
\hypersetup{
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
urlcolor=blue,
linkcolor=darkorange,
citecolor=darkgreen,
}
% Slightly bigger margins than the latex defaults
\begin{document}
\maketitle
\section{Classes}\label{classes}
Variables, lists, dictionaries, etc.\ in Python are all objects. Without
getting into the theory of object-oriented programming, the concepts
will be explained along the way in this tutorial.
A class is declared as follows
class class\_name:
\begin{verbatim}
Functions
\end{verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}1}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{k}{pass}
\end{Verbatim}
\textbf{pass} in Python means ``do nothing''.
Above, a class named ``FirstClass'' is declared. Now consider an
``egclass'' which should have all the characteristics of ``FirstClass''.
All you have to do is assign ``egclass'' the result of calling
``FirstClass()''. In Python jargon this is called creating an instance;
``egclass'' is an instance of ``FirstClass''.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}2}]:} \PY{n}{egclass} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}3}]:} \PY{n+nb}{type}\PY{p}{(}\PY{n}{egclass}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}3}]:} instance
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}4}]:} \PY{n+nb}{type}\PY{p}{(}\PY{n}{FirstClass}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}4}]:} classobj
\end{Verbatim}
Now let us add some ``functionality'' to the class so that our
``FirstClass'' is defined in a better way. A function inside a class is
called a ``method'' of that class.
Most classes will have a function named ``\_\_init\_\_''. This is one of
the so-called magic methods. In this method you initialize the variables
of the class, and any other initial setup that should apply to every
instance is specified here as well. A variable inside
a class is called an attribute.
This helps simplify the process of initializing an instance. For
example,
without the magic method \_\_init\_\_, which is otherwise called the
constructor, one would have to define an \textbf{init( )} method
explicitly and call it by hand:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor} }]:} \PY{n}{eg0} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{p}{)}
\PY{n}{eg0}\PY{o}{.}\PY{n}{init}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
But when the constructor is defined, \_\_init\_\_ is called automatically,
thus initializing the instance that was created.
We will make our ``FirstClass'' accept two variables, name and symbol.
I will explain ``self'' in a while.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}6}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{symbol}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{=} \PY{n}{symbol}
\end{Verbatim}
Now that we have defined the \_\_init\_\_ method,
we can create an instance of FirstClass, which now accepts two arguments.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}7}]:} \PY{n}{eg1} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{one}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{eg2} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{two}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}8}]:} \PY{k}{print} \PY{n}{eg1}\PY{o}{.}\PY{n}{name}\PY{p}{,} \PY{n}{eg1}\PY{o}{.}\PY{n}{symbol}
\PY{k}{print} \PY{n}{eg2}\PY{o}{.}\PY{n}{name}\PY{p}{,} \PY{n}{eg2}\PY{o}{.}\PY{n}{symbol}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
one 1
two 2
\end{Verbatim}
The \textbf{dir( )} function comes in very handy for looking into what the
class contains and which methods it offers.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}9}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{FirstClass}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}9}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_']
\end{Verbatim}
\textbf{dir( )} of an instance also shows its defined attributes.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}10}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{eg1}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}10}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'name', 'symbol']
\end{Verbatim}
Changing the FirstClass class a bit,
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}11}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{symbol}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{n} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{s} \PY{o}{=} \PY{n}{symbol}
\end{Verbatim}
Changing self.name and self.symbol to self.n and self.s respectively
will yield,
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{eg1} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{one}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{eg2} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{two}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}13}]:} \PY{k}{print} \PY{n}{eg1}\PY{o}{.}\PY{n}{name}\PY{p}{,} \PY{n}{eg1}\PY{o}{.}\PY{n}{symbol}
\PY{k}{print} \PY{n}{eg2}\PY{o}{.}\PY{n}{name}\PY{p}{,} \PY{n}{eg2}\PY{o}{.}\PY{n}{symbol}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-3717d682d1cf> in <module>()
----> 1 print eg1.name, eg1.symbol
2 print eg2.name, eg2.symbol
AttributeError: FirstClass instance has no attribute 'name'
\end{Verbatim}
AttributeError! Remember that variables are nothing but attributes inside a
class? This means we have not used the correct attribute name for the
instance.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}14}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{eg1}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}14}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'n', 's']
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}15}]:} \PY{k}{print} \PY{n}{eg1}\PY{o}{.}\PY{n}{n}\PY{p}{,} \PY{n}{eg1}\PY{o}{.}\PY{n}{s}
\PY{k}{print} \PY{n}{eg2}\PY{o}{.}\PY{n}{n}\PY{p}{,} \PY{n}{eg2}\PY{o}{.}\PY{n}{s}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
one 1
two 2
\end{Verbatim}
So now we have fixed the error. Let us compare the two examples
that we saw.
When I declared self.name and self.symbol, there was no attribute error
for eg1.name and eg1.symbol, and when I declared self.n and self.s, there
was no attribute error for eg1.n and eg1.s.
From this we can conclude that self is nothing but the instance
itself.
Remember, self is not predefined; it is user-defined. You can use
any name you are comfortable with, but it has become common practice
to use self.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}16}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n}{asdf1234}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{symbol}\PY{p}{)}\PY{p}{:}
\PY{n}{asdf1234}\PY{o}{.}\PY{n}{n} \PY{o}{=} \PY{n}{name}
\PY{n}{asdf1234}\PY{o}{.}\PY{n}{s} \PY{o}{=} \PY{n}{symbol}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}17}]:} \PY{n}{eg1} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{one}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{eg2} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{two}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}18}]:} \PY{k}{print} \PY{n}{eg1}\PY{o}{.}\PY{n}{n}\PY{p}{,} \PY{n}{eg1}\PY{o}{.}\PY{n}{s}
\PY{k}{print} \PY{n}{eg2}\PY{o}{.}\PY{n}{n}\PY{p}{,} \PY{n}{eg2}\PY{o}{.}\PY{n}{s}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
one 1
two 2
\end{Verbatim}
Although eg1 and eg2 are instances of FirstClass, they are not necessarily
limited to the attributes declared inside FirstClass itself. An instance can
be extended by attaching other attributes to it without those attributes
being declared inside FirstClass.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{n}{eg1}\PY{o}{.}\PY{n}{cube} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{eg2}\PY{o}{.}\PY{n}{cube} \PY{o}{=} \PY{l+m+mi}{8}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}20}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{eg1}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}20}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'cube', 'n', 's']
\end{Verbatim}
Just like the global and local variables we saw earlier, classes
have their own types of variables.
Class attribute: an attribute defined outside any method, which is shared
by all the instances.
Instance attribute: an attribute defined inside a method (typically
\_\_init\_\_) on self, which is unique to each instance.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}21}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{n}{test} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{test}\PY{l+s}{\PYZsq{}}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{symbol}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{=} \PY{n}{symbol}
\end{Verbatim}
Here test is a class attribute and name is an instance attribute.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}22}]:} \PY{n}{eg3} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Three}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}23}]:} \PY{k}{print} \PY{n}{eg3}\PY{o}{.}\PY{n}{test}\PY{p}{,} \PY{n}{eg3}\PY{o}{.}\PY{n}{name}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
test Three
\end{Verbatim}
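To make the difference visible, here is a small additional example (not one of
the original cells): create a second instance and note that the class attribute
is shared while the instance attributes differ.
\begin{verbatim}
eg_shared = FirstClass('Four', 4)
print eg3.test, eg_shared.test    # the class attribute is the same for both
print eg3.name, eg_shared.name    # the instance attributes differ
\end{verbatim}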
Let us add some more methods to FirstClass.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}24}]:} \PY{k}{class} \PY{n+nc}{FirstClass}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{symbol}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{=} \PY{n}{symbol}
\PY{k}{def} \PY{n+nf}{square}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{*} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol}
\PY{k}{def} \PY{n+nf}{cube}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{*} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{*} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol}
\PY{k}{def} \PY{n+nf}{multiply}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{x}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{symbol} \PY{o}{*} \PY{n}{x}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}25}]:} \PY{n}{eg4} \PY{o}{=} \PY{n}{FirstClass}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Five}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}26}]:} \PY{k}{print} \PY{n}{eg4}\PY{o}{.}\PY{n}{square}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{n}{eg4}\PY{o}{.}\PY{n}{cube}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
25
125
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}27}]:} \PY{n}{eg4}\PY{o}{.}\PY{n}{multiply}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}27}]:} 10
\end{Verbatim}
The above can also be written as,
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}28}]:} \PY{n}{FirstClass}\PY{o}{.}\PY{n}{multiply}\PY{p}{(}\PY{n}{eg4}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}28}]:} 10
\end{Verbatim}
\subsection{Inheritance}\label{inheritance}
There might be cases where a new class should have all the
characteristics of an already defined class. The new class can then
``inherit'' from the previous class and add its own methods to it. This is
called inheritance.
Consider class SoftwareEngineer which has a method salary.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}29}]:} \PY{k}{class} \PY{n+nc}{SoftwareEngineer}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{age}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{age} \PY{o}{=} \PY{n}{age}
\PY{k}{def} \PY{n+nf}{salary}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{value}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money} \PY{o}{=} \PY{n}{value}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{earns}\PY{l+s}{\PYZdq{}}\PY{p}{,}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}30}]:} \PY{n}{a} \PY{o}{=} \PY{n}{SoftwareEngineer}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Kartik}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{26}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}31}]:} \PY{n}{a}\PY{o}{.}\PY{n}{salary}\PY{p}{(}\PY{l+m+mi}{40000}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Kartik earns 40000
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}32}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{SoftwareEngineer}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}32}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'salary']
\end{Verbatim}
Now consider another class Artist which tells us about the amount of
money an artist earns and his artform.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}33}]:} \PY{k}{class} \PY{n+nc}{Artist}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{name}\PY{p}{,}\PY{n}{age}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{n}{name}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{age} \PY{o}{=} \PY{n}{age}
\PY{k}{def} \PY{n+nf}{money}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{value}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money} \PY{o}{=} \PY{n}{value}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{earns}\PY{l+s}{\PYZdq{}}\PY{p}{,}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money}
\PY{k}{def} \PY{n+nf}{artform}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{job}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job} \PY{o}{=} \PY{n}{job}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{is a}\PY{l+s}{\PYZdq{}}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}34}]:} \PY{n}{b} \PY{o}{=} \PY{n}{Artist}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Nitin}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{20}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}35}]:} \PY{n}{b}\PY{o}{.}\PY{n}{money}\PY{p}{(}\PY{l+m+mi}{50000}\PY{p}{)}
\PY{n}{b}\PY{o}{.}\PY{n}{artform}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Musician}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Nitin earns 50000
Nitin is a Musician
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}36}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{Artist}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}36}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'artform', 'money']
\end{Verbatim}
The money method and the salary method do the same thing. So we can
generalize the method as salary and let the Artist class inherit from the
SoftwareEngineer class. The Artist class now becomes:
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}37}]:} \PY{k}{class} \PY{n+nc}{Artist}\PY{p}{(}\PY{n}{SoftwareEngineer}\PY{p}{)}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{artform}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{job}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job} \PY{o}{=} \PY{n}{job}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{is a}\PY{l+s}{\PYZdq{}}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}38}]:} \PY{n}{c} \PY{o}{=} \PY{n}{Artist}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Nishanth}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{21}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}39}]:} \PY{n+nb}{dir}\PY{p}{(}\PY{n}{Artist}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}39}]:} ['\_\_doc\_\_', '\_\_init\_\_', '\_\_module\_\_', 'artform', 'salary']
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}40}]:} \PY{n}{c}\PY{o}{.}\PY{n}{salary}\PY{p}{(}\PY{l+m+mi}{60000}\PY{p}{)}
\PY{n}{c}\PY{o}{.}\PY{n}{artform}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Dancer}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Nishanth earns 60000
Nishanth is a Dancer
\end{Verbatim}
Suppose that, while inheriting, a particular method is not suitable for the
new class. One can override this method by defining a method
with the same name inside the new class.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}41}]:} \PY{k}{class} \PY{n+nc}{Artist}\PY{p}{(}\PY{n}{SoftwareEngineer}\PY{p}{)}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{artform}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{job}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job} \PY{o}{=} \PY{n}{job}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{is a}\PY{l+s}{\PYZdq{}}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{job}
\PY{k}{def} \PY{n+nf}{salary}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{value}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money} \PY{o}{=} \PY{n}{value}
\PY{k}{print} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{name}\PY{p}{,}\PY{l+s}{\PYZdq{}}\PY{l+s}{earns}\PY{l+s}{\PYZdq{}}\PY{p}{,}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{money}
\PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{I am overriding the SoftwareEngineer class}\PY{l+s}{\PYZsq{}}\PY{l+s}{s salary method}\PY{l+s}{\PYZdq{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{c} \PY{o}{=} \PY{n}{Artist}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Nishanth}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+m+mi}{21}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}43}]:} \PY{n}{c}\PY{o}{.}\PY{n}{salary}\PY{p}{(}\PY{l+m+mi}{60000}\PY{p}{)}
\PY{n}{c}\PY{o}{.}\PY{n}{artform}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Dancer}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Nishanth earns 60000
I am overriding the SoftwareEngineer class's salary method
Nishanth is a Dancer
\end{Verbatim}
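If the parent's behaviour is still needed inside the overriding method, it can
be called explicitly. The following is a hypothetical variant of the Artist
class above; with these old-style Python 2 classes the parent class is named
directly rather than using super.
\begin{verbatim}
class Artist(SoftwareEngineer):
    def salary(self, value):
        # Reuse the parent implementation, then add Artist-specific output.
        SoftwareEngineer.salary(self, value)
        print "(set via the inherited SoftwareEngineer method)"

d = Artist('Nishanth', 21)
d.salary(60000)
\end{verbatim}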
If we are not sure how many times the methods will be called, it becomes
difficult to declare a separate variable to carry each result; hence it is
better to declare a list and append each result to it.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}44}]:} \PY{k}{class} \PY{n+nc}{emptylist}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{data} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{k}{def} \PY{n+nf}{one}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}\PY{n}{x}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{data}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{x}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{two}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{x} \PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{data}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{x}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{three}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{x}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{data}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{x}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{3}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{xc} \PY{o}{=} \PY{n}{emptylist}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{n}{xc}\PY{o}{.}\PY{n}{one}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{)}
\PY{k}{print} \PY{n}{xc}\PY{o}{.}\PY{n}{data}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[1]
\end{Verbatim}
Since xc.data is a list, direct list operations can also be performed on it.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{xc}\PY{o}{.}\PY{n}{data}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{l+m+mi}{8}\PY{p}{)}
\PY{k}{print} \PY{n}{xc}\PY{o}{.}\PY{n}{data}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[1, 8]
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{xc}\PY{o}{.}\PY{n}{two}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}
\PY{k}{print} \PY{n}{xc}\PY{o}{.}\PY{n}{data}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[1, 8, 9]
\end{Verbatim}
If the number of input arguments varies from instance to instance, an
asterisk (``*args'') can be used as shown.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}49}]:} \PY{k}{class} \PY{n+nc}{NotSure}\PY{p}{:}
\PY{k}{def} \PY{n+nf}{\PYZus{}\PYZus{}init\PYZus{}\PYZus{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{o}{*}\PY{n}{args}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{data} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{\PYZsq{}}\PY{o}{.}\PY{n}{join}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{args}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}50}]:} \PY{n}{yz} \PY{o}{=} \PY{n}{NotSure}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{I}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Do}\PY{l+s}{\PYZsq{}} \PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Not}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Know}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{What}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{To}\PY{l+s}{\PYZsq{}}\PY{p}{,}\PY{l+s}{\PYZsq{}}\PY{l+s}{Type}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{yz}\PY{o}{.}\PY{n}{data}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}51}]:} 'IDoNotKnowWhatToType'
\end{Verbatim}
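Similarly, keyword arguments can be collected with a double asterisk. This is a
small illustrative addition rather than one of the original notebook cells:
\begin{verbatim}
class NotSureEither:
    def __init__(self, **kwargs):
        # kwargs is a dictionary of all keyword arguments passed in.
        self.data = kwargs

ab = NotSureEither(name='one', symbol=1)
print ab.data
\end{verbatim}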
\section{Where to go from here?}\label{where-to-go-from-here}
Practice alone can help you get the hang of Python. Give yourself
problem statements and solve them. You can also sign up to a
competitive coding platform for problem statements. The more you code,
the more you discover and the more you start appreciating the language.
Now that you have been introduced to Python, you can try out the
different Python libraries in the field of your interest. I highly
recommend checking out this curated list of Python frameworks,
libraries and software: http://awesome-python.com
The official Python documentation: https://docs.python.org/2/
Enjoy solving problem statements, because life is short and you need Python!
Peace.
Rajath Kumar M.P ( rajathkumar dot exe at gmail dot com)
% Add a bibliography block to the postdoc
\end{document}
\section{Models}
Different models and model architectures are the result of (restricted) computational resources, new ideas and algorithms, specific tasks, and so on. To approach our problem, we figured that experimenting with different models would likely point us into a direction of certain architectural features which work well with the data at hand.
\subsection{AlexNet}
\citeauthor{Krizhevsky2012} presented their influential \emph{AlexNet} in \citeyear{Krizhevsky2012}. It consists of only five convolutional layers, some additional max-pooling layers and three fully connected layers at the end, so its structure is rather simple. Nonetheless, it is packed with more than 40 million trainable parameters (the fully connected layers at the end are very large), so it is not exactly computationally cheap to train. AlexNet was intended to serve as our baseline model, from which we would build things up.
\subsection{EfficientNet}
We then implemented a pre-trained EfficientNet with a custom top layer. The EfficientNet model family was first introduced in 2019 (\citeauthor{Tan2019}) with the idea of providing scalable CNN architectures. Its architecture is largely based on MobileNet's inverted residual blocks, in which bottlenecks serve as the in- and output of the residual connections \citep{Sandler2018}. Multiple networks of different sizes were introduced for image classification, many achieving better results than state-of-the-art models such as the MobileNet it built upon and ResNet, while being significantly smaller in terms of depth, width, and image resolution \citep{Tan2019}. Two years later, a second, even more powerful and efficient generation was introduced: EfficientNetV2 \citep{Tan2021}. For our purposes, given the low input resolution of $224\times224$, the model of choice is the EfficientNetV2-B0, the smallest model of the family. The model is pre-trained on ImageNet21k, and we added a custom fully-connected layer at the end to match our classification task.
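The snippet below is a minimal, illustrative sketch of such a transfer-learning setup in TensorFlow/Keras, not our exact configuration: the number of output classes is a placeholder, and the Keras application used here ships with standard ImageNet weights rather than the ImageNet21k pre-training mentioned above.
\begin{verbatim}
import tensorflow as tf

NUM_CLASSES = 100  # placeholder for the number of classes in the task

# Pre-trained backbone without its classification head.
backbone = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # custom top
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}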
\subsection{InceptionNet}
For an additional comparison, we implemented yet another supposedly very efficiently designed CNN, the InceptionV3 as introduced by \citeauthor{Szegedy2015} in \citeyear{Szegedy2015}. Its low computational cost makes the Inception architecture an attractive choice when resources are limited, e.g.\ in mobile scenarios. The Inception network was a milestone in the development of CNN classifiers, with InceptionV3 reaching $78.1\%$ accuracy on the ImageNet dataset.
The fundamental idea behind the Inception network is the inception block. In a traditional convolutional neural network, each layer's output is the input of the next layer until a prediction is produced. The inception block breaks this strictly sequential structure apart: the previous layer's output is passed to four different operations in parallel and their outputs are concatenated, so the network is made wider instead of deeper. The naïve approach consists of a $1\times1$ convolution layer, a $3\times3$ convolution layer and a $5\times5$ convolution layer, alongside a maximum pooling operation, followed by a concatenation layer. Due to the high computational cost, especially of the $5\times5$ filter, $1\times1$ filters are first added to the naïve inception module. This leads to a reduction in computations of about $90\%$.
Additionally, the $1\times1$ convolutional filters allow learning cross-channel patterns across the depth of the input data.
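To make the block structure concrete, the following is an illustrative Keras sketch of such an inception block with dimension-reducing $1\times1$ convolutions; the filter counts are arbitrary and do not correspond to the actual InceptionV3 configuration:
\begin{verbatim}
from tensorflow.keras import layers

def inception_block(x):
    # 1x1 convolution branch.
    b1 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)
    # 1x1 reduction followed by a 3x3 convolution.
    b2 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(96, 3, padding="same", activation="relu")(b2)
    # 1x1 reduction followed by a 5x5 convolution.
    b3 = layers.Conv2D(16, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(32, 5, padding="same", activation="relu")(b3)
    # 3x3 max pooling followed by a 1x1 projection.
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(32, 1, padding="same", activation="relu")(b4)
    # The network is made wider: concatenate the branches along the channels.
    return layers.Concatenate()([b1, b2, b3, b4])
\end{verbatim}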
%picture of Convolution
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{images/InceptionNet.jpg}
\caption{Inception Block}
\label{fig:incepblock}
\end{figure}
"alphanum_fraction": 0.8119773402,
"avg_line_length": 154.4583333333,
"ext": "tex",
"hexsha": "752bb6871a6ca79a8ef9683593ed5053a18e4fe4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "fb5cfa7036f1f5a9ed9752413fbabb76355ed97b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "dhesenkamp/turtleRecall",
"max_forks_repo_path": "documentation/source/sections/Models.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "fb5cfa7036f1f5a9ed9752413fbabb76355ed97b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "dhesenkamp/turtleRecall",
"max_issues_repo_path": "documentation/source/sections/Models.tex",
"max_line_length": 1070,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "fb5cfa7036f1f5a9ed9752413fbabb76355ed97b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "dhesenkamp/turtleRecall",
"max_stars_repo_path": "documentation/source/sections/Models.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-28T14:54:52.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-28T14:54:52.000Z",
"num_tokens": 795,
"size": 3707
} |
\subsubsection{\stid{6.01} LANL ATDM Tools}
\paragraph{Overview}
The Kitsune Project, part of LANL's ATDM CSE efforts, provides a
compiler-focused infrastructure for improving various aspects of the
exascale programming environment. At present, our efforts are primarily
focused on advanced LLVM compiler and tool infrastructure to support the
use of a \emph{parallel-aware} intermediate representation. In
addition, we are actively involved in the Flang Fortran compiler that
is now an official sub-project within the overall LLVM infrastructure.
All these efforts include interactions across ECP as well as
with the broader LLVM community and industry.
\paragraph{Key Challenges}
A key challenge to our efforts is reaching agreement within the broader community that
a parallel intermediate representation is beneficial and needed within
LLVM. This not only requires showing the benefits but also providing a
full implementation for evaluation and feedback from the community.
In addition, significant new compiler capabilities represent a
considerable effort to implement and involve many complexities and technical
challenges. These efforts and the process of up-streaming potential
design and implementation changes do involve some amount of time and
associated risk.
Additional challenges come from a range of complex issues surrounding
target architectures for exascale systems. Our use of the LLVM
infrastructure helps reduce many challenges here since many processor
vendors and system providers now leverage and use LLVM for their
commercial compilers.
\paragraph{Solution Strategy}
Given the project challenges, our approach takes aspects of
today's node-level programming systems (e.g. OpenMP and Kokkos) and
popular programming languages (e.g. C++ and Fortran) into
consideration and aims to improve and expand upon their capabilities
to address the needs of ECP. This allows us to attempt to strike a
balance between incremental improvements to existing infrastructure
and more aggressive techniques that seek to provide innovative
solutions, thereby managing risk while also providing the ability to introduce new
breakthrough technologies.
Unlike current designs, our approach introduces the notion of explicit
parallel constructs into the LLVM intermediate representation, building
off of work done at MIT on Tapir~\cite{2.3.6.01:kitsune:Schardl:2017}.
We are exploring extensions to this work as well as making some changes
to fundamental data structures within the LLVM infrastructure to assist with
and improve analysis and optimization passes.
\paragraph{Recent Progress}
Our primary focus is the delivery of capabilities for LANL's ATDM
Ristra application (AD 2.2.5.01). In support of the requirements for
Ristra, we are targeting the lowering of Kokkos constructs directly
into the parallel-IR representation. At present, this requires
explicit lowering of Kokkos constructs and we have basic support
for \texttt{parallel\_for} and \texttt{parallel\_reduce} in place. In
addition, we are looking at replacing LLVM's dominator tree, a key
data structure for optimizations including parallelization and
memory usage analysis, with a \emph{dominator directed-acyclic-graph}
(DAG). This work is currently underway and should approach its initial
implementation early in the 2020 calendar year. We are also
actively watching recent events within the LLVM community around
multi-level intermediate representations (MLIR) and the relationship
they have with parallel semantics, analysis, optimization, and code
generation. Furthermore, we are participating in ongoing discussions within the
community about general details behind parallel intermediate forms.
\paragraph{Next Steps}
The key next step for our feature set is to complete implementation of Kokkos use cases
that match the needs of Ristra. Where possible, we will also
explore the broader set of use cases within ECP's overall use of
Kokkos constructs. The Kitsune toolchain is still very much an
active \emph{proof-of-concept} effort based on Clang (C and C++) with
plans to add support for Fortran via the newly established Flang front
end within LLVM (ST 2.3.2.12). Even though it is not yet production
ready, we are actively updating and releasing source code and the supporting
infrastructure for deployment as an early evaluation candidate. In
addition to these components, we will actively begin to explore targeting
the exascale systems (Aurora, Frontier, and El Capitan).
\documentclass[jou]{apa6}
\usepackage[american]{babel}
\usepackage{hyperref}
\usepackage{amsthm}
\usepackage{thmtools}
\usepackage{csquotes}
\usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\addbibresource{bibliography.bib}
\title{2019-12-14 Class Summary: Chinese Remainder Theorem}
\author{Kalvis Aps\={\i}tis, {\small \tt kalvis.apsitis}{\small \tt @gmail.com}}
\affiliation{Riga Business School (RBS), University of Latvia (LU)}
\leftheader{NMS Selection Training in Number Theory: 2019-12-14 Class Summary}
\declaretheoremstyle[headfont=\normalfont\bfseries,notefont=\mdseries\bfseries,bodyfont = \normalfont,headpunct={:}]{normalhead}
\declaretheorem[name={Example}, style=normalhead,numberwithin=section]{problem}
\setcounter{section}{2}
\abstract{This document lists the key results from the December 14, 2019 class in
Number Theory. This training for competition math is aimed at 16\textendash{}18 year
olds (typically, Grades 10\textendash{}12).
}
\keywords{Chinese remainder theorem, Bezout's identity, Modular arithmetic.}
\begin{document}
\maketitle
\section{Examples}
These are {\bf not} the homework problems; this is just supplementary study material
with examples taken from the lecture, most of which were analyzed in class.
In case you have forgotten something, hints for these examples are given at the end of this document.
Full notes and solutions in Latvian: \url{http://linen-tracer-682.appspot.com/numtheory-tales/tale-numtheory-jun03-crt/content.html#/section}
\begin{problem}
Find integers $x,y$ such that $18x + 42y = 6$.
(Here we have chosen $6 = \mathit{gcd}(18,42)$.)
\end{problem}
\begin{problem}
Prove that the sequence $1,11,111,\ldots$ contains
an infinite subsequence such that any two members of that subsequence
are mutually prime.
\end{problem}
{\bf Blankinship Algorithm:} The Blankinship algorithm can be used to find solutions of
Bezout's identity: integers $x,y$ such that $ax+by=d$, where $d=\gcd(a,b)$.
It is explained here: \url{http://mathworld.wolfram.com/BlankinshipAlgorithm.html}.
It applies Gaussian row operations to a $2 \times 3$ matrix: you only need to know
how to subtract a multiple of one row from another.
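For instance, applied to Example 2.1 (finding $x,y$ with $18x+42y=6$), the row operations run as follows:
$$\left(\begin{array}{rrr} 18 & 1 & 0\\ 42 & 0 & 1\end{array}\right)
\;\to\;
\left(\begin{array}{rrr} 18 & 1 & 0\\ 6 & -2 & 1\end{array}\right)
\;\to\;
\left(\begin{array}{rrr} 0 & 7 & -3\\ 6 & -2 & 1\end{array}\right),$$
where we first subtracted twice the first row from the second, and then three times the
new second row from the first. The row containing the GCD now reads
$6 = (-2)\cdot 18 + 1\cdot 42$, i.e.\ $x=-2$, $y=1$.
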
{\bf Inverse Congruence Class:} For a congruence class $a$
that is mutually prime with the modulus $m$, denote by $a^{-1}$
the congruence class such that $a^{-1}\cdot a \equiv 1$ modulo $m$.
(In other words: given a number $a < m$, find some number $b$
such that $a \cdot b$ gives remainder $1$ when divided by $m$.)
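For example, $3^{-1}\equiv 5$ modulo $7$, because $3\cdot 5 = 15$ gives remainder $1$ when divided by $7$.
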
\begin{problem}
Find inverses $1^{-1}$, $3^{-1}$, $5^{-1}$, $7^{-1}$, $9^{-1}$, $11^{-1}$, $13^{-1}$, $15^{-1}$
(all modulo $16$).
\end{problem}
{\bf Chinese Remainder Theorem:}
For pairwise mutually prime moduli $m_1,m_2,\ldots,m_k$ and any prescribed remainders $a_1,a_2,\ldots,a_k$,
one can find $x$ such that $x \equiv a_i\;(\mathit{mod}\,m_i)$ for every $i$; moreover, $x$ is unique modulo the product $m_1 m_2 \cdots m_k$.
\begin{problem}
Find a natural number $x$ that is a solution to this system of congruences:
$$\left\{ \begin{array}{l}
x \equiv 1\;(\mathit{mod}\,3)\\
x \equiv 2\;(\mathit{mod}\,5)\\
x \equiv 3\;(\mathit{mod}\,7)
\end{array} \right.$$
\end{problem}
\begin{problem}
Assume that you want to find number $x$ that satisfies both congruences:
$$\left\{ \begin{array}{l}
x \equiv 4\;(\mathit{mod}\,5)\\
x \equiv 6\;(\mathit{mod}\,11)
\end{array} \right.$$
You can solve this ``graphically'' -- build a $5 \times 11$ table representing
all possible pairs of remainders when you divide numbers by $5$ and by $11$.
Fill in this table by choosing $x=1,2,3,\ldots$ until you find the necessary
combination of remainders $(4;6)$.
\end{problem}
\begin{problem}
Find the smallest positive integer $n$, such that
numbers $\sqrt[5]{5n}$, $\sqrt[6]{6n}$, $\sqrt[7]{7n}$ are all positive integers.\\
(From {\em Vilniaus universiteto Matematikos ir informatikos fakulteto olimpiadas} -
a Lithuanian olympiad for high school students by Vilnius university; 2016,
Grade 10, P3.)
\end{problem}
\begin{problem}
Prove that for each positive integer $n$, there are pairwise relatively prime integers
$k_0, k_1, \ldots, k_n$ , all strictly greater than $1$,
such that $k_0 k_1 \ldots k_n - 1$ is the product of
two consecutive integers.
\end{problem}
\begin{problem}
Prove that for every positive integer $n$, there exist integers $a$ and $b$ such that $4a^2 + 9b^2 - 1$ is divisible by $n$.\\
({\em Math Prize for Girls Olympiad, 2010, P2}).
\end{problem}
\begin{problem}
Are there infinitely many Fibonacci numbers that give the following remainders when divided by $1001$:\\
{\bf (a)} remainder $0$; {\bf (b)} remainder $900$; {\bf (c)} remainder $1000$.
\end{problem}
\begin{problem}
Prove or disprove the following hypotheses.\\
{\bf (a)} For all $k \geq 2,$ each sequence of $k$ consecutive positive integers contains a number that is not divisible by any prime number less than $k$.\\
{\bf (b)} For all $k\geq 2,$ each sequence of $k$ consecutive positive integers contains a number that is relatively prime to all other members of the sequence.\\
({\em Baltic Way, 2016, P2}).
\end{problem}
\newpage
\section{Hints for Some Examples}
{\bf Hint 2.1:} You can find this by trial and error for small numbers.
Or you can run the Euclidean algorithm to find the GCD (greatest common divisor)
of $18$ and $42$. It will tell you how many times you should add
or subtract $18$ and $42$ to get the number $6$. (Blankinship's method is essentially
the same thing.)
{\bf Hint 2.2:} If you build the following sequence of mutually prime numbers:
$2,3,7,43,\ldots$ (every next number equals the product of all the previous ones plus $1$),
then the corresponding numbers $11$, $111$, $1111111$, and so on will give remainder $1$
every time you divide one of them by another.
{\bf Hint 2.3:} In order to find, say, $9^{-1}$ (modulo $16$), you can
try out all the odd remainders ($1,3,\ldots,13,15$). Or you can solve the
Bezout's identity $9x - 16y = 1$. Blankinship's algorithm again.
{\bf Hint 2.4:} One can build such a number step by step -
first write all the numbers congruent to $1$ (modulo $3$):
$1,4,7,\ldots$ until you find one that gives remainder $2$ when
divided by $5$, and so on. (Since $3,5,7$ are mutually prime,
Chinese remainder theorem promises that you will succeed.)
{\bf Hint 2.5:} Solution is shown in the table: \url{https://bit.ly/2MCzMmf}.
Try to locate numbers
$0,1,2,3,\ldots$ in this table and see the sequence how they fill up the table.
Similar ideas are used by problems that ask you to ``Measure exactly $4$ liters
of water, given two jugs with volumes $5$~L and $11$~L respectively''.
{\bf Hint 2.6:} Search for $n$ in the form $n = 2^a3^b5^c7^d$.
Then write the necessary conditions (as modular congruences) for all
the unknown powers $a,b,c,d$.
{\bf Hint 2.7:} Look at the polynomial $F(t) = t^2 + t + 1$ (it is
a product of two consecutive numbers $t$ and $t+1$ plus $1$).\\
Note that all the remainders it gives, when divided by $2$, by $3$, etc.
are periodic. And if $F(t)$ sometimes is divisible by a prime $p_1$ and
sometimes by a prime $p_2$, then eventually $F(t)$ (for some special
arguments $t$) will be divisible by them both: $p_1 \cdot p_2$.\\
Now, all you need is to show that there are infinitely many primes that
divide some value of $F(t)$. At this point, remember
the proof that there are infinitely many primes. Assume that this is not true --
i.e.\ the values $F(t)$ are divisible by only finitely many primes $p_1,\ldots,p_k$. Then plug into
$F(t)=t^2 + t + 1$ the number
$t = p_1\cdot p_2\cdots p_k+1$ (their product plus $1$).
{\bf Hint 2.8:} Use Chinese remainder theorem to avoid looking at {\em all} possible
$n$. Just look at the prime powers $p^k$ (and all the remaining $n$ can be
obtained by combining the solutions for $p^k$ in a certain way).\\
Next, consider two separate cases: $n = 2^k$ (you can now pick $b$
so that it is inverse of $3$ modulo $2^k$ - so that the term $9b^2 - 1$
is congruent to $0$). On the other hand, if $n=p^k$ for some other $p \neq 2$,
then pick $a$ equal to inverse of $2$ modulo $p^k$ for similar reasons.
{\bf Hint 2.9:} The remainders of Fibonacci numbers when divided by any fixed $d$
are periodic (because the pairs of neighboring remainders eventually
start to repeat). What is more interesting: All the remainders of Fibonacci sequence
are "clean periodic" (not just "eventually periodic") - every remainder belongs to
the period. If $F_0 = 0$ (divisible by any $d$), then it means that infinitely
often $F_n$ will be divisible by that $d$.\\
{\bf (a)} is simple - since $F_0$ is divisible by $1001$, then the remainder $0$
is clearly in the period (modulo $1001$).\\
For {\bf (b)} and {\bf (c)} you need to factorize $1001$ as a product of three prime
factors and search for the combinations of remainders (as per Chinese remainder
theorem).
{\bf Hint 2.10:} Statement {\bf (a)} is clearly false. Just start from $2$ and you
will find a sequence where every member is divisible by some small prime.\\
For {\bf (b)} you need to express some segment of $k$ subsequent numbers as an overlap
of several arithmetic progressions with prime differences $d < k$
(so that every progression contains at least two members among these $k$ subsequent
numbers and all the $k$ numbers are covered at least by one sequence).\\
This is doable when $k=17$. See \url{https://bit.ly/2Q2XASA}. Finally - use
Chinese remainder theorem to find an actual value $N$ such that the numbers
from $N$ to $N+16$ (inclusive) give the remainders you need.
\end{document}
\section{Input options}
\begin{framenologo}
\frametitle{Input options}
\tableofcontents[currentsection]
\end{framenologo}
\subsection{Chemical potentials}
\begin{frame}%[fragile]
\frametitle{Input options}
\framesubtitle{Chemical potentials}
\begin{itemize}
\item Define the list of all different chemical potentials
\item<2-> Define each different chemical potential
\item<3-> Note their physical meaning
\end{itemize}
\begin{tikzpicture}[fixed node]
\def\chem{chem}
\def\bias{V/2}
\only<1-5>{
\begin{scope}[yshift=-2cm]
\matrix {
\node[fdf] {\%block TS.ChemPots}; \\
\node[fdf,ind,lmark] (chem-1) {\chem-1}; \\
\node[fdf,ind,lmark] (chem-2) {\chem-2}; \\
\node[fdf,ind] {...}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
}
\uncover<2->{
\begin{scope}[xshift=4.1cm]
\begin{scope}
\matrix {
\node[fdf,rmark] (chem-b-1) {\%block TS.ChemPot.\chem-1}; \\
\node[fdf,ind] (mu-1) {mu \bias}; \\
\node[fdf,ind] (kT-1) {temp $k_BT_1$}; \\
\node[fdf,ind] (E-1) {contour.eq.pole $\langle \mathrm{energy}\rangle_1$}; \\
%\node[fdf,ind] (NP-1) {contour.eq.pole.n $\langle \mathrm{int}\rangle_1$}; \\
\node[fdf,ind] {contour.eq}; \\
\node[fdf,iind] {begin}; \\
\only<5->{
\node[fdf,lmark,iiind] (C-chem-1) {C-chem-1}; \\
\node[fdf,lmark,iiind] (L-chem-1) {L-chem-1}; \\
}
\only<1-4>{
\node[fdf,iiind] (C-chem-1) {C-chem-1}; \\
\node[fdf,iiind] (L-chem-1) {L-chem-1}; \\
}
\node[fdf,iind] {end}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\only<-5>{
\begin{scope}[yshift=-3.8cm]
\matrix {
\node[fdf,rmark] (chem-b-2) {\%block TS.ChemPot.\chem-2}; \\
\node[fdf,ind] (mu-2) {mu -\bias}; \\
\node[fdf,ind] (kT-2) {temp $k_BT_2$}; \\
\node[fdf,ind] (E-2) {contour.eq.pole $\langle \mathrm{energy}\rangle_2$}; \\
\node[fdf,ind] {\dots};\\
%\node[fdf,ind] (NP-2) {contour.eq.pole.n $\langle \mathrm{int}\rangle_2$}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}}
\end{scope}
\only<-5>{
\draw[->] (chem-1) -- ++(1.8,0) to[out=0,in=180] (chem-b-1);
\draw[->] (chem-2) -- ++(1.8,0) to[out=0,in=180] (chem-b-2);
}
}
\begin{scope}[xshift=8.2cm]
\uncover<3->{%
\matrix[yshift=2.5cm] {
\node {$n_{F,1}(E-$};
\&
\node[circle,draw=good] (chem-mu-1) {$\mu$};
\&
\node {$,$};
\&
\node[circle,draw=bad] (chem-kT-1) {$k_BT$};
\&
\node {$)$};
\\
};
\draw[->,good] (mu-1) to[out=0,in=-150] (chem-mu-1);
\draw[->,bad] (kT-1) to[out=0,in=-140] (chem-kT-1);
}
\uncover<3-5>{%
\matrix[yshift=-3cm] {
\node {$n_{F,2}(E-$};
\&
\node[circle,draw=good] (chem-mu-2) {$\mu$};
\&
\node {$,$};
\&
\node[circle,draw=bad] (chem-kT-2) {$k_BT$};
\&
\node {$)$};
\\
};
\draw[<-,good] (chem-mu-2) to[out=-160,in=0] (mu-2);
\draw[<-,bad] (chem-kT-2) to[out=-150,in=0] (kT-2);
}
\end{scope}
\uncover<4->{%
\begin{scope}[xshift=8.5cm,yshift=.75cm]
\draw (-1,0) -- ++(2,0);
\draw (0,0) node[below] {$\mu_1$} -- ++(0,1);
\foreach \pole in {0.2,0.4} {
\fill (0,\pole) circle (2pt);
}
\coordinate (E-pole-1) at (0,0.5);
\foreach \pole in {0.6,0.8,1} {
\draw (0,\pole) circle (2pt);
}
\end{scope}
\draw[->,thick,shorten >=4pt] (E-1) to[out=0,in=180] (E-pole-1);
\only<-5>{%
\begin{scope}[xshift=8.4cm,yshift=-5cm]
\draw (-1,0) -- ++(2,0);
\draw (0,0) node[below] {$\mu_2$} -- ++(0,1);
\foreach \pole in {0.2,0.4,0.6} {
\fill (0,\pole) circle (2pt);
}
\coordinate (E-pole-2) at (0,0.7);
\foreach \pole in {0.8,1} {
\draw (0,\pole) circle (2pt);
}
\end{scope}
\draw[->,thick,shorten >=4pt] (E-2) to[out=0,in=180] (E-pole-2);
}
}
\uncover<5->{
\def\eta{0.1}%
\def\radius{3.25}%
\def\lineS{-1}%
\def\poles{4}%
\def\poleSep{.25}%
% Calculate alpha angle
\pgfmathparse{\poleSep*(\poles+.5)/\radius}%
\edef\betaA{\pgfmathresult}%
\pgfmathparse{atan(\betaA)}%
\edef\alphaA{\pgfmathresult}%
\pgfmathparse{asin(\betaA)}%
\edef\betaA{\pgfmathresult}%
\begin{scope}[xshift=10cm, yshift=-2cm,scale=.5]
% The axes
\begin{scope}[draw=gray!80!black,thick,->]
\draw (-2*\radius+\lineS-.5,0) -- (\radius+1.5,0) node[text=black,below] {$E$};
\draw (0,0) -- (0,\radius+.5) node[text=black,left] {$\Im$};
\end{scope}
\node[below] (mu-1) at (0,0) {$\mu_1$};
% The specific coordinates on the path
\coordinate (EB) at (-2*\radius+\lineS,\eta);
\coordinate (C-mid) at ({-\radius+\lineS-sin(\alphaA)*\radius},{cos(\alphaA)*\radius});
\coordinate (C-end) at (\lineS,{\poleSep*(\poles+.5)});
\coordinate (L-end) at (\radius,{\poleSep*(\poles+.5)});
\coordinate (L-end-end) at (\radius+1,{\poleSep*(\poles+.5)});
\coordinate (real-L-end) at (\radius,\eta);
\coordinate (real-L-end-end) at (\radius+1,\eta);
\begin{scope}[thick]
% The path (we draw it backwards)
\draw[->-=.3,very thick,ok] (L-end) -- node[above right] (L)
{$\mathcal L$} (C-end);
\draw[->-=.333,->-=.666,very thick,bad] (C-end) to[out=90+\betaA,in=\alphaA] (C-mid)
node[above] (C)
{$\mathcal C$}
to[out=180+\alphaA,in=90] (EB);
\draw[->-=.25,->-=.75] (EB) -- (real-L-end) node[above left] {$\mathcal R$};
% draw the continued lines
\draw[densely dotted] (real-L-end) -- (real-L-end-end);
\draw[densely dotted] (L-end) -- (L-end-end);
\end{scope}
% Draw the poles
\foreach \pole in {1,...,14} {
\ifnum\pole>\poles
\draw (0,\pole*\poleSep) circle (2pt);
\else
\fill (0,\pole*\poleSep) circle (2pt);
\fi
}
\node[left,anchor=east] at (0,{\poleSep*(\poles/2+.5)}) {$z_\nu$};
% correct size
\path[use as bounding box] (-8,-.5) rectangle ++(13,4.5);
\draw[densely dotted] (real-L-end-end) to[out=0,in=0] (L-end-end);
\draw[densely dotted,thick] (EB) -- ++(-.5,0);
\end{scope}
\draw[->,thick,bad] (C) -- (C-chem-1);
\draw[->,thick,good] (L) -- (L-chem-1);
}
\uncover<6->{
\begin{scope}[yshift=-3.5cm,xshift=.25cm]
\matrix {
\node[fdf,rmark] (dC-chem-1) {\%block TS.Contour.C-chem-1}; \\
\node[fdf,ind] {part circle}; \\
\node[fdf,iind] {from -40. eV + V/2 to -10 kT + V/2}; \\
\node[fdf,iind] {points 25}; \\
\node[fdf,iind] {method g-legendre}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\begin{scope}[yshift=-3.5cm,xshift=6cm]
\matrix {
\node[fdf,rmark] (dL-chem-1){\%block TS.Contour.L-chem-1}; \\
\node[fdf,ind] {part line}; \\
\node[fdf,iind] {from prev to inf}; \\
\node[fdf,iind] {points 12}; \\
\node[fdf,iind] {method g-fermi}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\draw[->] (C-chem-1) -- (dC-chem-1);
\draw[->] (L-chem-1) -- (dL-chem-1);
}
\end{tikzpicture}
\end{frame}
\subsection{Non-equilibrium contours}
\begin{frame}%[fragile]
\frametitle{Input options}
\framesubtitle{Non-equilibrium}
\begin{block}{Non-equilibrium contour}
\begin{itemize}
\item The contour spans the entire bias-window
\item<3-> One may add additional lines to increase precision at certain energies
\end{itemize}
\end{block}
\small
\begin{tikzpicture}[fixed node]
\begin{scope}
\matrix {
\node[fdf] {\%block TS.Contours.nEq}; \\
\node[fdf,ind,lmark] (lneq) {neq}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\uncover<2->{
\begin{scope}[xshift=6cm]
\matrix {
\node[fdf,rmark] (rneq) {\%block TS.Contour.nEq.neq}; \\
\node[fdf,ind] {part line}; \\
\node[fdf,iind] {from -5 kT -|V|/2 to |V|/2 + 5 kT}; \\
\node[fdf,iind] {delta 0.01 eV}; \\
\node[fdf,iind] {method mid-rule}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\draw[->] (lneq) -- ++(2.5,0) to[out=0,in=180] (rneq);
}
\end{tikzpicture}
\vskip 2em
\uncover<4->{
\begin{tikzpicture}[fixed node]
\begin{scope}
\matrix {
\node[fdf] {\%block TS.Contours.nEq}; \\
\node[fdf,ind,lmark] (lneq-1) {neq-1}; \\
\node[fdf,ind,lmark] (lneq-2) {neq-2}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\begin{scope}[xshift=6cm]
\matrix {
\node[fdf,rmark] (rneq-1) {\%block TS.Contour.nEq.neq-1}; \\
\node[fdf,ind] {part line}; \\
\node[fdf,iind] {from -5 kT -|V|/2 to 0 eV}; \\
\node[fdf,iind] {delta 0.02 eV}; \\
\node[fdf,iind] {method mid-rule}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\draw[->] (lneq-1) -- ++(2.5,0) to[out=0,in=180] (rneq-1);
\begin{scope}[xshift=6cm, yshift=-2.4cm]
\matrix {
\node[fdf,rmark] (rneq-2) {\%block TS.Contour.nEq.neq-2}; \\
\node[fdf,ind] {part line}; \\
\node[fdf,iind] {from prev to |V|/2 + 5 kT}; \\
\node[fdf,iind] {delta 0.005 eV}; \\
\node[fdf,iind] {method mid-rule}; \\
\node[fdf] {\%endblock}; \\
};
\end{scope}
\draw[->] (lneq-2) -- ++(2.5,0) to[out=0,in=180] (rneq-2);
\end{tikzpicture}
}
\end{frame}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "talk"
%%% End:
\section{Timeline}
\begin{itemize}
\item ComCam will go on the TMA next month
\begin{itemize}
\item We will have images, potentially on-sky images, before year end.
\item We will have loads of calibration data
\end{itemize}
\item We should be able to process ComCam images at SLAC in June.
\begin{itemize}
\item Would be nice to process all the BOT data
\end{itemize}
\item Officially engineering first light is Jan 2023 (likely a little later)
\begin{itemize}
\item Then we need OGA racks etc.
\item We will still be getting lots of engineering data from the real camera!
\end{itemize}
\end{itemize}
The USDF timeline is in \citeds{RTN-021}; the high-level milestone chart is reproduced here in \figref{fig:usdfplan}.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\textwidth]{USDFplan}
\caption{USDF buildup plan from \citeds{RTN-021}\label{fig:usdfplan}}
\end{centering}
\end{figure}
The big question right now is {\em When can we start multi-site testing with PanDA/Rucio?
IDF, SLAC, FRdF all run PanDA \ldots}.
We need to build up tests for distributed processing with Rucio and PanDA, which has already started!
We need to make Jira Epics and capture decisions in technotes.
We need to agree on interfaces between all the moving parts for the DFs to work with.
As always KISS!\footnote{Yes it is in the glossary.}
\documentclass[a4paper,12pt,oneside]{article}
%DIF LATEXDIFF DIFFERENCE FILE
%DIF DEL doc/sed/v4.0/main.tex Thu Oct 4 04:43:18 2018
%DIF ADD doc/sed/v5.0/main.tex Mon Jan 14 11:07:12 2019
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\renewcommand\familydefault{\sfdefault}
% \usepackage[backref=true,backend=biber,hyperref=true]{biblatex}
% \bibliography{help}
\usepackage[hidelinks,bookmarksnumbered,colorlinks]{hyperref}
\hypersetup{
    colorlinks=true,
    citecolor=black}
%\usepackage[none]{hyphenat}
\usepackage{float}
%\usepackage{subfig} Removed this bc it overrides the subcaption one, if it caused problems for anyone contact me // Lucas
\usepackage{graphicx}
%\graphicspath{} Always search for images in this folder. Dont need to add foldername in includegraphics.
\usepackage{csquotes}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{amsmath}
\usepackage{ae}
\usepackage{boldline}
\usepackage{units}
%\usepackage{lscape}
\usepackage{pdflscape}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{icomma}
\usepackage{color}
\usepackage{eurosym}
\usepackage{bbm}
\usepackage{multicol}
\usepackage{multirow}
%\usepackage{listings}
\usepackage{color}
\usepackage{verbatim}
\usepackage{wrapfig}
\usepackage[table]{xcolor}
\usepackage[utf8]{inputenc}
\usepackage{mathtools}
\usepackage{amsmath}
\usepackage[square,sort,comma,numbers]{natbib} % (tar bort konstigt fel från \usepackage{natbib})
\usepackage{textcomp}
\usepackage{siunitx}
\usepackage{pdfpages,picture} %
\usepackage{listings}
\usepackage[final]{pdfpages} %Use \includepdf[pages={#,#,#,#,#}]{myfile.pdf}
%To be clear, you need to specify the pages you wish to include, i.e. \includepdf[pages={1,3,5}]{myfile.pdf} would include pages 1, 3, and 5 of the file. To include the entire file, you specify pages={-}, where {-} is a range without the endpoints specified which default to the first and last pages, respectively. The first two things I had to also do were to scale and to reenable my outer page design (to show page numbers again) which can both be set using the configuration, e.g.: \includepdf[pages=-,scale=.8,pagecommand={}]{file}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{amsfonts}
\usepackage{afterpage}
\usepackage{longtable}
\usepackage{tabularx} %Extended tabular
\usepackage{ltxtable}
\usepackage{booktabs}
\usepackage{fancyhdr}
\usepackage{gensymb}
\usepackage{parskip} %% Vince added this to standardize paragraph spacing
\usepackage{textcomp}
\usepackage{lastpage,lipsum} %% lipsum for dummy text remove in your file
\usepackage[margin=3cm]{geometry}
\usepackage{xfrac}
\usepackage{array}
\usepackage{filecontents}
\usepackage{etoolbox}
\usepackage{nameref}
\usepackage{ragged2e}
\usepackage{xcolor}
\usepackage{pdfpages}
\usepackage{tablefootnote}
\usepackage[title]{appendix}
\usepackage[bottom]{footmisc}
\usepackage{rotating}
\usepackage{spreadtab} %% Vince added this to test table sum function [2017-01-25, 15:27]
% For prettifying Matlab code.
\usepackage{courier}
\lstset{basicstyle=\footnotesize\ttfamily,breaklines=true}
\lstset{framextopmargin=50pt,frame=bottomline}
% Again, for prettifying Matlab code.
\usepackage{color} %red, green, blue, yellow, cyan, magenta, black, white
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
% Still, for prettifying Matlab code.
\lstset{language=Matlab,%
breaklines=true,%
morekeywords={matlab2tikz},
keywordstyle=\color{blue},%
morekeywords=[2]{1}, keywordstyle=[2]{\color{black}},
identifierstyle=\color{black},%
stringstyle=\color{mylilas},
commentstyle=\color{mygreen},%
showstringspaces=false,%without this there will be a symbol in the places where there is a space
numbers=left,%
numberstyle={\tiny \color{black}},% size of the numbers
numbersep=9pt, % this defines how far the numbers are from the text
emph=[1]{for,end,break},emphstyle=[1]\color{red}, %some words to emphasise
} %Just experimenting to see if this will fix the code compiling issue in Appendix J. -Ivan
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
% % For references
% %\bibliographystyle{plain}
% \usepackage[numbib]{tocbibind}
% %\usepackage[nottoc]{tocbibind}
% \usepackage[paper=A4,pagesize]{typearea} %% For putting in A3 / Hannah
%%%%%%%%%%%%%%% to be able to higlight text, for PDR update %%%%%%%%%%%%%%
\usepackage{soul} %/Hannah
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \usepackage[acronym]{glossaries} %natalie added 13/01/18
% \makeglossaries
% \usepackage[nopostdot]{glossaries}
\usepackage[xindy,toc]{glossaries} %natalie added 14/01/18
\makeglossaries
\usepackage[xindy]{imakeidx}
\makeindex
\usepackage{makecell} % natalie added 15/01/18 this lets you put breaks inside table boxes :3
\usepackage{siunitx}
\usepackage{dcolumn}
\newcolumntype{d}[1]{D{.}{.}{#1}} %hopefully one of these aligns tables decimal places, natalie
\DeclareCaptionType[fileext=ext]{Test}
%\newfloat{Test}{luc}{Test}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% %%%
%% DO NOT CHANGE IN TEX BELOW (that is literally all i have done/robo) %%%
%% %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{SED}
\author{tba}
\renewcommand*{\appendixsection}{Appendix~\arabic{section}}
\newcommand{\highlight}[1]{%
\colorbox{yellow}{$\displaystyle#1$}}
\makeatletter
\newcommand{\skipitems}[1]{%
\addtocounter{\@enumctr}{#1}%
}
\makeatother
\newcommand\blankpage{%
\null
\thispagestyle{empty}%
%\addtocounter{page}{-1}%
\newpage
}
%
\fancypagestyle{SED}
{
\fancyhf{}
\renewcommand{\headrulewidth}{0.5pt}
\chead{-\hspace{0.05cm}\thepage \hspace{0.05cm}-}
%\renewcommand{\footrulewidth}{0pt}
%DIF 162c162
%DIF < \fancyfoot[RE,LO]{\textit{BX26\_TUBULAR\_SEDv4-0\_04Oct18}}
%DIF -------
\fancyfoot[RE,LO]{\textit{BX26\_TUBULAR\_SEDv5-0\_11Jan19}} %DIF >
%DIF -------
%\fancyfoot[RO]{Page \thepage} % <--- uncomment this if you want DOUBLE page number on every other page
}
\fancypagestyle{cover}
{
\fancyhf{}
\renewcommand{\footrulewidth}{0.5pt}
%DIF 169c169
%DIF < \lfoot{\centering\textit{BX26\_TUBULAR\_SEDv4-0\_04Oct18}}
%DIF -------
\lfoot{\centering\textit{BX26\_TUBULAR\_SEDv5-0\_11Jan19}} %DIF >
%DIF -------
%\cfoot{}
% \rfoot{}
}
\fancypagestyle{firstp}
{
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\chead{-\hspace{0.05cm}\thepage \hspace{0.05cm}-}
\renewcommand{\footrulewidth}{0.5pt}
%DIF 180c180
%DIF < \fancyfoot[RE,LO]{\textit{BX26\_TUBULAR\_SEDv4-0\_04Oct18}}
%DIF -------
\fancyfoot[RE,LO]{\textit{BX26\_TUBULAR\_SEDv5-0\_11Jan19}} %DIF >
%DIF -------
%\fancyfoot[RO]{Page \thepage}
}
% \pagestyle{cover}
% \pagenumbering{roman}
%DIF PREAMBLE EXTENSION ADDED BY LATEXDIFF
%DIF UNDERLINE PREAMBLE %DIF PREAMBLE
\RequirePackage[normalem]{ulem} %DIF PREAMBLE
\RequirePackage{color}\definecolor{RED}{rgb}{1,0,0}\definecolor{BLUE}{rgb}{0,0,1} %DIF PREAMBLE
\providecommand{\DIFaddtex}[1]{{\protect\color{blue}\uwave{#1}}} %DIF PREAMBLE
\providecommand{\DIFdeltex}[1]{{\protect\color{red}\sout{#1}}} %DIF PREAMBLE
%DIF SAFE PREAMBLE %DIF PREAMBLE
\providecommand{\DIFaddbegin}{} %DIF PREAMBLE
\providecommand{\DIFaddend}{} %DIF PREAMBLE
\providecommand{\DIFdelbegin}{} %DIF PREAMBLE
\providecommand{\DIFdelend}{} %DIF PREAMBLE
%DIF FLOATSAFE PREAMBLE %DIF PREAMBLE
\providecommand{\DIFaddFL}[1]{\DIFadd{#1}} %DIF PREAMBLE
\providecommand{\DIFdelFL}[1]{\DIFdel{#1}} %DIF PREAMBLE
\providecommand{\DIFaddbeginFL}{} %DIF PREAMBLE
\providecommand{\DIFaddendFL}{} %DIF PREAMBLE
\providecommand{\DIFdelbeginFL}{} %DIF PREAMBLE
\providecommand{\DIFdelendFL}{} %DIF PREAMBLE
%DIF HYPERREF PREAMBLE %DIF PREAMBLE
\providecommand{\DIFadd}[1]{\texorpdfstring{\DIFaddtex{#1}}{#1}} %DIF PREAMBLE
\providecommand{\DIFdel}[1]{\texorpdfstring{\DIFdeltex{#1}}{}} %DIF PREAMBLE
\newcommand{\DIFscaledelfig}{0.5}
%DIF HIGHLIGHTGRAPHICS PREAMBLE %DIF PREAMBLE
\RequirePackage{settobox} %DIF PREAMBLE
\RequirePackage{letltxmacro} %DIF PREAMBLE
\newsavebox{\DIFdelgraphicsbox} %DIF PREAMBLE
\newlength{\DIFdelgraphicswidth} %DIF PREAMBLE
\newlength{\DIFdelgraphicsheight} %DIF PREAMBLE
% store original definition of \includegraphics %DIF PREAMBLE
\LetLtxMacro{\DIFOincludegraphics}{\includegraphics} %DIF PREAMBLE
\newcommand{\DIFaddincludegraphics}[2][]{{\color{blue}\fbox{\DIFOincludegraphics[#1]{#2}}}} %DIF PREAMBLE
\newcommand{\DIFdelincludegraphics}[2][]{% %DIF PREAMBLE
\sbox{\DIFdelgraphicsbox}{\DIFOincludegraphics[#1]{#2}}% %DIF PREAMBLE
\settoboxwidth{\DIFdelgraphicswidth}{\DIFdelgraphicsbox} %DIF PREAMBLE
\settoboxtotalheight{\DIFdelgraphicsheight}{\DIFdelgraphicsbox} %DIF PREAMBLE
\scalebox{\DIFscaledelfig}{% %DIF PREAMBLE
\parbox[b]{\DIFdelgraphicswidth}{\usebox{\DIFdelgraphicsbox}\\[-\baselineskip] \rule{\DIFdelgraphicswidth}{0em}}\llap{\resizebox{\DIFdelgraphicswidth}{\DIFdelgraphicsheight}{% %DIF PREAMBLE
\setlength{\unitlength}{\DIFdelgraphicswidth}% %DIF PREAMBLE
\begin{picture}(1,1)% %DIF PREAMBLE
\thicklines\linethickness{2pt} %DIF PREAMBLE
{\color[rgb]{1,0,0}\put(0,0){\framebox(1,1){}}}% %DIF PREAMBLE
{\color[rgb]{1,0,0}\put(0,0){\line( 1,1){1}}}% %DIF PREAMBLE
{\color[rgb]{1,0,0}\put(0,1){\line(1,-1){1}}}% %DIF PREAMBLE
\end{picture}% %DIF PREAMBLE
}\hspace*{3pt}}} %DIF PREAMBLE
} %DIF PREAMBLE
\LetLtxMacro{\DIFOaddbegin}{\DIFaddbegin} %DIF PREAMBLE
\LetLtxMacro{\DIFOaddend}{\DIFaddend} %DIF PREAMBLE
\LetLtxMacro{\DIFOdelbegin}{\DIFdelbegin} %DIF PREAMBLE
\LetLtxMacro{\DIFOdelend}{\DIFdelend} %DIF PREAMBLE
\DeclareRobustCommand{\DIFaddbegin}{\DIFOaddbegin \let\includegraphics\DIFaddincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFaddend}{\DIFOaddend \let\includegraphics\DIFOincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFdelbegin}{\DIFOdelbegin \let\includegraphics\DIFdelincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFdelend}{\DIFOaddend \let\includegraphics\DIFOincludegraphics} %DIF PREAMBLE
\LetLtxMacro{\DIFOaddbeginFL}{\DIFaddbeginFL} %DIF PREAMBLE
\LetLtxMacro{\DIFOaddendFL}{\DIFaddendFL} %DIF PREAMBLE
\LetLtxMacro{\DIFOdelbeginFL}{\DIFdelbeginFL} %DIF PREAMBLE
\LetLtxMacro{\DIFOdelendFL}{\DIFdelendFL} %DIF PREAMBLE
\DeclareRobustCommand{\DIFaddbeginFL}{\DIFOaddbeginFL \let\includegraphics\DIFaddincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFaddendFL}{\DIFOaddendFL \let\includegraphics\DIFOincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFdelbeginFL}{\DIFOdelbeginFL \let\includegraphics\DIFdelincludegraphics} %DIF PREAMBLE
\DeclareRobustCommand{\DIFdelendFL}{\DIFOaddendFL \let\includegraphics\DIFOincludegraphics} %DIF PREAMBLE
%DIF LISTINGS PREAMBLE %DIF PREAMBLE
\lstdefinelanguage{codediff}{ %DIF PREAMBLE
moredelim=**[is][\color{red}]{*!----}{----!*}, %DIF PREAMBLE
moredelim=**[is][\color{blue}]{*!++++}{++++!*} %DIF PREAMBLE
} %DIF PREAMBLE
\lstdefinestyle{codediff}{ %DIF PREAMBLE
belowcaptionskip=.25\baselineskip, %DIF PREAMBLE
language=codediff, %DIF PREAMBLE
basicstyle=\ttfamily, %DIF PREAMBLE
columns=fullflexible, %DIF PREAMBLE
keepspaces=true, %DIF PREAMBLE
} %DIF PREAMBLE
%DIF END PREAMBLE EXTENSION ADDED BY LATEXDIFF
\begin{document}
%\pagenumbering{arabic}
\hypersetup{allcolors=black}
\newgeometry{left=3cm,right=2.5cm,top=1cm,bottom=3cm,headheight=20pt,headsep=10pt}
\thispagestyle{cover}
\begin{flushright}
\includegraphics[width=.25\textwidth]{0-cover/img/logo-rexus-bexus.png}
\end{flushright}
\begin{flushright}
\begin{tabular}{p{0.70\textwidth} r}
\vspace{-1cm}
{{\hspace{-8pt}\huge{\textbf{SED}}} \vspace{15pt} \newline \vspace{15pt}
{\hspace{-11pt}\large{\textbf{Student Experiment Documentation}}} \newline \vspace{30pt}
{\hspace{-11pt}\DIFdelbegin %DIFDELCMD < \footnotesize{Document ID: BX26\_TUBULAR\_SEDv4-0\_04Oct18}%%%
\DIFdelend \DIFaddbegin \footnotesize{Document ID: BX26\_TUBULAR\_SEDv5-0\_11Jan19}\DIFaddend }}
& \hspace{3pt}\multirow{3}{*}{\includegraphics[width=.25\textwidth]{0-cover/img/logo-rexus-bexus-tubular.png}}
\end{tabular}
\end{flushright}
\begin{flushleft}
\vspace{5pt}
\noindent \textbf{\hspace{-1pt}Mission: BEXUS 26} \\
\vspace{20pt}
{\hspace{-2pt}\noindent \Large{\textbf{Team Name:} } TUBULAR} \\
\vspace{20pt}
\hspace{-1pt}Experiment Title: Alternative to AirCore for Atmospheric Greenhouse Gas Sampling\\
\vspace{20pt}
\begin{tabular}{p{.33\textwidth} p{.34\textwidth} p{.33\textwidth}}
\textbf{Team} & \textbf{Name} \\
Student Team Leader: & Natalie Lawton \\
\end{tabular}
\vspace{5pt}
\begin{tabular}{p{.33\textwidth} p{.34\textwidth} p{.33\textwidth}}
Team Members: & N\'{u}ria Ag\"{u}es Paszkowsky \\
& Kyriaki Blazaki \\
& Emily Chen \\
& Jordi Coll Ortega \\
& Gustav Dyrssen \\
& Erik Fagerström \\
& Georges L. J. Labr\`{e}che\\
& Pau Molas Roca \\
& Emil Nordqvist \\
& Muhammad Ansyar Rafi Putra \\
& Hamad Siddiqi \\
& Ivan Zankov \\
\end{tabular}
\begin{tabular}{p{.33\textwidth} p{.34\textwidth} p{.33\textwidth}}
University: & Lule\aa \ University of Technology
\end{tabular}
\vspace{0.25cm}
\begin{tabular}{p{.15\textwidth} p{.3\textwidth} p{.25\textwidth} p{.25\textwidth}}
\footnotesize{Version:} & \footnotesize{Issue Date:} & \footnotesize{Document Type:} & \footnotesize{Valid from} \\
\textbf{\DIFdelbegin \DIFdel{4.0}\DIFdelend \DIFaddbegin \DIFadd{5.0}\DIFaddend } & \textbf{\DIFdelbegin \DIFdel{October 4}\DIFdelend \DIFaddbegin \DIFadd{January 11}\DIFaddend , \DIFdelbegin \DIFdel{2018}\DIFdelend \DIFaddbegin \DIFadd{2019}\DIFaddend } & \textbf{Spec} & \textbf{\DIFdelbegin \DIFdel{October 4}\DIFdelend \DIFaddbegin \DIFadd{January 11}\DIFaddend , \DIFdelbegin \DIFdel{2018}\DIFdelend \DIFaddbegin \DIFadd{2019}\DIFaddend }
\DIFdelbegin %DIFDELCMD < \\
%DIFDELCMD < %%%
\DIFdelend \end{tabular}
\vspace{10pt}
\small
{
Issued by:\\
}
\vspace{0.3cm}
\large
{
\textbf{The TUBULAR Team} \\
}
\vspace{0.3cm}
\small
{
Approved by:\\
}
\vspace{0.3cm}
\large
{
\textbf{Dr. Thomas Kuhn}
}
\end{flushleft}
\newgeometry{left=2.5cm,right=2.5cm,top=2.5cm,bottom=3cm,headheight=30pt,headsep=30pt}
\pagestyle{firstp}
%\thispagestyle{firstp}
\section*{\small{\textbf{CHANGE RECORD}}}
\addcontentsline{toc}{section}{CHANGE RECORD}%
\begin{longtable}{|p{1.5cm}|p{2cm}|p{6cm}|p{3cm}|}\hline
\centering
\textbf{Version} & \textbf{Date} & \textbf{Changed chapter} & \textbf{Remarks} \\\hline
0 & 2017-12-20 & New Version & \\
1-0 & 2018-01-15 & All & PDR \\
1-1 & 2018-01-25 & 1.1, 2.2, 2.3, 3.3.3, 3.5, 4.1, 4.4.2, 4.5, 4.6, 4.7, 6.1.5, 6.1.6, 6.2, 6.4, 7.3.1 & Incorporating feedback from PDR \\
1-2 & 2018-03-12 & Added: 4.5.1, 4.5.2, 4.5.3, 4.5.4, 4.6.1, 4.6.2, 4.6.3, 4.6.4, 4.7.1, 4.7.2, 5.2, Appendix: D, E, G, F. Changed: 1.5, 2.1, 2.3, 2.4, 2.5, 3.1, 3.2, 3.3, 3.3.2, 3.4, 3.5, 4.1, 4.3.1, 4.4, 4.4.2, 4.5, 4.5.1, 4.5.2, 4.5.3, 4.5.4, 4.6, 4.6.3, 4.6.4, 4.7, 4.7.1, 4.7.2, 4.8, 5.1, 5.2, 6.1, 6.1.4, 6.2, 6.3, 6.4, Appendix: B C. & \\
2-0 & 2018-05-13 & Added: 5.3.1, 5.3.2, 6.4.2, 7.1, H.6.1 in Appendix H, Appendix I, Changed: 1.5, 2.2, 2.3, 3.2 3.3.1, 3.5, 4.1, 4.3.1, 4.4.2, 4.4.4, 4.5, 4.5.1, 4.5.2, 4.5.3, 4.5.4, 4.6, 4.6.3, 4.6.6, 4.71, 4.7.2, 4.8.2, 4.9, 5.1, 5.2, 5.3, 5.3.1, 6.1, 6.1.4, 6.2, 6.4.1, 7.1, 7.4, 7.4.1 Appendix E.3, F & CDR \\
2-1 & 2018-05-24 & Added: 4.2.2, 4.2.3 & \\
3-0 & 2018-07-10 & Changed: Acknowledgements, Abstract, 1.3, 1.5, 2.1, 2.2, 2.3, 3.1, 3.2, 3.3, 3.4, 3.5, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 5.1, 5.2, 5.3, 6.1, 6.2, 6.3, 7.1, 7.2, Appendix: A, B, C, D, E, F, G, H, I, J, K, L, M, N, O & IPR and appendix reordered. \\
3-1 & 2018-07-22 & Changed: 2.3, 3.3.1, 3.3.2, 3.5, 4.2.1, 4.3, 4.4.5, 4.5.1, 4.5.5, 4.5.6, 4.6.3, 5.1, 5.2, 5.3, 6.1.2, App C, App F, App M, App O & pre-IPR feedback\\ \hline
4-0 & 2018-10-04 & Added: 5.3.9, 5.3.10, 5.3.11, 5.3.12, 5.3.13, 5.3.14, 5.3.15, 5.3.16, 5.3.17, 5.3.18, 5.3.19, 5.3.20, 5.3.21 Changed: 2.2, 2.3, 3.3.1, 3.3.2, 3.5, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.9, 4.9, 5.1, 5.2, 5.3.5, 5.3.6, 5.3.7, 5.3.8, 6.1.2, 6.1.6, 6.2, 6.3, 6.4, 7.4, App A, App B, App C, App D, App E, App F, App M, App N, App O & EAR \DIFaddbegin \\ \DIFaddend \hline
\DIFaddbegin \DIFadd{5-0 }& \DIFadd{2019-01-11 }& \DIFadd{Added: 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.4 Changed: 1.3, 1.4, 3.1, 3.2, 3.3, 3.4, 3.5, 4.1, 4.2, 4.4, 4.5, 4.6, 4.7, 4.8, 4.8, 4.9, 5.2, 6.1, 6.2, 6.3, 6.4, 7.1, 7.2, 7.3, 7.4, 7.5 }& \DIFadd{Final Report }\\ \hline
\DIFaddend \end{longtable}
\newpage
\vspace{1cm}
\begin{tabular}{p{.15\textwidth} p{.85\textwidth}}
\textbf{Abstract:} &
\newpage
Carbon dioxide (CO$_{2}$), methane (CH$_{4}$), and carbon monoxide (CO) are three main greenhouse gases emitted by human activities. Developing a better understanding of their contribution to greenhouse effects requires more accessible, flexible, and scalable air sampling mechanisms. A balloon flight is the most cost-effective mechanism to obtain a vertical air profile through continuous sampling between the upper troposphere and the lower stratosphere. However, recovery time constraints due to gas mixture concerns geographically restrict the sampling near existing research centers where analysis of the recovered samples can take place. The TUBULAR experiment is a technology demonstrator for atmospheric research supporting an air sampling mechanism that would offer climate change researchers access to remote areas by minimizing the effect of gas mixtures within the collected samples so that recovery time is no longer a constraint. The experiment includes a secondary sampling mechanism that serves as reference against which the proposed sampling mechanism can be validated.
%\footnote{The main reason for changing the type of gas that will be detected from N$_2$O to CO is that the model of analyzer used is only able to detect CO$_{2}$, CH$_{4}$ and CO.\label{fn:ChangeN2OtoCO}} \\ %Document Abstract
\textbf{Keywords:} & %Document Keywords
Balloon Experiments for University Students, Climate Change, Stratospheric Air Sampling, AirCore, Sampling Bags, Greenhouse Gas, Carbon Dioxide (CO$_{2}$), Methane (CH$_{4}$), Carbon Monoxide (CO).
\end{tabular}
\vfill
\newpage
\tableofcontents
%\newpage
\newpage
\section*{PREFACE} \markboth{}{}
\addcontentsline{toc}{section}{PREFACE}
The Rocket and Balloon Experiments for University Students (REXUS/BEXUS) programme is realized under a bilateral Agency Agreement between the German Aerospace Center (DLR) and the Swedish National Space Board (SNSB). The Swedish share of the
payload has been made available to students from other European countries through a collaboration with the European Space Agency (ESA).
EuroLaunch, a cooperation between the Esrange Space Center of SSC and the Mobile Rocket Base (MORABA) of DLR, is responsible for the campaign management and operations of the launch vehicles. Experts from DLR, SSC, ZARM, and ESA provide
technical support to the student teams throughout the project.
The Student Experiment Documentation (SED) is a continuously updated document regarding the BEXUS student experiment TUBULAR -- Alternative to AirCore for Atmospheric Greenhouse Gas Sampling -- and undergoes reviews at the preliminary design review, the critical design review, the integration progress review, and the final experiment report.
The TUBULAR Team consists of a diverse and inter-disciplinary group of students from Luleå University of Technology's Masters programme in Atmospheric Studies, Space Engineering, and Spacecraft Design. The idea for the proposed experiment stems from concerns over the realities of climate change as a result of human activity coupled with the complexity and limitations in obtaining greenhouse gas profile data to support climate change research.
Based above the Arctic circle in Kiruna, Sweden, the TUBULAR Team is exposed to Arctic science research with which it \DIFdelbegin \DIFdel{will collaborate }\DIFdelend \DIFaddbegin \DIFadd{has collaborated }\DIFaddend in order to produce \DIFdelbegin \DIFdel{a }\DIFdelend research detailing the air sampling methodology, measurements, analysis, and findings.
\newpage
\section*{Acknowledgements} \markboth{}{}
The TUBULAR Team wishes to acknowledge the invaluable support received by the REXUS/BEXUS organizers, SNSB, DLR, ESA, SSC, ZARM, Esrange Space Centre, and ESA Education. In particular, the team's gratitude extends to the following project advisers who show special interest in our experiment:
\begin{itemize}
\item \textbf{Dr. Rigel Kivi}, Senior Scientist at the Finnish Meteorological Institute (FMI). A key project partner, Dr. Kivi's research and experience in Arctic atmospheric studies serves as a knowledge-base reference that ensures proper design of the experiment.
\item \textbf{Mr. Pauli Heikkinen}, Scientist at FMI. A key project partner, Dr. Heikkinen's research and experience in Arctic atmospheric studies serves as a knowledge-base reference that ensures proper design of the experiment.
\item \textbf{Dr. Uwe Raffalski}, Associate Professor at the Swedish Institute of Space physics (IRF) and the project's endorsing professor. Dr. Raffalski's research and experience in Arctic atmospheric studies serves as a knowledge-base reference that ensures proper design of the experiment.
\item \textbf{Dr. Thomas Kuhn}, Associate Professor at Luleå University of Technology (LTU). A project course offered by Dr. Kuhn serves as a merited university module all while providing the team with guidance and supervision.
\item \textbf{Mr. Olle Persson}, Operations Administrator at Luleå University of Technology (LTU). A former REXUS/BEXUS affiliate, Mr. Persson has been providing guidance based on his experience.
\item \textbf{Mr. Grzegorz Izworski}, Electromechanical Instrumentation Engineer at European Space Agency (ESA). Mr. Izworski is the team's mentor supporting design and development of the project to ensure launch success.
\item \textbf{Mr. Koen Debeule}, Electronic Design Engineer at European Space Agency (ESA). Mr. Debeule is the team's supporting mentor.
\item \textbf{Mr. Vince Still}, LTU alumni and previous BEXUS participant with project EXIST. Mr Still assists the team as a thermal consultant.
\end{itemize}
The TUBULAR Team would also like to acknowledge component sponsorship from the following manufacturers and suppliers all of which showed authentic interest in the project and provided outstanding support:
\begin{itemize}
\item \textbf{Restek} develops and manufactures GC and LC columns, reference standards, sample prep materials, and accessories for the international chromatography industry.
\item \textbf{SMC Pneumatics} specializes in pneumatic control engineering to support industrial automation. SMC develops a broad range of control systems and equipment, such as directional control valves, actuators, and air line equipment, to support diverse applications.
    \item \textbf{SilcoTek} provides coating solutions that are corrosion resistant, inert, H$_2$S resistant, anti-fouling, and low-stick.
\item \textbf{Swagelok} designs, manufactures, and delivers an expanding range of fluid system products and solutions.
\item \textbf{Teknolab Sorbent} provides products such as analysis instruments and accessories within reference materials, chromatography and separation technology.
%\item \textbf{Parker} develops and manufactures motion and control technologies. Precision engineered solutions for Aerospace, Climate Control, Electromechanical, and Filtration.
\item \textbf{Lagers Masking Consulting} specializes in maintenance products and services for industry, construction, and municipal facilities.
\item \textbf{Bosch Rexroth} manufactures products and systems associated with the control and motion of industrial and mobile equipment.
\item \textbf{KNF} develops, produces, and distributes high quality diaphragm pumps and systems for gases, vapors, and liquids.
\item \textbf{Eurocircuits} are specialist manufacturers of prototype and small batch PCBs.
\end{itemize}
\pagestyle{SED}
\newgeometry{left=2.5cm,right=2.5cm,top=2.5cm,bottom=3cm,headheight=30pt,headsep=30pt}
\raggedbottom % Fixed random spacing in text /H
%\pagenumbering{arabic}
%\input sections
% MAIN TEXT DOCUMENT
\section{Introduction}
\subsection{Scientific Background}
The ongoing and increasingly rapid melting of the Arctic ice cap has served as a reference to the global climate change. Researchers have noted that \enquote{the Arctic is warming about twice as fast as the rest of the world} \cite{Perkins} and projecting an ice-free Arctic Ocean as a realistic scenario in future summers similar to the Pliocene Epoch when \enquote{global temperature was only 2–3$\degree{C}$ warmer than today} \cite{Trace}. Suggestions that additional loss of Arctic sea ice can be avoided by reducing air pollutant and CO$_{2}$ growth still require confirmation through better climate effect measurements of CO$_{2}$ and non-CO$_{2}$ forcings \cite{Trace}. Such measurements bear high costs, particularly in air sampling for trace gas concentrations in the region between the upper troposphere and the lower stratosphere which have a significant effect on the Earth's climate. There is little information on distribution of trace gases at the stratosphere due to the inherent difficulty of measuring gases above aircraft altitudes.
Trace gases are gases which make up less than 1\% of the Earth's atmosphere. They include all gases except nitrogen and oxygen. In terms of climate change, the main concern for the scientific community is the concentrations of CO$_2$ and CH$_4$, which make up less than 0.1\% of the trace gases and are referred to as greenhouse gases. Greenhouse gas concentrations are measured in parts per million (ppm) and parts per billion (ppb). They are the main drivers of the greenhouse effect caused by human activity, as they trap heat in the atmosphere. Larger emissions of greenhouse gases lead to higher concentrations of those gases in the atmosphere, thus contributing to climate change.
\subsection{Mission Statement}
There is little information on the distribution of trace gases at the stratosphere due to the inherent difficulty and high cost of air sampling above aircraft altitudes \cite{Trace}. The experiment seeks to contribute to and support climate change research by proposing and validating a low-cost air sampling mechanism that reduces the current complexities and limitations of obtaining data on stratospheric greenhouse gas distribution.
\pagebreak
\subsection{Experiment Objectives}
Beyond providing knowledge on greenhouse gas distributions, the sampling obtained from the experiment \DIFdelbegin \DIFdel{will serve }\DIFdelend \DIFaddbegin \DIFadd{serves }\DIFaddend as a reference to validate the robustness and reliability of the proposed sampling system through comparative analysis of results obtained with a reference sampling system.
The primary objective of the experiment consisted of validating the proposed sampling system as a reliable mechanism that enables sampling of stratospheric greenhouse gases in remote areas. Achieving this objective consisted of developing a cost-effective and re-usable stratospheric air sampling system (i.e. AAC). Samples collected by the proposed mechanism were to be compared against samples collected by a proven sampling system (i.e. CAC). The proven sampling system was part of the experimental payload as a reference to validate the proof-of-concept air sampling system.
The secondary objective of the experiment was to analyze the samples collected by both systems in a manner that contributes to climate change research in the Arctic region. The trace gas profiles to be analyzed were those of carbon dioxide (CO$_{2}$), methane (CH$_{4}$), and carbon monoxide (CO)\footnote{The third gas being sampled was changed from N$_{2}$O to CO. The main reason for this change was that the model of analyzer used was only able to detect CO$_{2}$, CH$_{4}$ and CO.\label{fn:ChangeN2OtoCO}}. The research activities will culminate in a research paper written in collaboration with FMI.
\subsection{Experiment Concept}
The experiment sought to test the viability and reliability of a proposed cost-effective alternative to the AirCore Sampling System. The AirCore Sampling System consists of a long and thin stainless steel tube shaped in the form of a coil which takes advantage of changes in pressure during descent to sample the surrounding atmosphere and preserve a profile (see Figure \ref{fig:A1} in Appendix \ref{sec:appA}). Sampling during a balloon’s Descent Phase results in a profile shape extending the knowledge of the distribution of trace gases for the measured column between the upper troposphere and the lower stratosphere \cite{Karion}. The experiment consisted of two sampling subsystems: a conventional implementation of AirCore as described above, henceforth referred to as CAC, and a proposed alternative, henceforth referred to as the Alternative to AirCore (AAC).
The proposed AAC system was primarily motivated by the CAC sampling mechanism lacking flexibility in choice of coverage area due to the geographical restriction imposed by the irreversible process of gas mixing along the air column sampled in its stainless steel tube. Because of this, the sampling region for the CAC system needs to remain within proximity to research facilities for post-flight gas analysis. The AAC sampling system is a proposed alternative configuration to the CAC sampling system that has been designed to address this limitation while also improving cost-effectiveness. The AAC sampling system consists of a series of small independent air sampling bags (see Figure \ref{fig:A2} in Appendix \ref{sec:appA}) rather than the CAC's single long and coiled tube. Each sampling bag was allocated a vertical sampling range capped at 500 meters so that mixing of gases becomes a lesser concern.
The use of sampling bags in series rather than a single long tube is meant to tackle limitations of the CAC by 1) reducing the system implementation cost inherent to the production of a long tube and 2) enabling sampling of remote areas by reducing the effect of mixing of gases in post-analysis. However, the AAC comes with its own limitations, as its discrete sampling does not allow for the type of continuous profiling made possible by the CAC coiled tube. The overall design of the AAC was approached with miniaturization, cost-effectiveness, and design for manufacturability (DFM) in mind with the purpose of enabling ease of replication.
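To make the discrete-sampling concept more concrete, the following minimal sketch (illustrative only; the altitude slices, bag count, and function names are assumptions, not the flight software) shows how each bag could be assigned a vertical sampling range of at most 500 m and how only the valve of the bag matching the current altitude would be opened:
\begin{verbatim}
# Minimal sketch of altitude-gated discrete sampling (illustrative only;
# the bag ranges and function names are assumptions, not the flight code).

# Each bag samples one vertical slice no wider than 500 m.
BAG_RANGES_M = [
    (18_000, 18_500),
    (20_000, 20_500),
    (22_000, 22_500),
]

def active_bag(altitude_m):
    """Return the index of the bag whose slice contains the altitude, if any."""
    for i, (start, stop) in enumerate(BAG_RANGES_M):
        if start <= altitude_m < stop:
            return i
    return None

def update_valves(altitude_m, valves):
    """Open only the valve of the bag assigned to the current altitude."""
    bag = active_bag(altitude_m)
    for i in range(len(valves)):
        valves[i] = (i == bag)

valves = [False] * len(BAG_RANGES_M)
update_valves(20_250, valves)
print(valves)  # [False, True, False]
\end{verbatim}
In contrast, the CAC tube records a continuous profile, which is why the AAC trades vertical resolution for flexibility in where the samples can be analyzed.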
\pagebreak
\subsection{Team Details}
The TUBULAR Team consists of diverse and interdisciplinary team members, all of whom are studying at the Master's level at LTU's Space Campus in Kiruna, Sweden.
\bigskip
\begin{longtable}[]{m{0.25\textwidth} m{0.6\textwidth}}
\includegraphics[width=0.2\textwidth]{1-introduction/img/natalie-lawton.jpg} & \textbf{Natalie Lawton - Management and Electrical Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: MEng in Aerospace Engineering. Previous experience in UAV avionic systems and emissions measurement techniques.
\smallskip
\textit{Responsibilities}: Acting as Systems Engineer / Project Manager from the CDR until the end of the project; previously acted as deputy to these roles and worked in the Electrical Division. Ensured testing was planned and executed. Oversaw manufacture and prevented project creep. Coordinated between different teams, project stakeholders, and documentation efforts.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/georges-louis-joseph-labreche.jpg} & \textbf{Georges L. J. Labrèche - Management Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: BSc in Software Engineering with experience in technical leadership and project management in software development.
\smallskip
\textit{Responsibilities}: Acting as Systems Engineer / Project Manager and managing overall implementation of the project until the Critical Design Review (CDR). Establishing and overseeing product development cycle. Coordinating between different teams, project stakeholders, and documentation efforts.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/agues-paszkowsky.jpg} & \textbf{Nuria Agües Paszkowsky - Scientific Division}
\smallskip
\textit{Current Education}: MSc in Earth Atmosphere and the Solar System.
\smallskip
\textit{Previous Education}: BSc in Aerospace Engineering.
\smallskip
\textit{Responsibilities}: Defining experiment parameters; data analysis; interpreting and documenting measurements; research on previous CAC experiments for comparative analysis purposes; contacting researchers or institutions working on similar projects; exploring potential partnership with researchers and institutions, evaluating the reliability of the proposed AAC sampling system; conducting measurements of collected samples; documenting and publishing findings.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/kiki-blazaki.jpg} & \textbf{Kyriaki Blazaki - Scientific Division}
\smallskip
\textit{Current Education}: MSc in Earth Atmosphere and the Solar System.
\smallskip
\textit{Previous Education}: BSc in Physics.
\smallskip
\textit{Responsibilities}: Coordinating between the Scientific Division and the Project Manager; defining experiment parameters; data analysis; interpreting and documenting measurements; research on previous CAC experiments for comparative analysis purposes; evaluating the reliability of the proposed AAC sampling system; conducting measurements of collected samples; documenting and publishing findings.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/emily-chen.jpeg} & \textbf{Emily Chen - Mechanical Division}
\smallskip
\textit{Current Education}: MSc in Space Engineering (5th Year).
\smallskip
\textit{Responsibilities}: Mechanical designing and assembly of CAC subsystem; analyzing the test results and changing the design as needed in collaboration with the team leader; integrating and assembling final design.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/jordi-coll-ortega.jpg} & \textbf{Jordi Coll Ortega - Mechanical Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: BASc in Aerospace Vehicle Engineering.
\smallskip
\textit{Responsibilities}: Designing or redesigning cost-effective mechanical devices using analysis and computer-aided design; developing and testing prototypes of designed devices; analyzing the test results and changing the design as needed in collaboration with the team lead; integrating and assembling final design.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/gustav-dryssen.jpg} & \textbf{Gustav Dyrssen - Software Division}
\smallskip
\textit{Current Education}: MSc in Space Engineering (5th Year).
\smallskip
\textit{Responsibilities}: Leading quality assurance and testing efforts; enforcing software testing best practices such as continuous integration testing and regression testing; reviewing requirements and specifications in order to foresee potential issues; providing input on functional requirements; advising on design; formalizing test cases; tracking defects and ensuring their resolution; facilitating code review sessions; supporting software implementation efforts.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/erik-fagerstrom.jpg} & \textbf{Erik Fagerström - Thermal Division}
\smallskip
\textit{Current Education}: MSc in Space Engineering (5th Year).
\smallskip
\textit{Responsibilities}: Coordinating between the Thermal Division and the Project Manager; planning the project's thermal analysis and testing strategy; running thermal simulations of proposed designs and analyzing the results.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/pau-molas-roca.jpg} & \textbf{Pau Molas Roca - Mechanical Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: BSc in Aerospace Technology Engineering, Mechanical experience.
\smallskip
\textit{Responsibilities}: Coordinating between the Mechanical Division and the Project Manager; designing or redesigning cost-effective mechanical devices using analysis and computer-aided design; producing details of specifications and outline designs; overseeing the manufacturing process for the devices; identifying material and component suppliers; integrating and assembling final design. \bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/emil-nordqvist.jpg} & \textbf{Emil Nordqvist - Electrical Division}
\smallskip
\textit{Current Education}: MSc in Space Engineering (5th Year).
\smallskip
\textit{Responsibilities}: Quality assurance of circuit design and implementation. Developing, testing, and evaluating theoretical designs. \bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/muhammad-ansyar-rafi-putra.jpg} & \textbf{Muhammad Ansyar Rafi Putra - Software Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: BSc in Aerospace Engineering.
\smallskip
\textit{Responsibilities}: Coordinating between the Software Division and the Project Manager; gathering software requirements; formalizing software specifications; drafting architecture design, detailed design; leading software implementation efforts.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/hamad-saddiqi.jpg} & \textbf{Hamad Siddiqi - Electrical Division}
\smallskip
\textit{Current Education}: MSc Satellite Engineering.
\smallskip
\textit{Previous Education}: BSc in Electrical Engineering with experience in telecommunication industry and electronics.
\smallskip
\textit{Responsibilities}: Coordinating between the Electrical Division and the Project Manager; designing and implementing cost-effective circuitry using analysis and computer-aided design; producing details of specifications and outline designs; developing, testing, and evaluating theoretical designs; identifying material as well as component suppliers.
\bigskip
\\
\includegraphics[width=0.2\textwidth]{1-introduction/img/ivan-zankov.jpg} & \textbf{Ivan Zankov - Thermal Division}
\smallskip
\textit{Current Education}: MSc in Spacecraft Design.
\smallskip
\textit{Previous Education}: BEng in Mechanical Engineering.
\smallskip
\textit{Responsibilities}: Thermal analysis of proposed designs and analysis result based recommendations.
\\
\label{tab:people}
\end{longtable}
\raggedbottom
\pagebreak
\section{Experiment Requirements and Constraints}
This section does not list obsolete requirements. For a complete list of requirements, including obsolete ones, refer to Appendix \ref{sec:appFullListOfRequirements}.
\subsection{Functional Requirements}
\begin{enumerate}
\item[F.2] The experiment \textit{shall} collect air samples by the CAC.
\item[F.3] The experiment \textit{shall} collect air samples by the AAC.
\item[F.9] The experiment \textit{should} measure the air intake flow to the AAC.
\item[F.10] The experiment \textit{shall} measure the air pressure.
\item[F.11] The experiment \textit{shall} measure the temperature.
\end{enumerate}
\subsection{Performance Requirements}
\begin{enumerate}
%\item The programmable sampling rate of the pressure sensor \textit{shall} not be lesser than .
\item[P.12] The accuracy of the ambient pressure measurements \textit{shall} be $\pm$1.5 hPa at 25$\degree{C}$.
\item[P.13] The accuracy of temperature measurements \textit{shall} be +3.5/-3$\degree{C}$ (max) over the range -55$\degree{C}$ to 150$\degree{C}$.
\item[P.23] The temperature sensor sampling rate \textit{shall} be 1 Hz.\label{newsamplerate}
\item[P.24] The temperature of the Pump \textit{shall} be between 5$\degree{C}$ and 40$\degree{C}$.
\item[P.25] The minimum volume of air in the bags for analysis \textit{shall} be 0.18 L at ground level.
\item[P.26] The equivalent flow rate of the pump \textit{shall} be between 8 and 3 L/min from ground level up to 24 km altitude (a worked example follows this list).
\item[P.27] The accuracy range of the sampling time, or the resolution, \textit{shall} be less than 52.94 s, or 423.53 m.
\item[P.28] The pressure sensor sampling rate \textit{shall} be 1 Hz.\label{newsamplerate}
\item[P.29] The airflow sensor sampling rate \textit{shall} be 1 Hz.\label{newsamplerate}
\item[P.30] The accuracy of the pressure measurements inside the tubing and sampling bags \textit{shall} be $\pm$0.005 bar at 25$\degree{C}$.
\end{enumerate}
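As a back-of-the-envelope check of how requirements P.25, P.26 and P.27 fit together (a worked example only, assuming an ambient pressure of roughly 30 hPa at 24 km altitude and ideal-gas scaling; the exact figures depend on the actual flight profile), the ambient-pressure volume needed to obtain 0.18 L of air at ground level is
\[
V_{\mathrm{ambient}} = V_{\mathrm{ground}}\,\frac{p_{\mathrm{ground}}}{p_{\mathrm{ambient}}}
\approx 0.18\,\mathrm{L}\times\frac{1013\,\mathrm{hPa}}{30\,\mathrm{hPa}} \approx 6\,\mathrm{L},
\]
which at the minimum equivalent flow rate of 3 L/min corresponds to roughly two minutes of pumping per bag. Likewise, the two resolution figures in P.27 imply an assumed vertical speed of $423.53\,\mathrm{m}/52.94\,\mathrm{s}\approx 8\,\mathrm{m/s}$.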
\pagebreak
\subsection{Design Requirements}
\begin{enumerate}
\item[D.1] The experiment \textit{shall} operate in the temperature profile of the BEXUS flight\cite{BexusManual}.
\item[D.2] The experiment \textit{shall} operate in the vibration profile of the BEXUS flight\cite{BexusManual}.
\item[D.3] The experiment \textit{shall} not have sharp edges or loose connections to the gondola that can harm the launch vehicle, other experiments, and people.%\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.4] The experiment's communication system \textit{shall} be compatible with the gondola's E-link system with the RJF21B connector, over UDP for down-link and TCP for up-link (a minimal connection sketch follows this list).
\item[D.5] The experiment's power supply \textit{shall} have 24 V, 12 V, 5 V and 3.3 V power outputs and be able to take a 28.8 V input through the Amphenol PT02E8-4P connector supplied from the gondola.
\item[D.7] For the supplied voltage of 28.8 V, the total continuous DC current draw \textit{should} be below 1.8 A.
\item[D.8] The total power consumption \textit{should} be below 374 Wh.
\item[D.16] The experiment \textit{shall} be able to autonomously turn itself off just before landing.
\item[D.17] The experiment box \textit{shall} be placed with at least one face exposed to the outside.
\item[D.18] The experiment \textit{shall} operate in the pressure profile of the BEXUS flight\cite{BexusManual}.
\item[D.19] The experiment \textit{shall} operate in the vertical and horizontal accelerations profile of the BEXUS flight\cite{BexusManual}.
\item[D.21] The experiment \textit{shall} be attached to the gondola's rails.
\item[D.22] The telecommand data rate \textit{shall} not be over 10 kb/s.
\item[D.23] The air intake rate of the air pump \textit{shall} be equivalent to a minimum of 3 L/min at 24 km altitude.
\item[D.24] The temperature of the Brain\footnote{The Brain is the central command unit, which contains the Electronic Box and the pneumatic sampling system.} \textit{shall} be between -10$\degree{C}$ and 25$\degree{C}$.
\item[D.26] The air sampling systems \textit{shall} filter out all water molecules before filling the sampling bags.
\item[D.27] The total weight of the experiment \textit{shall} be less than 28 kg.
\item[D.28] The AAC box \textit{shall} be able to fit at least $6$ air sampling bags.
\item[D.29] The CAC box \textit{shall} take less than 3 minutes to be removed from the gondola without removing the whole experiment.
\item[D.30] The AAC \textit{shall} be re-usable for future balloon flights.
\item[D.31] The altitude from which a sampling bag will start sampling \textit{shall} be programmable.
\item[D.32] The altitude from which a sampling bag will stop sampling \textit{shall} be programmable.
\end{enumerate}
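As referenced in requirement D.4, the sketch below illustrates the down-link/up-link split of the communication system. It is a minimal illustration only: the IP address and port numbers are placeholder assumptions and do not reflect the actual E-link configuration or the flight software.
\begin{verbatim}
# Minimal sketch of the D.4 split: telemetry is sent as UDP datagrams
# (down-link), telecommands are received over TCP (up-link).
# The address and ports below are placeholder assumptions.
import socket

GROUND_IP, TM_PORT, TC_PORT = "192.168.1.10", 3000, 3001

# Down-link: connectionless UDP datagrams carrying telemetry frames.
tm_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_telemetry(frame):
    tm_sock.sendto(frame, (GROUND_IP, TM_PORT))

# Up-link: a TCP listener, so telecommands arrive reliably and in order.
tc_listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tc_listener.bind(("", TC_PORT))
tc_listener.listen(1)

def receive_telecommand():
    conn, _addr = tc_listener.accept()   # blocks until the ground connects
    with conn:
        return conn.recv(1024)

send_telemetry(b"status:ok")
\end{verbatim}
UDP suits the down-link because occasional loss of periodic telemetry is tolerable, while TCP gives the up-link the reliable, ordered delivery that telecommands require.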
\pagebreak
\subsection{Operational Requirements}
\begin{enumerate}
\item[O.13] The experiment \textit{should} function automatically.
\item[O.14] The experiment's air sampling mechanisms \textit{shall} have a manual override.
\end{enumerate}
\subsection{Constraints}
\begin{enumerate}
\item[C.1] Constraints specified in the BEXUS User Manual.
\end{enumerate}
\pagebreak
\section{Project Planning}
\subsection{Work Breakdown Structure}
The team was categorized into different groups of responsibilities with dedicated leaders who reported to and coordinated with the Project Manager. Leadership was organized on a rotational basis when the need arose. The formation of these divisions constituted a work breakdown structure, which is illustrated in Figure \ref{fig:work-breakdown-structure}.
The Management was composed of a Project Manager and a Deputy Project Manager, both of whom acted as Systems Engineers and managed the overall implementation of the project. The Project Manager was responsible for establishing and overseeing the product development cycle; coordinating between different teams, project stakeholders, and documentation efforts; outreach and public relations; fundraising; monitoring and reporting; system integration; and quality assurance. Once all subsystems had been assembled, the Project Manager was also responsible for overseeing the integration processes leading to the final experiment setup and put emphasis on leading quality assurance integration testing efforts. The Deputy Project Manager assisted the Project Manager in all management duties in a manner that ensured replaceability when necessary.
The Scientific Division was responsible for defining experiment parameters; data analysis; interpreting and documenting measurements; researching previous CAC experiments for comparative analysis purposes; evaluating the reliability of the proposed AAC sampling system; conducting measurements of collected samples; contacting researchers or institutions working on similar projects; exploring potential partnerships with researchers and institutions; and documenting and publishing findings.
The Mechanical Division was responsible for designing or redesigning cost-effective mechanical devices using analysis and computer-aided design; producing details of specifications and outline designs; overseeing the manufacturing process for the devices; identifying material and component suppliers; developing and testing prototypes of designed devices; analyzing test results and changing the design as needed; and integrating and assembling the final design.
The Electrical Division was responsible for designing and implementing cost-effective circuitry using analysis and computer-aided design; producing details of specifications and outline designs; developing, testing, and evaluating theoretical designs; identifying material as well as component suppliers; reviewing and testing proposed designs; recommending modifications following prototype test results; and assembling the designed circuitry.
The Software Division was responsible for gathering software requirements; formalizing software specifications; drafting the architecture design; leading software implementation efforts; leading quality assurance and testing efforts; enforcing software testing best practices such as continuous integration testing and regression testing; reviewing requirements and specifications in order to foresee potential issues; providing input for functional requirements; advising on design; formalizing test cases; tracking defects and ensuring their resolution; and facilitating code review sessions.
The Thermal Division was responsible for ensuring thermal regulation of the payload as per the operational requirements of all experiment components; evaluating designs against thermal simulations and proposing improvements; and managing mechanical design and electrical power limitations towards providing passive and active thermal control systems.
\begin{landscape}
\begin{figure}[p]
\begin{align*}
\includegraphics[width=24cm]{3-project-planning/img/work-breakdown-structure-updated.png}
\end{align*}
\caption{Work Breakdown Structure.}\label{fig:work-breakdown-structure}
\end{figure}
\end{landscape}
\pagebreak
\subsection{Schedule}
Scheduling of the project is presented in a Gantt Chart overview in Figure \ref{fig:schedule-gantt-chart}. Exam period constraints were included in order to evaluate risks in person-day allocations to project implementation. It was expected that during exam periods the team's work output would be lower than usual but that project activities would continue, and time was planned accordingly to accommodate this:
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{3-project-planning/img/BEXUS-SED-GanttChart-Overview.png}
\end{align*}
\caption{Project Schedule Gantt Chart.}\label{fig:schedule-gantt-chart}
\end{figure}
An expanded version of the Gantt Chart, with a detailed listing of all sub-tasks not shown in Figure \ref{fig:schedule-gantt-chart}, can be found in Appendix \ref{sec:appF}. This expanded Gantt Chart includes all tasks related to the test plan, and internal deadlines were set so that a first draft of the documentation was completed one week in advance to allow its contents to be checked. Build and test internal deadlines were also placed one week in advance to allow a buffer in case things did not go as expected. The tests were scheduled as early as possible to allow time for rescheduling in case of a failed result. Some high priority tests, see Section \ref{sec:5.2.1-testpriority}, were expected to be very difficult to reschedule, therefore extra time was built into the test duration to allow for multiple attempts.
\pagebreak
\subsection{Resources}
\subsubsection{Manpower}
The TUBULAR Team is categorized into divisions as summarized in Table \ref{tab:divisions-members}:
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{Management} & \textbf{Scientific} & \textbf{Mechanical} & \textbf{Electrical} & \textbf{Thermal} & \textbf{Software} \\ \hline
Natalie Lawton* & Kyriaki Blazaki* & Pau Molas Roca* & Hamad Siddiqi* & Erik Fagerström* & Muhammad Ansyar Rafi Putra* \\ \hline
Georges L. J. Labrèche & Nuria Agues Paszkowsky & Jordi Coll Ortega & Emil Nordqvist & Ivan Zankov & Gustav Dyrssen \\ \hline
& & Emily Chen & & & \\ \hline
\end{tabular}%
}
\caption{Project Divisions and Members (Asterisks Denote Division Leaders).}
\label{tab:divisions-members}
\end{table}
\raggedbottom
The experience of the TUBULAR Team members is listed in Table \ref{tab:team-member-experience}:
\begin{table}[H]
\centering
\begin{tabular}{|l|m{11cm}|}
\hline
\textbf{Team Member} & \textbf{Project Related Experience} \\ \hline
Natalie Lawton & MSc in Spacecraft Design (2nd Year). \\& MEng in Aerospace Engineering.\\& Previous experience in UAV avionic systems and emissions measurement techniques. \\ \hline
Nuria Agues Paszkowsky & MSc in Earth Atmosphere and the Solar System (2nd Year). \\& BSc in Aerospace Engineering.\\ \hline
Kyriaki Blazaki & MSc in Earth Atmosphere and the Solar System (2nd Year). \\& BSc in Physics. \\ \hline
Emily Chen & MSc in Space Engineering (5th Year). \\ \hline
Jordi Coll Ortega & MSc in Spacecraft Design (2nd Year). \\& BSc in Aerospace Vehicle Engineering. \\ \hline
Gustav Dyrssen & MSc in Space Engineering (5th Year).\\ \hline
Erik Fagerström & MSc in Space Engineering (5th Year). \\ \hline
Georges L. J. Labrèche & MSc in Spacecraft Design (2nd Year). \\& BSc in Software Engineering.\\ \hline
Muhammad Ansyar Rafi Putra & MSc in Spacecraft Design (2nd Year). \\& BSc in Aerospace Engineering. \\ \hline
Pau Molas Roca & MSc in Spacecraft Design (2nd Year). \\& BSc in Aerospace Technology Engineering, Mechanical experience. \\ \hline
Emil Nordqvist & MSc in Space Engineering (5th Year). \\ \hline
Hamad Siddiqi & MSc Satellite Engineering (4th Year) \\& BSc in Electrical Engineering.\\& Experience in telecommunication industry and electronics. \\ \hline
Ivan Zankov & MSc in Spacecraft Design (2nd Year). \\& BEng in Mechanical Engineering.\\ \hline
\end{tabular}
\caption{Project Related Experience of Team Members.}
\label{tab:team-member-experience}
\end{table}
\raggedbottom
The initial projected effort to be contributed by each team member was averaged at 1.5 hours per person per day, corresponding to a team total of 15 hours per day. Since then, 3 new members had been included in the team, increasing the projected daily effort to 19.5 hours per day. During the summer period many team members were away, which meant that the overall team hours changed little. The periods of these different effort capacities are listed in Table \ref{tab:daily-team-effort-per-period}:
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{From} & \textbf{To} & \textbf{Capacity (hours/day)} \\ \hline
08/01/2018 & 18/03/2018 & 15 \\ \hline
19/03/2018 & 08/04/2018 & 16.5 \\ \hline
09/04/2018 & 09/05/2018 & 18 \\ \hline
10/05/2018 & 15/08/2018 & 19.5 \\ \hline
15/08/2018 & 22/10/2018 & 19.5 \\ \hline
23/10/2018 & 31/01/2019 & 19.5 \\ \hline
\end{tabular}
\caption{Projected Daily Team Effort per Period.}
\label{tab:daily-team-effort-per-period}
\end{table}
Taking into account all team members and the mid-project changes in team size, the efforts/capacity projected to be allocated to each stages of the project during 2018 are summarized in Table \ref{tab:effort-allocation-stages}:
\begin{table}[H]
\centering
\begin{tabular}{lcc|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Stage}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Start\\ Date\end{tabular}}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}End\\ Date\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Duration\\ (days)\end{tabular}}} & \multicolumn{3}{c|}{\textbf{Effort (hours)}} \\ \cline{5-7}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & & & \textbf{Capacity} & \textbf{Actual} & \multicolumn{1}{l|}{\textbf{Diff. (\%)}} \\ \hline
\multicolumn{1}{|l|}{Preliminary Design} & \multicolumn{1}{c|}{08/01} & 11/02 & 35 & 525 & 708 & +29.68 \\ \hline
\multicolumn{1}{|l|}{Critical Design} & \multicolumn{1}{c|}{12/02} & 03/06 & 112 & 1,680 & 2,649 & +57.66 \\ \hline
\multicolumn{1}{|l|}{Experiment Building and Testing} & \multicolumn{1}{c|}{04/06} & 16/09 & 105 & 2,048 & 1,943 & -5.40 \\ \hline
\multicolumn{1}{|l|}{Final Experiment Preparations} & \multicolumn{1}{c|}{17/09} & 11/10 & 25 & 488 & 571 & +17.00 \\ \hline
\multicolumn{1}{|l|}{Launch Campaign} & \multicolumn{1}{c|}{12/10} & 22/10 & 10 & 390 & 777 & +99.23 \\ \hline
\multicolumn{1}{|l|}{Data Analysis and Reporting} & \multicolumn{1}{c|}{23/10} & 30/01 & 69 & 1,346 & 245 & -81.78 \\ \hline
\multicolumn{1}{r}{\textbf{}} & \multicolumn{1}{l}{} & \multicolumn{1}{r|}{\textbf{Total:}} & \textbf{356} & \textbf{7,989} & \textit{6,939} & \textit{-13.14} \\ \cline{4-7}
\end{tabular}
\caption{Project Effort Allocation per Project Stages.}
\label{tab:effort-allocation-stages}
\end{table}
It can be seen that at some stages it was necessary to work more than was projected, while at other stages less work was required to achieve the aims.

All TUBULAR Team members are based in Kiruna, Sweden, just \SI{40}{\kilo\meter} from Esrange Space Center. Furthermore, all team members are enrolled in LTU Master programmes in Kiruna and thus remained at LTU during the entire project period. Special attention was paid to planning tasks during the summer period, when many team members traveled abroad. A timeline of team member availability until January 2019 is available in Appendix \ref{sec:appD}. A significant risk was observed during the summer months from June to August, when most members were only partially available and some completely unavailable. As such, team member availability and work commitments over the summer were negotiated across team members in order to guarantee that at least one member per division was present in Kiruna over the summer, with the exception of the Software Division, which could work remotely. Furthermore, the Project Manager role was assigned to the Deputy Project Manager due to an extended full-time unavailability after the CDR.
As part of their respective Master programmes, most TUBULAR Team members are enrolled in a project course at LTU. The TUBULAR project serves as the course project for most team members, from which they will obtain ECTS credits. This course is supervised by Dr. Thomas Kuhn, Associate Professor at LTU.
\pagebreak
\subsubsection{Budget}
\label{sec:3.2.2}
The experiment had a total mass of \SI{24}{\kilo\gram} at a cost of 33,211 EUR. An error margin was included in the budget corresponding to 10\% of the total cost of components to be purchased. A complete budget is available in Appendix \ref{sec:appO} and a detailed component mass and cost breakdown is available in Section \ref{sec:experiment-components} Experiment Components. This breakdown does not include spare components, which are accounted for in the total costs. Dimensions and mass of the experiment are summarized in Table \ref{table:experiment-summary} in Section \ref{sec:mechanical-design} and Table \ref{tab:dim-mass-tab} in Section \ref{sec:dim-mass}. A contingency fund of 900 EUR was allocated for unforeseen events such as component failures. Component loans and donations from sponsors account for 85\% of the project's total cost. LTU and SNSB funding accounts for the remaining 15\%.
%\input{3-project-planning/tables/mass-and-cost-budget.tex}
The project benefited from component donations from Restek, SMC Pneumatics, Teknolab Sorbent, KNF, Eurocircuits, and Lagers Masking Consulting, as well as component loans from FMI. Furthermore, discounts were offered by Teknolab Sorbent and Bosch Rexroth. The Euro value allocation of these sponsorships is presented in Table \ref{table:sponsroship-allocation}.
\begin{table}[H]
\centering
\begin{tabular}{l|m{0.12\textwidth}|l|r|r|r|c}
\hline
\multicolumn{1}{|l|}{\textbf{Sponsor}} & \multicolumn{1}{|c|}{\textbf{Type}} & \multicolumn{1}{c|}{\textbf{Value}} & \multicolumn{1}{c|}{\textbf{Allocated}} & \multicolumn{1}{c|}{\textbf{Unallocated}} & \multicolumn{1}{c|}{\textbf{\% Allocation}} & \multicolumn{1}{c|}{\textbf{Status}} \\ \hline
\multicolumn{1}{|l|}{LTU} & Funds & 2,500.00 & 2,301.57 & 1,874.62 & 75 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{SNSB} & Funds & 2,909.80 & 2,634.40 & 275.40 & 91 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{FMI} & Component loan & 22,561.45 & 22,561.45 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{Restek} & Component donation & 1,120.00 & 1,120.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{Teknolab} & Component donation & 380.00 & 380.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{SMC} & Component donation & 860.00 & 860.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{Lagers Maskin} & Component donation & 300.00 & 300.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{Swagelok} & Component donation & 1,863.82 & 1,863.82 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{KNF} & Component loan & 350.00 & 350.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{SilcoTek} & Component donation & 840.00 & 840.00 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
\multicolumn{1}{|l|}{Eurocircuits} & Component donation & 426.95 & 426.95 & 0.00 & 100 & \multicolumn{1}{c|}{Received} \\ \hline
& \multicolumn{1}{l|}{\textbf{Total}} & \textbf{34,112.01} & \textbf{33,211.24} & \textbf{900.77} & \textbf{97} & \multicolumn{1}{l}{} \\ \cline{2-6}
\end{tabular}
\caption{Allocation of Sponsorship Funds and Component Donation Values. Amounts in EUR.}
\label{table:sponsroship-allocation}
\end{table}
\subsubsection{External Support}
Partnership with FMI and IRF provided the team with technical guidance in implementing the sampling system. FMI’s experience in implementing past CAC sampling systems provided invaluable lessons learned towards conceptualizing, designing, and implementing the proposed AAC sampling system.
FMI was a key partner in the TUBULAR project; its scientific experts advised and supported the TUBULAR Team by sharing knowledge and experience and by granting access to equipment. As per the agreement shown in Appendix \ref{sec:appG}, FMI provided the TUBULAR Team with the AirCore stainless steel tube component of the CAC subsystem as well as the post-flight gas analyzer. This arrangement required careful consideration of the placement of the experiment in order to minimize hardware damage risks. These contributions resulted in significant cost savings regarding equipment and component procurement.
Daily access to LTU's Space Campus in Kiruna exposed the team to scientific mentorship and expert guidance from both professors and researchers involved in the study of greenhouse gases and climate change. Dr. Uwe Raffalski, Associate Professor (Docent) at IRF, was one of many researchers involved in climate studies who mentored the team.
\pagebreak
\subsection{Outreach Approach}
The experiment, as well as the REXUS/BEXUS programme and its partners, has been promoted through the following activities:
\begin{itemize}
\item Research paper publication work in partnership with FMI detailing the sampling methodology, measurement results, analysis, and findings.
\item Collected data will be licensed as open data to be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.
\item A website to summarize the experiment and provide regular updates. Backend web analytics included to gauge interest on the project through number of visitors and their origins (See Appendix \ref{sec:appE}).
\item Dedicated Facebook page used as publicly accessible logbook detailing challenges, progress, and status of the project. Open for comments and questions (See Figure \ref{fig:outreach-facebook} in Appendix \ref{sec:appE}).
\item Two Instagram accounts for short and frequent image and video focused updates. A primary Instagram account will be dedicated to project updates. A secondary account will reach out to a broader audience by focusing on space instruments in general and cross-reference TUBULAR related activities when relevant (See Figures \ref{fig:outreach-instagram}, \ref{fig:outreach-instagram-si-1}, and \ref{fig:outreach-instagram-si-2} in Appendix \ref{sec:appE}).
\item GitHub account to host all project software code under a free and open source license (See Figure \ref{fig:outreach-github} in Appendix \ref{sec:appE}). Other REXUS/BEXUS teams were invited to host their code in this account.
\item \enquote{Show and Tell} trips to local high schools and universities. Team members were responsible for organizing such presentations through any of their travel opportunities abroad.
\item Articles and/or blogposts about the project in team members' alma mater websites.
\item In-booth presentation and poster display in the seminars or career events at different universities.
\item A thoroughly documented and user-friendly manual on how to build, replicate, and launch the CAC and AAC sampling systems will be produced and published.
\end{itemize}
\pagebreak
\subsection{Risk Register}
\textbf{Risk ID}
\begin{enumerate}[label={}]
\item TC – Technical/Implementation
\item MS – Mission (operational performance)
\item SF – Safety
\item VE – Vehicle
\item PE – Personnel
\item EN – Environmental
\item OR - Outreach
\item BG - Budget
\end{enumerate}
\textbf{Probability (P)}
\begin{enumerate}[label=\Alph*]
\item Minimum – Almost impossible to occur
\item Low – Small chance to occur
\item Medium – Reasonable chance to occur
\item High – Quite likely to occur
\item Maximum – Certain to occur, maybe more than once
\end{enumerate}
\textbf{Severity (S)}
\begin{enumerate}
\item Negligible – Minimal or no impact
\item Significant – Leads to reduced experiment performance
\item Major – Leads to failure of subsystem or loss of flight data
\item Critical – Leads to experiment failure or creates minor health hazards
\item Catastrophic – Leads to termination of the REXUS/BEXUS programme, damage to the vehicle or injury to personnel
\end{enumerate}
The rankings for probability (P) and severity (S) were combined to assess the overall risk classification, ranging from very low to very high and being coloured green, yellow, orange or red according to the SED guidelines.
The SED guidelines were also used to determine whether a risk was acceptable or unacceptable. For acceptable risks, details of the mitigation undertaken to reduce the risk to an acceptable level are given.
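To illustrate how a probability letter and a severity digit combine into a single classification, the sketch below maps the pair onto a qualitative class. The cut-off values are illustrative assumptions only and are not taken from the SED guidelines.
\begin{verbatim}
# Toy mapping from a probability letter (A-E) and a severity digit (1-5)
# to a qualitative risk class. The cut-offs below are illustrative
# assumptions, not the official SED guideline values.
def risk_class(probability, severity):
    score = (ord(probability.upper()) - ord("A") + 1) * severity  # 1..25
    if score <= 4:
        return "Very Low"
    if score <= 9:
        return "Low"
    if score <= 14:
        return "Medium"
    if score <= 19:
        return "High"
    return "Very High"

print(risk_class("B", 4))  # "Low" under these assumed cut-offs
\end{verbatim}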
\begin{landscape}
\begin{longtable}{|m{0.075\textwidth}| m{0.48\textwidth} |m{0.02\textwidth} |m{0.02\textwidth}|m{0.10\textwidth}| m{0.64\textwidth}|}
\hline
\textbf{ID} & \textbf{Risk (\& consequence if)} & \textbf{P} & \textbf{S} & \textbf{P * S} & \textbf{Action} \\ \hline
TC10 & Software fails to store data & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Extensive testing has been done. Using telemetry, all data gathered from sensors will be sent to the ground station. \\ \hline
TC20 & Failure of several sensors & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Thermal test (Test Number 5) verified the functionality of the experiment. \\ \hline
TC30 & Critical component is destroyed in testing & B & 1 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Spare components can be ordered, but expensive ones were ordered and tested early in the project in case more needed to be ordered. \\ \hline
TC40 & Electrical connections dislodge or short circuit because of vibration or shock & B & 4 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: D-sub connections were screwed in place. It was ensured that there were no loose connections, and zip ties were used to help keep wires in place. Careful soldering and extensive testing were applied. \\ \hline
TC50 & Experiment electronics fail due to long exposure to cold or warm temperatures & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Thermomechanical and thermoelectrical solutions were simulated and tested in detail to help prevent this from happening. \\ \hline
TC60 & Software and electrical fail to control heaters causing temperature to drop or rise below or above operational range & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Tests were performed prior to the flight to detect and minimize the risk of occurrence. The system was monitored during flight and could be handled manually if necessary. \\ \hline
TC70 & Software fails to enter safe mode (may result in loss of data) & B & 1 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Extensive testing was done. \\ \hline
TC80 & On-board memory will be full (flight time longer than expected) & A & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: The experiment went through testing and analysis to guarantee the onboard memory size was sufficient.\\ \hline
TC90 & Connection loss with ground station & A & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Experiment was designed to operate autonomously. \\ \hline
TC100 & Software fails to control valves autonomously & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Extensive testing was done. Telecommand could also be used to manually control the valves. \\ \hline
TC110 & Software fails to change modes autonomously & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Extensive testing was done. Telecommand could also be used to manually change experiment modes. \\ \hline
TC120 & Complete software failure & B & 4 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Long duration testing (bench test) was performed to catch failures early. \\ \hline
TC130 & Failure of fast recovery system & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Clear and simple instructions were given to the recovery team. A test took place before launch to ensure someone unfamiliar with the experiment could remove the CAC box. Test number: 12. \\ \hline
TC140 & The gas analyzer isn't correctly calibrated and returns inaccurate results & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The gas analyzer was calibrated before use.\\ \hline
TC150 & Partnership with FMI does not materialize, resulting in loss of access to CAC coiled tube. & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Signed agreement has been obtained. AAC sample analysis results can be validated against available historical data from past FMI CAC flights. \\ \hline
MS10 & Down link connection is lost prematurely & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Data was also saved on an SD card. \\ \hline
MS20 & Condensation on experiment PCBs which could cause short circuits & A & 3 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: The Brain was sealed to prevent condensation. \\ \hline
MS30 & Temperature sensitive components that are essential to fulfil the mission objective might be below their operating temperature. & C & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Safe mode prevents the components from operating outside their operating temperature range. \\ \hline
MS40 & Experiment lands in water causing electronics failure & B & 1 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: All the necessary data was downloaded during the flight. \\ \hline
MS50 & Interference from other experiments and/or balloon & A & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: no action. \\ \hline
MS60 & Balloon power failure & B & 2 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The valves' default state was closed, so if all power were lost the valves would automatically close, preserving all samples collected up until that point. \\ \hline
MS70 & Sampling bags disconnect & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The affected bags could not collect samples. The connection between the spout of the bags and the T-union was double checked before flight. The system has passed vibration testing with no disconnects. \\ \hline
MS71 & Sampling bags puncture & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The affected bags could not collect samples. Inner Styrofoam walls have been chosen and no sharp edges were exposed to avoid puncture from external elements. \\ \hline
MS72 & Sampling bags' hold time is typically 48h & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable risk: Validation studies have demonstrated acceptable stability for up to 48 hours. \\ \hline
MS80 & Pump failure & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: A pump was chosen based on a previous similar experiment. The pump has also been tested in a low pressure chamber down to 10 hPa and has successfully turned on and filled a sampling bag. The CAC subsystem is not reliant on the pump and would therefore still operate even in the event of pump failure. \\ \hline
MS90 & Intake pipe blocked by external element & C & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The bags would not be filled and thus the AAC system would fail. An air filter was placed in both the intake and outlet of the pipe to prevent this. \\ \hline
MS100 & Expansion/Contraction of insulation & B & 2 &\cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: The insulation selected has flown successfully on similar flights in the past. A test was done to see how it reacts in a low pressure environment. \\ \hline
MS110 & Sampling bags are over-filled resulting in bursting and loss of collected samples. & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: A test was performed at target ambient pressure levels to identify how long the pump needs to fill the sampling bags. A static pressure sensor on board monitored the in-bag pressure during sampling so that no bag would ever be over-pressured. In addition, an airflow rate sensor monitored the flow rate and a timer was started when a bag valve was opened. The sampling would stop when either the maximum allowed pressure or the maximum allowed time was reached. \\ \hline
SF10 & Safety risk due to pressurized vessels during recovery. & A & 1 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: The volume of air in the AAC decreases during descent because the pressure inside is lower than outside. The CAC is sealed at nearly sea level pressure, therefore there is only a small pressure difference. \\ \hline
SF20 & Safety risk due to the use of chemicals such as magnesium perchlorate. & A & 4 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: The magnesium perchlorate was kept in a sealed container or filter at all times. Magnesium perchlorate filters are made of stainless steel, which has high durability, and have been used before without any sealing problems. \\ \hline
VE10 & SD-card is destroyed at impact & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: All data \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend transmitted to the ground. Most of the data is the gas stored in the AAC and CAC. \\ \hline
VE20 & Gondola Fixing Interface & B & 4 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: The experiment box could detach from the gondola’s rails and the two boxes could detach one from the other. The experiment was secured to the gondola, and the boxes to each other, with multiple fixings. These \DIFdelbegin \DIFdel{will also be }\DIFdelend \DIFaddbegin \DIFadd{were also }\DIFaddend tested. \\ \hline
VE30 & Structure damage due to bad landing & B & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Landing directly on a hard element could break the structure or the protective walls. A robust design was implemented to prevent this. \\ \hline
VE40 & Hard landing damages the CAC equipment & C & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Structural analysis was done and a wall consisting of an aluminum sheet and Styrofoam was chosen to dampen the landing. \\ \hline
VE50 & Hard landing damages the AAC equipment & C & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Structural analysis was done and a wall consisting of an aluminum sheet and Styrofoam was chosen to dampen the landing. \\ \hline
EN10 & Vibrations from pump affect samples & C & 1 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Vibrations do not affect the sampled air. No action required. \\ \hline
EN20 & The air samples must be protected from direct sunlight and stored above 0$\degree{C}$ to prevent condensation & C & 3 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Stratospheric air is generally dry and water vapor concentrations are higher closer to the surface. In addition magnesium perchlorate dryers \DIFdelbegin \DIFdel{will be used to minimizing }\DIFdelend \DIFaddbegin \DIFadd{were used to minimize }\DIFaddend the risk of condensation. \\ \hline
PE10 & Change in Project Manager after the CDR introduces a gap of knowledge in management responsibilities. & E & 1 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: A Deputy Project Manager \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend selected at an early stage and \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend progressively handed over project management tasks and responsibilities until complete handover after the CDR. The previous Project Manager remotely \DIFdelbegin \DIFdel{assists }\DIFdelend \DIFaddbegin \DIFadd{assisted }\DIFaddend the new Project Manager until the end of the project. The Deputy Project Manager \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend also part of the Electrical Division, so a new team member was added to that division in order to compensate for the Deputy Project Manager's reduced bandwidth to work on Electrical Division tasks once she was appointed Project Manager.\\ \hline
PE20 & Team members from the same division are unavailable during the same period over the summer. & C & 2 & \cellcolor[HTML]{FCFF2F}Low & Acceptable Risk: Summer travel schedules have been coordinated among team members so that there is at least one member from each division available during the summer. \\ \hline
PE30 & No one from management is available to oversee the work for a reasonable period. & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Management summer travel schedules have been planned to fit around known deadlines. There \DIFdelbegin \DIFdel{will always be }\DIFdelend \DIFaddbegin \DIFadd{was always }\DIFaddend at least one member from management available via phone at all times. All team members \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend made aware of which members would be available at what times so work \DIFdelbegin \DIFdel{can }\DIFdelend \DIFaddbegin \DIFadd{could }\DIFaddend be planned accordingly. \\ \hline
PE40 & Miscommunication between team members results in work being incomplete or inaccurate & B & 2 & \cellcolor[HTML]{34FF34}Very Low & Acceptable Risk: Whatsapp, Asana and Email \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend used in combination to ensure that all team members were up to date with the most current information. \\ \hline
% & & & & &\\ \hline
%\end{tabular}
\caption{Risk Register.}
\label{tab:risk-register}
\end{longtable}
\raggedbottom
\end{landscape}
\pagebreak
\section{Experiment Design}
\subsection{Experiment Setup} \label{Experiment_Setup}
\DIFdelbegin %DIFDELCMD <
%DIFDELCMD < %%%
\DIFdel{The experiment consists }\DIFdelend \DIFaddbegin \DIFadd{The experiment consisted }\DIFaddend of the AAC subsystem, with six sampling bags, and the CAC coiled tube subsystem. Shown in Figure {\ref{fig:3D_tubular_render}}, the AirCore \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend fitted into the CAC box, and the alternative sampling system with bags in the AAC box, together with the pneumatic system and the electronics placed inside the \emph{Brain}. The principal aim \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend to validate the AAC sampling method. To do so, it \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend necessary to sample during Descent Phase in order to compare the results with the ones obtained from the CAC. This \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend because the CAC \DIFdelbegin \DIFdel{collects }\DIFdelend \DIFaddbegin \DIFadd{collected }\DIFaddend its air sample passively by pressure differentials in the descent. Flight speeds mentioned in this section \DIFdelbegin \DIFdel{have been }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend obtained from the BEXUS manual as well as through analysis of past flights. Figure \ref{fig:block-diagram} shows a generic block diagram of the main subsystems interconnection.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{4-experiment-design/img/Mechanical/tubular_render_labels.jpg}
\end{align*}
\caption{Physical Setup of the Experiment.}
\label{fig:3D_tubular_render}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{4-experiment-design/img/Mechanical/Block-Diagram.png}
\end{align*}
\caption{Block Diagram of the Experiment.}
\label{fig:block-diagram}
\end{figure}
The primary concern regarding the AAC air sampling subsystem \DIFdelbegin \DIFdel{occurs }\DIFdelend \DIFaddbegin \DIFadd{occurred }\DIFaddend after the cut-off \DIFdelbegin \DIFdel{when the gondola will tumble and fall }\DIFdelend \DIFaddbegin \DIFadd{while the gondola was tumbling and falling }\DIFaddend at an average speed of 50 m/s for approximately two minutes \cite{BexusManual}. This descent speed \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend too large to sample air at the desired vertical resolution, capped at 500 m. As such, sampling \DIFdelbegin \DIFdel{can }\DIFdelend \DIFaddbegin \DIFadd{could }\DIFaddend only be done after the gondola \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend stabilized at a descent speed of 8 m/s \cite{BexusManual}. The tumbling phase \DIFdelbegin \DIFdel{will vertically span for approximately 6 km. Considering }\DIFdelend \DIFaddbegin \DIFadd{vertically spanned approximately 8 km. With }\DIFaddend a Float Phase altitude of \DIFdelbegin \DIFdel{25 }\DIFdelend \DIFaddbegin \DIFadd{approximately 27.3 }\DIFaddend km, sampling during the Descent Phase \DIFdelbegin \DIFdel{will commence }\DIFdelend \DIFaddbegin \DIFadd{would have commenced }\DIFaddend at approximately 19 km in altitude. However, the primary region of interest in terms of sampling \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend in the stratosphere, particularly between 19 km and \DIFdelbegin \DIFdel{25 }\DIFdelend \DIFaddbegin \DIFadd{27.3 }\DIFaddend km in altitude. \DIFdelbegin \DIFdel{Sampling will thus }\DIFdelend \DIFaddbegin \DIFadd{This was why sampling was planned to }\DIFaddend also occur during the Ascent Phase. Out of the six sampling bags present in the payload, two \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{were planned to }\DIFaddend be used during the Ascent Phase at 18 km and 21 km and four during the Descent Phase at 17.5 km, 16 km, 14 km and 12 km, as seen in Table \ref{tab:minimum-volume}. Details regarding the sampling strategy can be found in Appendix \ref{sec:appH}.
%\input{4-experiment-design/tables/samplingaltitudes.tex}
The maximum pressure that the sampling bags \DIFdelbegin \DIFdel{can withstand has }\DIFdelend \DIFaddbegin \DIFadd{could withstand had }\DIFaddend to be taken into account in order to avoid bursting. Decreasing pressure during the Ascent Phase \DIFdelbegin \DIFdel{poses }\DIFdelend \DIFaddbegin \DIFadd{would have posed }\DIFaddend a risk to sampling bags which already \DIFdelbegin \DIFdel{contain }\DIFdelend \DIFaddbegin \DIFadd{contained }\DIFaddend samples, as the gas inside \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{would }\DIFaddend expand, which could cause the bag to burst. In order to avoid this, the sampling bags \DIFdelbegin \DIFdel{will not }\DIFdelend \DIFaddbegin \DIFadd{were not planned to }\DIFaddend be completely filled. Filling the \DIFaddbegin \DIFadd{sampling }\DIFaddend bags up to a maximum pressure of 2 psi/0.14 bar/140 hPa or, alternatively, filling the sampling bag up to 80\% of its capacity \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend recommended by the manufacturers for the Multi-Layer Foil sampling bags that \DIFdelbegin \DIFdel{are to be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend used. Therefore, the \DIFdelbegin \DIFdel{maximum expected }\DIFdelend \DIFaddbegin \DIFadd{expected maximum }\DIFaddend pressure inside the bags that \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend filled during the Ascent Phase \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{would }\DIFaddend be 1.6 psi/0.11 bar/110 hPa. The inverse \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend also true for the Descent Phase, where compression \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{would }\DIFaddend occur. As such, the sampling bags \DIFdelbegin \DIFdel{should }\DIFdelend \DIFaddbegin \DIFadd{had to }\DIFaddend be fully filled during the Descent Phase in order to ensure that enough samples \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend collected for analysis. During the Descent Phase, the maximum pressure inside the bags \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend expected to be 1.98 psi/0.13 bar/130 hPa. Past research \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend revealed that the selected sampling bags \DIFdelbegin \DIFdel{can }\DIFdelend \DIFaddbegin \DIFadd{were able to }\DIFaddend withstand a pressure difference of 310 hPa at 30 km of altitude, which \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend equivalent to 0.31 bar \cite{LISA}. Tests 16 and 18, shown in Tables \ref{tab:sampling-system-test} and \ref{tab:pump-low-pressure-test} respectively, \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend conducted in order to confirm the maximum allowable pressure for the bags.
The maximum operating pressure for the tubes, according to the manufacturers, \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend 2.2 psi/0.15 bar/150 hPa. The valve's leakage rate, given by the manufacturers, \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend 0.001 l/min.
Due to the difference in pressure between sea level and the sampling altitudes, the volume of the sample taken \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{would have been }\DIFaddend considerably reduced when it \DIFdelbegin \DIFdel{reaches }\DIFdelend \DIFaddbegin \DIFadd{reached }\DIFaddend sea level. This shrinking \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend to be taken into account when determining the minimum volume that \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend to be present in the sampling bag at sea level in order to obtain results with the Picarro analyzer. A minimum amount \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend required for the analyzer to detect concentrations of the targeted trace gases. This minimum amount \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend 0.18 L at sea level and it \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend to be especially considered for the samples taken at higher altitudes. The samples taken at lower altitudes \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend exposed to smaller changes in pressure; therefore, their size \DIFdelbegin \DIFdel{will not be }\DIFdelend \DIFaddbegin \DIFadd{was not }\DIFaddend critically reduced. Table \ref{tab:minimum-volume} shows the minimum volume of air that \DIFdelbegin \DIFdel{needs }\DIFdelend \DIFaddbegin \DIFadd{needed }\DIFaddend to be sampled at different altitudes in order to assure a minimum air sample of 0.18 L at sea level. \\
This \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend the worst case scenario, and testing \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend shown that the higher the volume of the air sample left at sea level, the better the results. This \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend why the aimed volume of the samples at sea level \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend at least 0.6 L.
%and the corresponding temperature and pressure conditions
%pressure and temperature (288 K)
% Depending on the sampling altitude,there is a minimum volume of air that needs to be sampled in order the sample volume left at sea level pressure is at least 0.18 L. A sample volume of 0.18 L corresponds to the minimum amount required for the Picarro analyzer to detect concentrations of the targeted trace gases.
% Please add the following required packages to your document preamble:
% \usepackage{multirow}
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
& \textbf{\begin{tabular}[c]{@{}l@{}}Minimum \\ Sampling Volume\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Sampling \\ Altitudes\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Ambient \\ Pressure\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Ambient \\ Temperature\end{tabular}} \\ \hline
\multirow{2}{*}{\textbf{Ascent Phase}} & 1.8 L & 18 km & 75.0 hPa & 216.7 K \\ \cline{2-5}
& 2.4 L & 21 km & 46.8 hPa & 217.6 K \\ \hline
\multirow{4}{*}{\textbf{Descent Phase}} & 1.7 L & 17.5 km & 81.2 hPa & 216.7 K \\ \cline{2-5}
& 1.3 L & 16 km & 102.9 hPa & 216.7 K \\ \cline{2-5}
& 1.0 L & 14 km & 141.0 hPa & 216.7 K \\ \cline{2-5}
& 0.7 L & 12 km & 193.3 hPa & 216.7 K \\ \hline
\end{tabular}
\caption{Sampling Altitudes as well as the Corresponding Ambient Pressures and Temperatures According to the 1976 US Standard Atmosphere and the Minimum Sampling Volume at Each Altitude to Obtain Enough Air to Perform a Proper Analysis (0.18 L at Sea Level), Appendix \ref{sec:appH}.}
\label{tab:minimum-volume}
\end{table}
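The minimum sampling volumes in Table \ref{tab:minimum-volume} follow from the ideal gas law: the amount of air that still occupies 0.18 L at sea level conditions has to be collected at the much lower ambient pressure of the sampling altitude. The short Python sketch below illustrates this conversion. The sea level reference values (1013.25 hPa, 288.15 K) are assumptions based on the 1976 US Standard Atmosphere, and the rounded results may therefore differ slightly from the values listed in the table.
\begin{verbatim}
# Minimal sketch: minimum volume to sample at altitude so that the sample
# still occupies at least 0.18 L at sea level conditions (ideal gas law).
# Sea level reference values are assumed from the 1976 US Standard Atmosphere.

P_SEA, T_SEA = 1013.25, 288.15   # hPa, K (assumed reference conditions)
V_MIN_SEA = 0.18                 # L, minimum volume required by the analyzer

# (altitude label, ambient pressure [hPa], ambient temperature [K])
sampling_points = [
    ("18 km (ascent)",    75.0, 216.7),
    ("21 km (ascent)",    46.8, 217.6),
    ("17.5 km (descent)", 81.2, 216.7),
    ("16 km (descent)",  102.9, 216.7),
    ("14 km (descent)",  141.0, 216.7),
    ("12 km (descent)",  193.3, 216.7),
]

for label, p_amb, t_amb in sampling_points:
    # p*V/T is conserved for a fixed amount of gas (ideal gas law)
    v_min_alt = V_MIN_SEA * (P_SEA / p_amb) * (t_amb / T_SEA)
    print(f"{label}: sample at least {v_min_alt:.1f} L")
\end{verbatim}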
The AAC \DIFdelbegin \DIFdel{will need }\DIFdelend \DIFaddbegin \DIFadd{needed }\DIFaddend an air pump for sampling due to the low ambient pressure at stratospheric altitudes. The air pump \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend also needed in order to assure the intake flow rate and obtain a good resolution. An air pump with an intake rate of at least 3 L/min \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend used to ensure that the vertical resolution of the sampled air \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{remained }\DIFaddend under 500 m at the Ascent Phase's ascent speed of 5 m/s and the Descent Phase's descent speed of 8 m/s. A flushing valve (see Figure \ref{pneumatic_system}, No.23) \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend used to flush the AAC system before each bag \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend filled, to make sure that each bag \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{would be }\DIFaddend filled with fresh air from the corresponding altitude. This filling/flushing procedure \DIFdelbegin \DIFdel{occurs }\DIFdelend \DIFaddbegin \DIFadd{was planned to occur }\DIFaddend twice, the first time during the Ascent Phase for the first two sampling bags and the second time during the Descent Phase for the remaining four sampling bags.
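As a rough check of why a 3 L/min intake rate is sufficient, the Python sketch below estimates the vertical distance travelled while a single bag is being filled, assuming a constant pump rate and a constant vertical speed. The volumes used are the worst-case minimum sampling volumes from Table \ref{tab:minimum-volume}; the figures are indicative only.
\begin{verbatim}
# Minimal sketch: vertical distance covered while filling one sampling bag,
# assuming a constant pump intake rate and a constant vertical speed.

PUMP_RATE_L_PER_MIN = 3.0   # assumed minimum intake rate of the pump

def vertical_resolution(volume_l, vertical_speed_m_s,
                        pump_rate_l_per_min=PUMP_RATE_L_PER_MIN):
    fill_time_s = 60.0 * volume_l / pump_rate_l_per_min
    return vertical_speed_m_s * fill_time_s

# Worst cases from the sampling plan (volumes in litres, speeds in m/s)
print(vertical_resolution(2.4, 5.0))   # ascent bag at 21 km   -> 240 m
print(vertical_resolution(1.7, 8.0))   # descent bag at 17.5 km -> 272 m
\end{verbatim}
Both worst-case estimates stay below the 500 m resolution requirement, which is consistent with the choice of pump.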
Shortly after the launch, the CAC valve \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend opened in order to allow the fill gas that \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend inside the tube to flush, while the AAC valves \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend closed until reaching the sampling altitude. Flushing of the CAC tube \DIFdelbegin \DIFdel{happens }\DIFdelend \DIFaddbegin \DIFadd{happened }\DIFaddend passively through the progressive decrease in air pressure during the balloon's Ascent Phase and it \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend emptied by the time it \DIFdelbegin \DIFdel{reaches }\DIFdelend \DIFaddbegin \DIFadd{reached }\DIFaddend the Float Phase. Filling of the CAC tube also \DIFdelbegin \DIFdel{happens }\DIFdelend \DIFaddbegin \DIFadd{happened }\DIFaddend passively through the progressive increase in air pressure during the balloon's Descent Phase. The CAC valve \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{was planned to }\DIFaddend remain open at all times during the Ascent, Float, and Descent phases. \DIFdelbegin \DIFdel{The valve will close }\DIFdelend \DIFaddbegin \DIFadd{Due to some problems, it was briefly closed and reopened a few times, without compromising the results. The valve should have been closed }\DIFaddend just before hitting the ground in order to preserve the sample.
The ambient pressure \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend measured by three pressure sensors located outside the experiment box. Only one of them \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend necessary for AAC and CAC, but using three \DIFdelbegin \DIFdel{will provide }\DIFdelend \DIFaddbegin \DIFadd{provided }\DIFaddend redundancy. To measure the pressure inside the bag that \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend currently being filled, one analogue static pressure sensor \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend connected to the pneumatic system. To measure the ambient temperature in the CAC, three sensors \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend allocated in the CAC box (in the Styrofoam). Temperature inside the coil \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend assumed to quickly adjust to the ambient temperature inside the CAC box, therefore there \DIFdelbegin \DIFdel{will }\DIFdelend \DIFaddbegin \DIFadd{would }\DIFaddend not be any difference in temperature between the air inside the tube and the air surrounding the tube. For the bags, three more temperature sensors \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend placed in the bags' box (in the Styrofoam). To control the temperature for the pump and the valves in the pneumatic subsystem, one temperature sensor \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend used for each of them. In total, there \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend three pressure sensors and eight temperature sensors.
The sampling of the AAC \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend triggered by the pressure reading from the sensors outside the experiment box. When the required pressure \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend reached, as seen in Table \ref{tab:minimum-volume}, the valve inside the manifold corresponding to the bag that \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend to be sampled \DIFdelbegin \DIFdel{will open }\DIFdelend \DIFaddbegin \DIFadd{should have opened }\DIFaddend and the sampling \DIFdelbegin \DIFdel{will start}\DIFdelend \DIFaddbegin \DIFadd{should have started}\DIFaddend . The closing of the valve \DIFdelbegin \DIFdel{depends }\DIFdelend \DIFaddbegin \DIFadd{depended }\DIFaddend on two conditions and it \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend triggered when either one of the conditions \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend true. These conditions \DIFdelbegin \DIFdel{are}\DIFdelend \DIFaddbegin \DIFadd{were}\DIFaddend : maximum sampling time or maximum pressure difference between the inside and outside of the bags. They \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend determined from past research \cite{LISA}. A first estimation of the maximum sampling time \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend already been made, from Test 18 shown in Table \ref{tab:pump-low-pressure-test}. Through completed tests, such as Test 14 and Test 18 shown in Tables \ref{tab:vacuum-test} and \ref{tab:pump-low-pressure-test} respectively, the maximum pressure condition \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend been determined and the maximum sampling times \DIFdelbegin \DIFdel{have }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend been confirmed.
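The valve control logic described above can be summarised as follows: a bag valve is opened when the ambient pressure reaches the trigger level for that bag, and it is closed again as soon as either the maximum sampling time or the maximum in-bag/ambient pressure difference is reached. The Python sketch below illustrates this logic for an ascent bag, where the trigger level is reached as the ambient pressure falls. The threshold values and the sensor/valve functions are placeholders, not the flight configuration, and the actual on-board software differs.
\begin{verbatim}
import time

# Placeholder thresholds, not the flight values
MAX_SAMPLING_TIME_S = 60.0      # determined from pump tests
MAX_PRESSURE_DIFF_HPA = 110.0   # kept below the bags' rated limit

def sample_bag(read_ambient_hpa, read_bag_hpa, open_valve, close_valve,
               trigger_hpa):
    """Fill one bag: open on the pressure trigger, close on time or pressure."""
    # Ascent case: wait until the ambient pressure has fallen to the trigger
    while read_ambient_hpa() > trigger_hpa:
        time.sleep(1.0)
    open_valve()
    start = time.monotonic()
    while True:
        too_long = time.monotonic() - start >= MAX_SAMPLING_TIME_S
        too_full = read_bag_hpa() - read_ambient_hpa() >= MAX_PRESSURE_DIFF_HPA
        if too_long or too_full:
            break
        time.sleep(0.1)
    close_valve()
\end{verbatim}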
The CAC emptying as well as the AAC and CAC sampling sequence is represented in Figures \ref{fig:ascent} and \ref{fig:descent}. It should be kept in mind that the different pressures \DIFdelbegin \DIFdel{are what triggers }\DIFdelend \DIFaddbegin \DIFadd{were what should have triggered }\DIFaddend the opening of the valves.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{4-experiment-design/img/ascent-phase.jpeg}
\end{align*}
\caption{The Emptying and Sampling Sequence-Ascent Phase.}
\label{fig:ascent}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{4-experiment-design/img/descent-phase.jpeg}
\end{align*}
\caption{The Emptying and Sampling Sequence-Descent Phase.\label{fig:descent}}
\end{figure}
In the diagrams, 0 denotes closed/off and 1 denotes opened/on. The horizontal axis denotes the different pressure levels throughout the flight, with p$_0$ being the sea level pressure and p$_8$ being the pressure during Float Phase.
The ambient-pressure-dependent timeline of the experiment \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was planned to be }\DIFaddend as follows:
\textbf{Ascent Phase:}\\
$p_0$ – $p_1$
\begin{itemize}
\item CAC valve shall be closed.
\item AAC valves shall be closed.
\end{itemize}
$p_1$ – $p_2$
\begin{itemize}
\item CAC valve shall be opened.
\item CAC tube shall start flushing.
\end{itemize}
$p_2$ – $p_3$
\begin{itemize}
\item AAC flushing valve shall be opened, allowing for the system to flush.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_3$ – $p_4$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 1 shall be opened, allowing for air to enter the first bag.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_4$ – $p_5$
\begin{itemize}
\item Valve 1 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_5$ - $p_6$
\begin{itemize}
\item AAC flushing valve shall be opened, allowing the system to flush.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_6$ - $p_7$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 2 shall be opened, allowing for air to enter the second bag.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_7$ - $p_8$
\begin{itemize}
\item Valve 2 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC shall finish flushing.
\end{itemize}
\textbf{\\Float Phase:}\\
No action \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend taken other than continued telemetry.
\textbf{Descent Phase:}
$p_9$ – $p_{10}$
\begin{itemize}
\item CAC shall start sampling.
\item AAC valves shall be closed.
\end{itemize}
$p_{10}$ – $p_{11}$
\begin{itemize}
\item AAC flushing valve shall be opened allowing the system to flush.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{11}$ – $p_{12}$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 3 shall be opened, allowing for air to enter the third bag.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{12}$ – $p_{13}$
\begin{itemize}
\item Valve 3 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{13}$ – $p_{14}$
\begin{itemize}
\item AAC flushing valve shall be opened allowing the system to flush.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{14}$ – $p_{15}$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 4 shall be opened, allowing for air to enter the fourth bag.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{15}$ – $p_{16}$
\begin{itemize}
\item Valve 4 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{16}$ – $p_{17}$
\begin{itemize}
\item AAC flushing valve shall be opened, allowing the system to flush.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{17}$ – $p_{18}$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 5 shall be opened, allowing for air to enter the fifth bag.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{18}$ – $p_{19}$
\begin{itemize}
\item Valve 5 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC valve \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{should remain }\DIFaddend open.
\end{itemize}
$p_{19}$ – $p_{20}$
\begin{itemize}
\item AAC flushing valve shall be opened, allowing the system to flush.
\item CAC \DIFdelbegin \DIFdel{remains }\DIFdelend \DIFaddbegin \DIFadd{valve should remain }\DIFaddend open.
\end{itemize}
$p_{20}$ – $p_{21}$
\begin{itemize}
\item AAC flushing valve shall be closed.
\item Valve 6 shall be opened, allowing for air to enter the sixth bag.
\DIFaddbegin \item \DIFadd{CAC valve should remain open.
}\DIFaddend \end{itemize}
$p_{pre-landing}$
\begin{itemize}
\item Valve 6 shall be closed.
\item AAC flushing valve shall be closed.
\item CAC valve shall be opened.
\end{itemize}
$p_{0-landing}$
\begin{itemize}
\item CAC valve shall be closed.
\end{itemize}
Note: The AAC system's air pump was only on during sampling into the air sampling bags and during flushing of the system.
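For reference, the Ascent Phase portion of the sequence above can be written compactly as a schedule that maps each pressure interval to the intended valve states, as in the Python sketch below. This is purely an illustration of how such a schedule can be represented; it is not the flight software, and the interval labels simply follow the p$_1$--p$_8$ notation of the figures.
\begin{verbatim}
# Minimal sketch: the Ascent Phase valve schedule as a lookup table.
# 1 = open, 0 = closed; "bag" gives the bag valve that samples, if any.

ascent_schedule = {
    "p0-p1": {"cac": 0, "flush": 0, "bag": None},
    "p1-p2": {"cac": 1, "flush": 0, "bag": None},  # CAC starts flushing
    "p2-p3": {"cac": 1, "flush": 1, "bag": None},  # AAC system flushes
    "p3-p4": {"cac": 1, "flush": 0, "bag": 1},     # bag 1 samples
    "p4-p5": {"cac": 1, "flush": 0, "bag": None},
    "p5-p6": {"cac": 1, "flush": 1, "bag": None},  # AAC system flushes
    "p6-p7": {"cac": 1, "flush": 0, "bag": 2},     # bag 2 samples
    "p7-p8": {"cac": 1, "flush": 0, "bag": None},  # CAC finishes flushing
}

for interval, state in ascent_schedule.items():
    print(interval, state)
\end{verbatim}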
\raggedbottom
\pagebreak
\subsection{Experiment Interfaces}
\subsubsection{Mechanical Interfaces}
\label{sec:4.2.1}
\bigskip
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Component} & \textbf{Interface} & \textbf{Amount} & \textbf{Dimensions} & \begin{tabular}[c]{@{}c@{}}\textbf{Total}\\ \textbf{weight}\end{tabular} \\ \hline
Bracket standard 20/20 slot 6/6 & AAC-Gondola & $8$ & $20 \times 20 \times 20\ mm$ & $40\ g$ \\ \hline
Tolerance holes bracket & CAC-Gondola & $2$ & $ 20 \times 30 \times 52 \ mm$ & $50\ g$ \\ \hline
4-hole plate & AAC-CAC & $6$ & $1 \times 60 \times 45\ mm$ & $100\ g$ \\ \hline
Rubber bumpers M6 & AAC-Gondola, CAC-Gondola & $10$ & $19 \times 19 \times 15\ mm$ & $300\ g$ \\ \hline
T-nut slot 6 M4 & AAC-CAC, AAC-Gondola, CAC-Gondola & $44$ & $4 \times 5.9 \times 11.5\ mm$ & $132\ g$ \\ \hline
T-nut slot 8 M6 & AAC-Gondola, CAC-Gondola & $10$ & $6 \times 11 \times 16\ mm$ & $60\ g$ \\ \hline
Steel bolt M4 & AAC-CAC, AAC-Gondola & $44$ & $8\ mm $ length & $34\ g$ \\ \hline
Steel washer M4 & AAC-CAC, AAC-Gondola & $24$ &\begin{tabular}[c]{@{}c@{}}$ID=4.3\ mm$\\ $OD=9\ mm$\end{tabular} & $4.8\ g$\\ \hline
% Polyamide bolt M4 & Styrofoam-CAC-AAC & $8$ & $20\ mm $ length & $3\ g$ & $24\ g$ \\ \hline
% Polyamide washer M4 & Styrofoam-CAC-AAC & $8$ &\begin{tabular}[c]{@{}c@{}}$ID=4.3\ mm$\\ $OD=25\ mm$\end{tabular} & $4\ g$ & $32\ g$\\ \hline
Styrofoam bars & AAC-Gondola, CAC-Gondola & $4$ & see Appendix \ref{sec:mech_drawings} & $450\ g$ \\ \hline
Handles & CAC \& AAC & $4$ & $ 18.6 \times 25.2 \times 112.5 \ mm$ & $80\ g$ \\ \hline
\end{tabular}}}
\caption{Summary of Gondola-AAC-CAC Interfaces Components.}
\label{table:attaching-components}
\end{table}
\underline{Gondola - TUBULAR joining}
\smallskip
The experiment box \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend fixed to the gondola rails by means of $10$ brackets interfacing the experiment's outer structure with the hammer nuts in the rails. Two different types of brackets \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend used to be flexible with respect to the gondola rail distances, which may have been modified by use during previous BEXUS campaigns. Eight small $20/20$ brackets (Figure \ref{fig:bracket_small}) \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend used to fix the AAC box to a specific rail placement, and two other big brackets (Figure \ref{fig:bracket_big}) \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend used to fix the CAC box to the nearest rail. This method was secure as well as fast, providing accessible and easy recovery for later analysis.
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{.3\textwidth}
\centering\includegraphics[width=0.55\textwidth]{4-experiment-design/img/Mechanical/bracket.jpg}
\caption{Rexroth 20/20.}
\label{fig:bracket_small}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{.3\textwidth}
\centering\includegraphics[width=0.55\textwidth,angle=90] {4-experiment-design/img/Mechanical/long_hole_bracket.jpg}
\caption{Tolerance Holes.}
\label{fig:bracket_big}
\end{subfigure}}
\caption{Bracket Components.}
\label{fig:bracket}
\end{figure}
\bigskip
\underline{CAC - AAC joining}
\smallskip
A simple but reliable fixing interface between the two boxes of the experiment has been designed to ensure the fast recovery of the CAC box. The latter \DIFdelbegin \DIFdel{requires }\DIFdelend \DIFaddbegin \DIFadd{required }\DIFaddend only unscrewing 12 bolts as well as unplugging a D-Sub connector marked in RED, see Figure \ref{fig:electrical_interfaces}. Once the CAC box \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend detached, the AAC Box \DIFdelbegin \DIFdel{will still remain perfectly }\DIFdelend \DIFaddbegin \DIFadd{still remained }\DIFaddend fixed in the gondola. Table \ref{table:attaching-components} includes all the components required to fix the experiment to the gondola.
\bigskip
\underline{Handles}
\smallskip
Four top handles, as shown in Figure \ref{fig:handles}, \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend mounted to facilitate manipulation of the experiment box when moving it in and out of the gondola.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/Figure_8.png}
\caption{Handling Interfaces.}
\label{fig:handles}
\end{figure}
\bigskip
\underline{Inlet/Outlet Pipes}
\label{subsec:pipes}
\smallskip
In order to collect reliable air samples, the experiment \DIFdelbegin \DIFdel{requires }\DIFdelend \DIFaddbegin \DIFadd{was required }\DIFaddend to be mounted with at least one side exposed to the outside. This \DIFdelbegin \DIFdel{will reduce }\DIFdelend \DIFaddbegin \DIFadd{reduced }\DIFaddend the pipe length used to collect clean air. As can be seen in Figure \ref{fig:3D_tubular_render}, three pipes \DIFdelbegin \DIFdel{will extend }\DIFdelend \DIFaddbegin \DIFadd{extended }\DIFaddend from the experiment box face: one for the CAC sampling and two, input and output, for the AAC sampling.
These pipes \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend welded/drawn $304$ grade stainless steel tubes from RESTEK company, which \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend specially recommended for chromatography applications and gas delivery systems with low pressures and inert environments. These tubes \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend sulfinert, which is a required treatment for metal components when analyzing for parts-per-billion levels of organo-sulfur compounds.
The tubes, which \DIFdelbegin \DIFdel{are the same that will be }\DIFdelend \DIFaddbegin \DIFadd{were the same ones }\DIFaddend used in the pneumatic system of the \emph{Brain} (see Section \ref{sec:4.4.5}), \DIFdelbegin \DIFdel{have }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend an outer diameter $OD = 6.35\ mm$ ($1/4$ inches) and an inner diameter $ID = 4.57\ mm$ ($0.18$ inches).
\bigskip
\underline{Pump vibration}
\label{subsec:vibration}
To mitigate the vibrations produced by the pump, an extra piece of styrofoam has been added between the pump's anchor plate and the surface of level 1 of the \emph{Brain}, where this key component is fixed.
\subsubsection{Thermal Interfaces}
\label{sec:4.2.2}
Both the main structural components and the external walls of the two boxes of the experiment \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend made of aluminum and steel components. Since these \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend conductive materials, a direct attachment to the gondola would create many heat paths into the internal space and subsystems of the experiment. Considering that the temperature difference between the gondola and the operating range required by the electronic components could be quite high, these conductive connections would drastically decrease the efficiency of the thermal insulation. Therefore, a system based on rubber bumpers and styrofoam bars (see Figure \ref{fig:thermal_interface}) has been designed to remove heat bridges and minimize temperature leaks from the inside of the experiment to the outside.
Figure \ref{fig:rubber_bumper} shows a CAD model of the bumper component and how it looks when attached to the gondola with the brackets explained in the previous section.
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{.3\textwidth}
\centering\includegraphics[width=0.55\textwidth]{4-experiment-design/img/Mechanical/rubber_bumper.jpg}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{.3\textwidth}
\centering\includegraphics[width=0.55\textwidth] {4-experiment-design/img/Mechanical/real_bumper.jpg}
\end{subfigure}}
\caption{Rubber Bumper.}
\label{fig:rubber_bumper}
\end{figure}
The styrofoam bars \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend attached directly to the rails of the experiment structure by M4 plastic screws and big washers.
\begin{figure}[H]
\centering
\DIFdelbeginFL %DIFDELCMD < \includegraphics[width=0.6\textwidth]{4-experiment-design/img/Mechanical/gondola_fixation.png}
%DIFDELCMD < %%%
\DIFdelendFL \DIFaddbeginFL \includegraphics[width=0.6\textwidth]{4-experiment-design/img/Mechanical/gondola_fixation.jpg}
\DIFaddendFL \caption{Thermal Interfaces TUBULAR-Gondola.}
\label{fig:thermal_interface}
\end{figure}
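To illustrate the effect of these insulating interfaces, the Python sketch below compares the steady-state conductive heat flow through a direct aluminium attachment path with the flow through a styrofoam spacer of the same cross-section, using the one-dimensional conduction relation $Q = kA\Delta T/L$. All dimensions, conductivities and the temperature difference are hypothetical round numbers chosen only for illustration; they are not measured values of the actual interfaces.
\begin{verbatim}
# Minimal sketch: 1-D steady-state conduction Q = k * A * dT / L through the
# gondola attachment, comparing an aluminium path with a styrofoam spacer.
# All numbers below are hypothetical and chosen only to illustrate the effect.

K_ALUMINIUM = 205.0    # W/(m K), typical handbook value
K_STYROFOAM = 0.033    # W/(m K), typical handbook value

AREA = 20e-3 * 20e-3   # m^2, hypothetical 20 x 20 mm contact patch
LENGTH = 15e-3         # m, hypothetical 15 mm conduction path
DELTA_T = 60.0         # K, hypothetical inside/outside temperature difference

def conductive_heat_flow(k, area=AREA, length=LENGTH, delta_t=DELTA_T):
    return k * area * delta_t / length

print(f"aluminium path: {conductive_heat_flow(K_ALUMINIUM):.0f} W")
print(f"styrofoam path: {conductive_heat_flow(K_STYROFOAM):.2f} W")
\end{verbatim}
The orders-of-magnitude difference is why even thin non-conductive spacers are effective at breaking heat bridges.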
\subsubsection{CAC Interfaces}
An uncoupled quick connector, shown in Figure \ref{fig:Quick-connector-body}, \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend attached at each end of the coiled tube to seal the opening. It \DIFdelbegin \DIFdel{will remain }\DIFdelend \DIFaddbegin \DIFadd{remained }\DIFaddend tightly sealed until the quick connectors \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend manually coupled.
\begin{figure}[H]
\centering
\includegraphics[width=0.2\textwidth]{4-experiment-design/img/Mechanical/CAC-QC-Outlet.jpg}
\caption{Swagelok Quick Connector Body.}
\label{fig:Quick-connector-body}
\end{figure}
The interfaces between the other parts in the CAC setup \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend joined with specific tube fittings, listed in Table \ref{tab:CAC-interfaces}. All the chosen interfaces \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend from Swagelok. Using products from the same manufacturer minimizes the risk of leakage or mismatched interfaces in the system.
\begin{table}[H]
\centering
\scalebox{0.8}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Component} & \textbf{Interface} & \textbf{Amount} & \textbf{Fitting Size} \\ \hline
\begin{tabular}[c]{@{}c@{}}Quick connector body\\ SS-QC4-B-200\end{tabular} & Outlet of coiled tube & 1 & 1/8 in. \\ \hline
\begin{tabular}[c]{@{}c@{}}Quick connector body\\ SS-QC4-B-400\end{tabular} & Inlet of coiled tube & 1 & 1/4 in. \\ \hline
\begin{tabular}[c]{@{}c@{}}Quick connector stem\\ SS-QC4-D-400\end{tabular} & Inlet of coiled tube - Filter & 1 & 1/4 in. \\ \hline
\begin{tabular}[c]{@{}c@{}}Male connector\\ SS-400-1-2\end{tabular} & Tube fitting - Solenoid valve & 2 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/8 in. to \\ Tube OD 1/4 in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Straight Tube Union\\ SS-200-6\end{tabular} & Quick connector 1/8 in. - Tube 1/8 in. & 1 & 1/8 in.
\\ \hline
\begin{tabular}[c]{@{}c@{}}Tube Reducer\\ SS-400-6-2 \end{tabular} & Tube 1/8 in. - Tube 1/4 in. & 1 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/8 in. to \\ Tube OD 1/4 in.\end{tabular}
\\ \hline
\begin{tabular}[c]{@{}c@{}}Straight Tube Union\\ SS-400-6\end{tabular} & Tube 1/4 in. - 90 degree connector & 1 & 1/4 in.
\\ \hline
\begin{tabular}[c]{@{}c@{}}Union 90-degree connector\\ SS-400-9\end{tabular} &
\begin{tabular}[c]{@{}c@{}} Between certain tube fittings\\ Outlet tube \end{tabular} & 3 & 1/4 in.
\\ \hline
\begin{tabular}[c]{@{}c@{}}Tube fitting\\ SS-401-PC\end{tabular} & \begin{tabular}[c]{@{}c@{}} Between certain tube fittings\\ Magnesium dryer filter \end{tabular} & 5 & 1/4 in.
\\ \hline
\end{tabular}}
\caption{Interfaces within CAC Setup.}
\label{tab:CAC-interfaces}
\end{table}
\subsubsection{AAC Interfaces}
In the AAC system, the interfaces between the various components \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend a mixture of eleven different types of tube fittings from Swagelok. The selected types \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend straight and elbow unions, T-unions, female and male elbows, male and female connectors, tube fittings, and quick couplings with specific configurations. Some of them are shown in Figure \ref{fig:AAC-interfaces-fittings}. Information regarding the fittings' placement in the AAC and the fitting sizes is summarized in Table \ref{tab:AAC-interfaces}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/AAC-interfaces.jpg}
\caption{From Left to Right: Male Connector, Male Elbow, T-union, Straight Union and Female Connector.}
\label{fig:AAC-interfaces-fittings}
\end{figure}
\begin{table}[H]
\centering
\DIFdelbeginFL %DIFDELCMD < \scalebox{0.8}{
%DIFDELCMD < \begin{tabular}{|c|c|c|c|}
%DIFDELCMD < \hline
%DIFDELCMD < \textbf{Component} & \textbf{Interface} & \textbf{Amount} & \textbf{Fitting Size} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Male connector\\ SS-400-1-2\end{tabular} & \begin{tabular}[c]{@{}c@{}} Tube to Manifold - Flushing valve\\ Flushing valve - Outlet tube\\ Manifold valve - Tube to bag\end{tabular} & 8 & \begin{tabular}[c]{@{}c@{}}Male 1/8 in. to \\ Tube OD 1/4 in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Male connector\\ SS-400-1-4\end{tabular} & \begin{tabular}[c]{@{}c@{}} Manifold - Tube to Flushing valve\end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. to \\ Tube OD 1/4 in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Male elbow\\ SS-400-2-4\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor - Manifold \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. to \\ Tube OD 1/4 in. \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Female elbow\\ SS-400-8-4\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pump tube - Airflow sensor \\ Airflow sensor - Static pressure sensor \end{tabular} & 2 & \begin{tabular}[c]{@{}c@{}}Female 1/4 in. to \\ Tube OD 1/4 in. in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Female connector\\ SS-4-TA-7-4RG\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor - T-Union \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Female 1/4 in. and \\ Tube OD 1/4 in. in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Straight union\\ SS-400-6\end{tabular} & \begin{tabular}[c]{@{}c@{}}Filter - Tube filter\\ Tube filter - Pump\end{tabular} & 3 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/4 in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Elbow union\\ SS-400-9\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pump tube - Filter tube \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/4 in. in.\end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor \end{tabular} & 3 & Tube OD 1/4 in. \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3-4TTM\end{tabular} & Tube valve - Bag valve - Quick Connector & 5 &\begin{tabular}[c]{@{}c@{}} Male 1/4 in. and \\ 2 x Tube OD 1/4 in. \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3-4TMT\end{tabular} & Tube valve - Bag valve - Quick Connector & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. \\ 2 x Tube OD 1/4 in. \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}T-Union\\ SS-6M0-3\end{tabular} & Pump Inlet and Outlet & 2 & \begin{tabular}[c]{@{}c@{}} Tube OD 6mm \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Tube Fitting\\ SS-401-PC\end{tabular} & Filter - Pump & 1 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Tube Fitting Reducer\\ SS-400-R-6M\end{tabular} &\begin{tabular}[c]{@{}c@{}} Filter - Pump \\ Pump - Airflow sensor \end{tabular} & 2 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. to \\ Tube OD 6mm \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Tube Inserts\\ SS-6M5-4M\end{tabular} & Pump Inlet and Outlet & 2 & \begin{tabular}[c]{@{}c@{}} OD 6mm - ID 4mm \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Tube Fitting Female \\ SS-4-TA-7-4RG\end{tabular} &\begin{tabular}[c]{@{}c@{}} Static pressure sensor \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. to \\ female 1/4" \end{tabular} \\ \hline
%DIFDELCMD < \begin{tabular}[c]{@{}c@{}}Quick Coupling \\ SS-QC4-B-4PF\end{tabular} &\begin{tabular}[c]{@{}c@{}} T-Union of bags \end{tabular} & 6 & \begin{tabular}[c]{@{}c@{}} SS female 1/4" \end{tabular} \\ \hline
%DIFDELCMD < \end{tabular}}
%DIFDELCMD < %%%
\DIFdelendFL \DIFaddbeginFL \scalebox{0.8}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Component} & \textbf{Interface} & \textbf{Amount} & \textbf{Fitting Size} \\ \hline
\begin{tabular}[c]{@{}c@{}}Male connector\\ SS-400-1-2\end{tabular} & \begin{tabular}[c]{@{}c@{}} Tube to Manifold - Flushing valve\\ Flushing valve - Outlet tube\\ Manifold valve - Tube to bag\end{tabular} & 8 & \begin{tabular}[c]{@{}c@{}}Male 1/8 in. to \\ Tube OD 1/4 in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Male connector\\ SS-400-1-4\end{tabular} & \begin{tabular}[c]{@{}c@{}} Manifold - Tube to Flushing valve\end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. to \\ Tube OD 1/4 in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Male elbow\\ SS-400-2-4\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor - Manifold \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. to \\ Tube OD 1/4 in. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Female elbow\\ SS-400-8-4\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pump tube - Airflow sensor \\ Airflow sensor - Static pressure sensor \end{tabular} & 2 & \begin{tabular}[c]{@{}c@{}}Female 1/4 in. to \\ Tube OD 1/4 in. in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Female connector\\ SS-4-TA-7-4RG\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor - T-Union \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Female 1/4 in. and \\ Tube OD 1/4 in. in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Straight union\\ SS-400-6\end{tabular} & \begin{tabular}[c]{@{}c@{}}Filter - Tube filter\\ Tube filter - Pump\end{tabular} & 3 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/4 in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Elbow union\\ SS-400-9\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pump tube - Filter tube \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}}Tube OD 1/4 in. in.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3\end{tabular} & \begin{tabular}[c]{@{}c@{}}Static pressure sensor \end{tabular} & 3 & Tube OD 1/4 in. \\ \hline
\begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3-4TTM\end{tabular} & Tube valve - Bag valve - Quick Connector & 5 &\begin{tabular}[c]{@{}c@{}} Male 1/4 in. and \\ 2 x Tube OD 1/4 in. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}T-Union\\ SS-400-3-4TMT\end{tabular} & Tube valve - Bag valve - Quick Connector & 1 & \begin{tabular}[c]{@{}c@{}}Male 1/4 in. \\ 2 x Tube OD 1/4 in. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}T-Union\\ SS-6M0-3\end{tabular} & Pump Inlet and Outlet & 2 & \begin{tabular}[c]{@{}c@{}} Tube OD 6mm \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Tube Fitting\\ SS-401-PC\end{tabular} & Filter - Pump & 1 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Tube Fitting Reducer\\ SS-400-R-6M\end{tabular} &\begin{tabular}[c]{@{}c@{}} Filter - Pump \\ Pump - Airflow sensor \end{tabular} & 2 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. to \\ Tube OD 6mm \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Tube Inserts\\ SS-6M5-4M\end{tabular} & Pump Inlet and Outlet & 2 & \begin{tabular}[c]{@{}c@{}} OD 6mm - ID 4mm \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Tube Fitting Female \\ SS-4-TA-7-4RG\end{tabular} &\begin{tabular}[c]{@{}c@{}} Static pressure sensor \end{tabular} & 1 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. to \\ female 1/4" \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Quick Coupling \\ SS-QC4-B-4PF\end{tabular} &\begin{tabular}[c]{@{}c@{}} T-Union of bags \end{tabular} & 6 & \begin{tabular}[c]{@{}c@{}} SS female 1/4" \end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Tube adapter \\ SS-300-R-4\end{tabular} &\begin{tabular}[c]{@{}c@{}} T-Union - Bag valve \end{tabular} & 6 & \begin{tabular}[c]{@{}c@{}} Tube OD 1/4in. to \\ Tube OD 3/16" \end{tabular} \\ \hline
\end{tabular}}
\DIFaddendFL \caption{Interface Descriptions Inside AAC System.}
\label{tab:AAC-interfaces}
\end{table}
\subsubsection{Electrical Interfaces}
\label{sec:4.2.3}
The experiment \DIFdelbegin \DIFdel{will connect }\DIFdelend \DIFaddbegin \DIFadd{was connected }\DIFaddend to the gondola electrically via a 4 pin, male, box mount receptacle MIL-C-26482P series 1 connector with an 8-4 insert arrangement (MS3112E8-4P) \cite{BexusManual}. It \DIFdelbegin \DIFdel{will connect }\DIFdelend \DIFaddbegin \DIFadd{was connected }\DIFaddend to one 28.8 V/1 mA battery pack which \DIFdelbegin \DIFdel{consists }\DIFdelend \DIFaddbegin \DIFadd{consisted }\DIFaddend of eight SAFT LSH20 batteries in series, where each \DIFdelbegin \DIFdel{has }\DIFdelend \DIFaddbegin \DIFadd{had }\DIFaddend a 5 A fuse \cite{BexusManual}. The expected maximum current was 1.1 A.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{4-experiment-design/img/connectors.png}
\caption{Connectors.}
\label{fig:connectors}
\end{figure}
The E-Link connection between the experiment and the E-Link system was made using an RJ45 connection, which \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend supplied by SSC, and an Ethernet protocol. The Amphenol RJF21B connector \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend mounted on either the front or the side of the experiment \cite{BexusManual}.
The CAC and AAC \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend connected together with a D-SUB 9-pin connector where power, ground and signals for the sensors in the CAC \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend connected. A female connector \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend located on the AAC wall and a male connector on the CAC wall.
Another female D-SUB 9-pin connector \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend located on the wall of the AAC, in which the connections for the three ambient pressure sensors \DIFdelbegin \DIFdel{will be }\DIFdelend \DIFaddbegin \DIFadd{were }\DIFaddend located. Connectors with different pin configurations are shown in Figure \ref{fig:connectors}.
The expected data rate \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{was }\DIFaddend 1.58 kbits/s for downlink and 1.08 kbits/s for uplink.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/Figure_Detail_Interfaces.png}
\caption{Electrical Interfaces.}
\label{fig:electrical_interfaces}
\end{figure}
%\begin{figure}[H]
% \begin{align*}
% \includegraphics[width=0.7\textwidth]{4-experiment-design/img/Diagram_pipe.png}
% \end{align*}
% \caption{Diagram of the experiment box face exposed to the outside.}\label{fig:pipes_interface_1}
%\end{figure}
\iffalse
\subsubsection{Radio Frequencies (Optional)}
\begin{centering}
Not required.
\end{centering}
\bigskip
\subsubsection{Thermal (Optional)}
\begin{centering}
Not required.
\end{centering}
\bigskip
\fi
\raggedbottom
\begin{landscape}
\subsection{Experiment Components} \label{components}
\label{sec:experiment-components}
Component tables were generated from the project budget spreadsheet in Appendix \ref{sec:appO} using the scripts included in Appendix \ref{sec:appK}.
\subsubsection{Electrical Components}
Table \ref{tab:components-table-electrical} shows all required electrical components with their total mass and price.\\
\begin{longtable} {|m{0.05\textwidth}|m{0.25\textwidth}|m{0.17\textwidth}|m{0.2\textwidth}|m{0.05\textwidth}|m{0.07\textwidth}|m{0.07\textwidth}|m{0.25\textwidth}|m{0.11\textwidth}|} \hline \textbf{ID} & \textbf{Component Name} & \textbf{Manufacturer} & \textbf{Manufacturer Code} & \textbf{Qty} & \textbf{Total Mass [g]} & \textbf{Total Cost [EUR]} & \textbf{Note} & \textbf{Status} \\ \hline E1 & Arduino Due with headers & Arduino & A000062 & 1 & 36 & 35 & Fast and has many analog, and digital pins & Received \\ \hline E2 & Ethernet Shield & SEEED Studio & SKU 103030021 & 1 & 36 & 28 & Can be mounted on top of the board & Received \\ \hline E3 & Miniature diagphram air pump & KNF & NMP 850.1.2 KNDC-B & 1 & 430 & 350 & & Received \\ \hline E4 & Pressure sensor & SENSOR SOLUTIONS & MS560702BA03-50 & 4 & 20 & 9.2 & High resolution, large measuring range & Received \\ \hline E5 & Sampling Valve (inlet and outlet 1/8" female) & SMC & VDW22UANXB & 1 & 100 & 45 & & Received \\ \hline E6 & Airflow sensor & Honeywell & AWM5102VN & 1 & 60 & 130 & 0-10 SLPM & Received \\ \hline E7 & Heater & Minco & HK5160R157L12 & 4 & 16 & 380 & Easy to mount, compact size & Received \\ \hline E9 & Temperature sensor & Maxim Integrated & DS1631+-ND & 8 & 16 & 24 & I2C digital output interface, temperature range down to -55 $\degree{C}$ & Received \\ \hline E10 & DC/DC converter 24 V & Traco Power & S24SP24003PDFA & 2 & 92 & 98 & Provides required output voltage and power, 93\% efficiency & Received \\ \hline E12 & MicroSD & Kingston Technology & SDCIT/16GB & 1 & 0.5 & 20 & Small, good temperature range, sufficient storage & Received \\ \hline E13 & Logic CAT5E Network & Valueline & VLCT85000Y30 & 1 & 90 & 7 & For testing and ground station & Received \\ \hline E14 & Resistors & n/a & n/a & 25 & 25 & 0 & & Received \\ \hline E15 & Capacitors (0.1 uF, 5uF, 10 uF, 100uF) & n/a & n/a & 7 & 7 & 0 & & Received \\ \hline E16 & Mosfet for current control & IR & IRLB8748PBF & 11 & 22 & 7.7 & Cheap, good temperature range & Received \\ \hline E17 & Diodes for DCDC converters & Diotec Semiconductor & 1N5059 & 4 & 1.6 & 0.4 & Cheap, good temperature range & Received \\ \hline E18 & LED 3.3 V & Wurth Elektronik & 151034GS03000 & 16 & 6.4 & 8.3 & For monitoring, testing & Received \\ \hline E19 & 15-pin D-SUB Female connector with pins & RND Connect & RND 205-00779 & 2 & 22 & 1.5 & For connecting distributed components & Received \\ \hline E20 & 9-pin D-SUB Female connector with pins & RND Connect & RND 205-00777 & 3 & 26 & 2 & For connecting distributed components & Received \\ \hline E21 & 9 pin D-SUB Female connector with soldering cups & RND Connect & RND 205-00704 & 3 & 27 & 1.7 & For connecting distributed components & Received \\ \hline E22 & 9 pin D-SUB Male connector with soldering cups & RND Connect & RND 205-00700 & 6 & 54 & 2.9 & For connecting distributed components & Received \\ \hline E23 & 15-pin D-SUB Male connector with soldering cups & RND Connect & RND 205-00701 & 2 & 22 & 1.2 & For connecting distributed components & Received \\ \hline E24 & 9-pin D-SUB backing & Enchitech & MHDTZK-9-BK-K & 6 & 240 & 17 & For connecting distributed components & Received \\ \hline E25 & 15-pin D-SUB backing & Enchitech & MHDTZK-15-BK-K & 2 & 130 & 6.1 & For connecting distributed components & Received \\ \hline E26 & Wall mounting bolts & RND Connect & RND 205-00786 & 3 & 7.5 & 3.1 & For connecting distributed components & Received \\ \hline E28 & 3.3 V Zener diode & RND Components & RND 1N746A & 15 & 7.5 & 1.1 & Regulate 
indication LED voltage & Received \\ \hline E29 & Male connector on PCB & Binder & Serie 768 & 1 & 5 & 8.5 & & Received \\ \hline E30 & Female connector from wall & Binder & Serie 768 & 1 & 11 & 12 & & Received \\ \hline E31 & Grounding contact & Vogt & DIN 46234 & 4 & 2.3 & 8.6 & 1 pack of 100 pcs & Received \\ \hline E32 & Logic CAT5 E-link for inside box 0.5m & Valueline & VLCP85121E05 & 1 & 20 & 1.3 & To connect from wall to Arduino shield & Received \\ \hline E33 & Signal wire & Alpha Wire & 5854/7 YL005 & 1 & 120 & 34 & Roll of 30 m. Half will be used approximately & Received \\ \hline E34 & Flushing valve (inlet and outlet 1/8" female) & SMC & VDW22UANXB & 1 & 100 & 45 & & Received \\ \hline E35 & Valves manifold (outlet 1/8" female) & SMC & VDW23-5G-1-H-Q & 6 & 600 & 240 & & Received \\ \hline E36 & Power wire - Back & Alpha Wire & 5856 BK005 & 1 & 73 & 46 & Roll of 30 m. A fifth will be used approximately & Received \\ \hline E37 & Electrical Tape for marking wires - White & Hellerman Tyton & HTAPE-FLEX15WH-15X10 & 1 & 14 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E38 & Electrical Tape for marking wires - Black & Hellerman Tyton & HTAPE-FLEX15BK-15X10 & 1 & 13 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E39 & Electrical Tape for marking wires - Green & Hellerman Tyton & HTAPE-FLEX15GN-15X10 & 1 & 14 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E40 & Electrical Tape for marking wires - Violet & Hellerman Tyton & HTAPE-FLEX15VT-15X10 & 1 & 14 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E41 & Electrical Tape for marking wires - Gray & Hellerman Tyton & HTAPE-FLEX15GY-15X10 & 1 & 14 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E42 & Electrical Tape for marking wires - Brown & Hellerman Tyton & HTAPE-FLEX15BN-15X10 & 1 & 14 & 0.82 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E43 & Electrical Tape for marking wires - Blue & Hellerman Tyton & HTAPE-FLEX15BU-15X10 & 1 & 14 & 1.9 & Roll of 10 m. A forth will be used approximately & Received \\ \hline E48 & Power wire - Red & Alpha Wire & 5856 RD005 & 1 & 73 & 46 & Roll of 30 m. 
A fifth will be used approximately & Received \\ \hline
E50 & 6-pin male double row header & RND Connect & RND 205-00634 & 2 & 2 & 0.44 & & Received \\ \hline
E51 & 8-pin male single row header & RND Connect & RND 205-00629 & 5 & 5 & 1.4 & & Received \\ \hline
E52 & 10-pin male single row header & Prostar & SD-2X5-T1-7/3MM & 1 & 1 & 0.26 & & Received \\ \hline
E53 & 36-pin male double row header & Würth Elektronik & 61303621121 & 1 & 2 & 1.7 & & Received \\ \hline
E54 & DC/DC converter 12 V & Delta & R-7812-0.5 & 2 & 40 & 68 & 12 V, 1.67 A, 20 W DC/DC & Received \\ \hline
E55 & Potentiometer 50 kOhm & Bourns & 3296Y-1-503LF & 10 & 10 & 18 & & Received \\ \hline
E56 & Static Pressure Sensor & Gems Sensors and Controls & 3500S0001A05E000 & 1 & 53 & 120 & & Received \\ \hline
E57 & Connector for the Static Pressure Sensor & Schneider Electric & XZCPV1141L2 & 1 & 14 & 14 & -25 to 80 $\degree{C}$, female 4 pin M12 connector with 2 meter wire & Received \\ \hline
E58 & PCB board & Eurocircuits & n/a & 1 & 100 & 180 & Will be custom-made & Received \\ \hline
E59 & Pressure sensor PCB & Eurocircuits & n/a & 3 & 75 & 42 & Will be custom-made & Received \\ \hline
E60 & Arduino Due without headers & Arduino & 2 & 1 & 36 & 34 & & Received \\ \hline \caption{Electrical Components Table} \label{tab:components-table-electrical} \end{longtable} \raggedbottom
\end{landscape}
\begin{landscape}
\subsubsection{Mechanical Components}
Table \ref{tab:components-table-mechanical} shows all required mechanical components with their total mass and price.\\
\begin{longtable} {|m{0.05\textwidth}|m{0.25\textwidth}|m{0.17\textwidth}|m{0.2\textwidth}|m{0.05\textwidth}|m{0.07\textwidth}|m{0.07\textwidth}|m{0.25\textwidth}|m{0.11\textwidth}|} \hline \textbf{ID} & \textbf{Component Name} & \textbf{Manufacturer} & \textbf{Manufacturer Code} & \textbf{Qty} & \textbf{Total Mass [g]} & \textbf{Total Cost [EUR]} & \textbf{Note} & \textbf{Status} \\ \hline
M1 & Strut profile 20x20 M6/M6, length: 460 mm & Bosch - Rexroth & 3842993231 & 16 & 2900 & 93 & Railed geometry, Structural element & Received \\ \hline
M2 & Strut profile 20x20 M6/M6, length: 360 mm & Bosch - Rexroth & 3842993231 & 4 & 580 & 22 & Railed geometry, Structural element & Received \\ \hline
M3 & Strut profile 20x20 M6/M6, length: 190 mm & Bosch - Rexroth & 3842993231 & 4 & 300 & 21 & Railed geometry, Structural element & Received \\ \hline
M4 & T-nut N6 M4 & Bosch - Rexroth & 3842536599 & 100 & 300 & 74 & Wall, Protective element & Received \\ \hline
M5 & Sliding block N6 M4 & Bosch - Rexroth & 3842523140 & 100 & 300 & 90 & Wall, Protective element & Received \\ \hline
M6 & Bracket standard 20x20 N6/6 & Bosch - Rexroth & 3842523508 & 100 & 500 & 52 & Wall, Protective element & Received \\ \hline
M7 & Variofix block S N6 20x20 & Bosch - Rexroth & 3842548836 & 100 & 500 & 62 & Wall, Protective element & Received \\ \hline
M8 & Cubic connector 20/3 N6 & Bosch - Rexroth & 3842523872 & 16 & 160 & 39 & & Received \\ \hline
M9 & Strap-shaped handle & Bosch - Rexroth & 3842518738 & 4 & 80 & 19 & & Received \\ \hline
M10 & Retainer ring M4 & Bosch - Rexroth & 3842542328 & 100 & 50 & 5.4 & & Received \\ \hline
M11 & DIN 7984 M4x8 bolts & n/a & n/a & 150 & 150 & 0 & & Received \\ \hline
M12 & M6x16 bolts & Bossard & 79850616 & 48 & 240 & 6.2 & & Received \\ \hline
M13 & ISO 4762 bolts & n/a & n/a & 8 & 16 & 0 & & Received \\ \hline
M14 & Washers & n/a & n/a & 20 & 4 & 0 & & Received \\ \hline
M15 & Aluminum sheets & - & 204599 & 1 & 2500 & 25 & & Received \\ \hline
M16 & Styrofoam 250 SL-A-N & Isover & 3542005000 & 1 & 1800 & 97 & & Received \\ \hline
M17 & Fixing bar for the bags & Maskindelen & n/a & 2 & 26 & 6 & & Received \\ \hline
M18 & Flat plate interface for fixing bar & Alfer & n/a & 4 & 130 & 0 & The cost is included in M20 & Received \\ \hline
M19 & CAC-AAC interface 6-hole plate & Alfer & n/a & 4 & 65.6 & 0 & The cost is included in M20 & Received \\ \hline
M20 & Steel sheet 500x250x0.75 mm & Alfer & n/a & 3 & 0 & 17.6 & Used for M18 and M19 & Received \\ \hline
M22 & DIN 7984 M4x8 bolts & n/a & n/a & 26 & 26 & 0 & & Received \\ \hline
M23 & DIN 7984 M4x30 bolts & n/a & n/a & 16 & 32 & 0 & & Received \\ \hline
M24 & Nut M4 & n/a & n/a & 42 & 42 & 0 & & Received \\ \hline
M26 & 15mm M3 Standoff/Spacer for PCB & Keystone Electronics & 24339 & 10 & 20 & 7.8 & & Received \\ \hline
M27 & Lock nut M3 (DIN985) for PCB & n/a & n/a & 5 & 5 & 0 & & Received \\ \hline
M28 & M3 Cheese Head Screws 6mm & n/a & n/a & 5 & 4 & 0 & & Received \\ \hline
M32 & Coiled tube & FMI & n/a & 1 & 6200 & 22000 & - & Received \\ \hline
M34 & Interface tube-screw male (OD 1/4" - ID 5/32" to male 1/8") & Swagelok & SS-400-1-2 & 2 & 26 & 20 & & Received \\ \hline
M36 & Interface attached to the coiled tube outlet, quick connector & Swagelok & SS-QC4-B-200 & 1 & 91 & 65 & & Received \\ \hline
M37 & Interface attached to the coiled tube inlet,
quick connector & Swagelok & SS-QC4-B-400 & 1 & 68 & 50 & & Received \\ \hline M38 & Interface quick connector stem with valve & Swagelok & SS-QC4-D-400 & 1 & 58 & 40 & & Received \\ \hline M43 & Gas Sampling Bag, Multi-Layer Foil, 3L, 10"x10", 5pk & Restek & 22951 & 6 & 30 & 120 & & Received \\ \hline M44 & Manifold (inlet and outlet 1/8" female) & SMC & VV2DW2-H0601N-F-Q & 1 & 440 & 140 & & Received \\ \hline M45 & Interface tube-screw male (OD 1/4" - ID 5/32" to male 1/8") & Swagelok & SS-400-1-2 & 8 & 100 & 110 & & Received \\ \hline M46 & Interface tube-screw male 90 degree(OD 1/4" - ID 5/32" to male 1/8") & Swagelok & SS-400-2-2 & 3 & 39 & 48 & & Received \\ \hline M47 & Male 90-degree connector (OD 1/4" - ID 5/32" to male 1/4") & Swagelok & SS-400-2-4 & 1 & 16 & 14 & & Received \\ \hline M49 & Interface T-Union (male 1/4") & Swagelok & SS-400-3 & 1 & 71 & 33 & & Received \\ \hline M51 & Tubing, Sulfinert 304SS Welded/Drawn 50ft (OD 1/4" - ID 0.21") & SilcoTek & 29255 & 1 & 600 & 840 & & Received \\ \hline M52 & Quick Coupling female 1/4" & Swagelok & SS-QC4-B-4PF & 6 & 270 & 300 & & Received \\ \hline M53 & 90 degree elbow 1/4" & Swagelok & SS-400-9 & 1 & 55 & 19 & & Received \\ \hline M54 & Interface female 90-degree connector (OD 1/4" - ID 5/32" to female 1/4") & Swagelok & SS-400-8-4 & 2 & 120 & 47 & & Received \\ \hline M55 & Female tube adapter (Tube OD 1/4" to female 1/4" ISO) & Swagelok & SS-4-TA-7-4RG & 1 & 46 & 20 & & Received \\ \hline M56 & Tube Fitting Reducer (OD 3/16 in. to 1/4 in. Tube OD) & Swagelok & SS-300-R-4 & 6 & 120 & 72 & & Received \\ \hline M57 & Tube plug 1/4 in. & Swagelok & SS-400-C & 4 & 0 & 27 & Will only be used before and after flight & Received \\ \hline M58 & Magnesium filter tube with interface & FMI & n/a & 1 & 65 & 150 & & Received \\ \hline M59 & T-Union 6 mm Tube OD & Swagelok & SS-6M0-3 & 2 & 100 & 54 & & Received \\ \hline M60 & Tube Fitting Reducer (1/4 in. to 6 mm Tube OD) & Swagelok & SS-400-R-6M & 2 & 60 & 25 & & Received \\ \hline M61 & Tubing Insert, 6 mm OD x 4 mm ID & Swagelok & SS-6M5-4M & 4 & 40 & 11 & & Received \\ \hline M62 & Male Branch Tee (OD 1/4" - 1/4" Male NPT) & Swagelok & SS-400-3-4TTM & 5 & 320 & 92 & & Received \\ \hline M63 & Male Branch Tee (OD 1/4" - 1/4" Male NPT) & Swagelok & SS-400-3-4TMT & 1 & 63 & 63 & & Received \\ \hline M64 & Male connector (OD 1/4" - 1/4" Male NPT) & Swagelok & SS-400-1-4 & 1 & 30 & 8.6 & & Received \\ \hline M65 & Straight tube union (OD 1/4" - ID 5/32") & Swagelok & SS-400-6 & 1 & 30 & 13 & & Received \\ \hline M66 & Straight reducing tube union (OD 1/4" to OD 1/8") & Swagelok & SS-400-6-2 & 1 & 25 & 17 & & Received \\ \hline M67 & 90 degree elbow 1/4" & Swagelok & SS-400-9 & 3 & 170 & 58 & & Received \\ \hline M68 & Port Connector, 1/4 in. 
Tube OD & Swagelok & SS-401-PC & 5 & 25 & 32 & & Received \\ \hline M69 & Magnesium filter with interface & FMI & n/a & 1 & 65 & 150 & & Received \\ \hline M70 & Aluminum sheets & n/a & 204599 & 1 & 100 & 1 & & Received \\ \hline M71 & Styrofoam (bulk - 1 piece from 1.16) & Isover & 3542005000 & 1 & 110 & 0 & The cost is included in M16 & Received \\ \hline M72 & Strut profile 20x20 M6/M6, length: 360 mm & Bosch - Rexroth & 3842992888 & 2 & 290 & 6 & & Received \\ \hline M73 & Strut profile 20x20 M6/M6, length: 170 mm & Bosch - Rexroth & 3842992888 & 2 & 140 & 4.5 & & Received \\ \hline M74 & Strut profile 20x20 M6/M6, length: 263 mm & Bosch - Rexroth & 3842993230 & 1 & 110 & 4.2 & & Received \\ \hline M75 & Sliding block N8 M6 & Bosch - Rexroth & 3842547815 & 10 & 30 & 11 & & Received \\ \hline M76 & Straight tube union (OD 1/8" - ID 5/32") & Swagelok & SS-200-6 & 1 & 30 & 13 & & Received \\ \hline M77 & Rubber bumper & n/a & n/a & 10 & 355 & 0 & & Received \\ \hline \caption{Mechanical Components Table} \label{tab:components-table-mechanical} \end{longtable} \raggedbottom
\end{landscape}
\begin{landscape}
\subsubsection{Other Components}
Table \ref{tab:component-table-other} shows other components which contribute to the mass and/or price.\\
\begin{longtable} {|m{0.05\textwidth}|m{0.25\textwidth}|m{0.17\textwidth}|m{0.2\textwidth}|m{0.05\textwidth}|m{0.07\textwidth}|m{0.07\textwidth}|m{0.25\textwidth}|m{0.11\textwidth}|} \hline \textbf{ID} & \textbf{Component Name} & \textbf{Manufacturer} & \textbf{Manufacturer Code} & \textbf{Qty} & \textbf{Total Mass [g]} & \textbf{Total Cost [EUR]} & \textbf{Note} & \textbf{Status} \\ \hline
O1 & Hand Tube Bender 1/4 in & Swagelok & MS-HTB-4T & 1 & - & 250 & & Received \\ \hline
O2 & Tube Cutter (4 mm to 25 mm) & Swagelok & MS-TC-308 & 1 & - & 35 & & Received \\ \hline
O3 & Tubing Reamer & Swagelok & MS-TDT-24 & 1 & - & 26 & & Received \\ \hline
O4 & Travel to FMI for sample bag testing & n/a & n/a & 1 & - & 250 & & Completed \\ \hline
O5 & Travel to FMI for integration testing & n/a & n/a & 1 & - & 250 & & Completed \\ \hline
O6 & Shipping costs & n/a & n/a & n/a & - & 430 & & n/a \\ \hline
O7 & Error margin & n/a & n/a & n/a & 2400 & 220 & & n/a \\ \hline
O8 & PTFE Tape Thread Sealant, 1/4" & Swagelok & MS-STR-4 & 1 & - & 1.9 & & Received \\ \hline
O9 & Double-Sided Adhesive Tape & 3M & 180-89-682 & 8 & - & 78 & & Received \\ \hline
O10 & PTFE Tape, 32.9 m & 3M & 60-1"X36YD & 1 & - & 68 & & Received \\ \hline
O11 & Microfibre cloth & n/a & 180-63-478 & 1 & - & 9.4 & & Received \\ \hline
O12 & IPA Cleaner Spray, 400 ml & RND Lab & RND 605-00129 & 3 & - & 12 & & Received \\ \hline
O13 & IPA Cleaner fluid, 1000 ml & Electrolube & EIPA01L & 2 & - & 35 & & Received \\ \hline
O14 & Disposable gloves L & Eurostat & 51-675-0032 & 1 & - & 12 & & Received \\ \hline
O15 & Electronic Leak Detector & Restek & 22655-R & 1 & - & 1000 & & Received \\ \hline
O16 & Thermal Adhesive & Fischer Elektronik & WLK 10 & 1 & - & 18 & & Received \\ \hline
O17 & Flushing process (nitrogen or dry calibrated gas) & n/a & n/a & 2 & - & 200 & - & Received \\ \hline \caption{Other Components Table} \label{tab:component-table-other} \end{longtable} \raggedbottom
\end{landscape}
\pagebreak
\subsection{Mechanical Design} \label{Mechanical_Design}
\label{sec:mechanical-design}
The experiment consisted of two rectangular boxes placed next to each other, shown in Figure \ref{dimensions}. The taller but narrower box (CAC box) housed the heaviest element, the CAC. The main box (AAC box) contained the pneumatic system with six sampling bags and the central command unit: The Brain. The Brain contained the general Electronic box (EB) as well as the pneumatic sampling system.
The two-box design allowed easy access to and manipulation of both the CAC and AAC subsystems. In addition, the AAC sampling system is designed to be re-usable for a future handover to the FMI; as such, it can be mounted on any standard balloon flight without major design changes. In that configuration the experiment would only require its own batteries as a power unit. In order to help balance the AAC box center of mass, these would be allocated in the corner opposite to The Brain, see Figure \ref{battery_distribution}. This also maintains the space for six bags inside the AAC box.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/Figure_15.png}
\caption{Layout Including a Battery (in Red).}
\label{battery_distribution}
\end{figure}
\pagebreak
Since the CAC was the heaviest component in the whole experiment, its positioning and orientation inside the gondola directly affected the stress analysis of the structure. In the worst-case scenario, without a proper study of the aforesaid interface, the screws could have been subjected to shear after a violent landing or unexpected shaking. The larger the distance to the fixed points, the larger the moment produced by the component. For this reason, the CAC box was securely attached to the AAC box by means of six anchor points with four screws each. This fixing interface can be seen in red in Figure \ref{dimensions}; it also helped the fast recovery. Taking into account also the two extra anchor points to the gondola, the fast recovery of the CAC then only required unscrewing $16$ screws and unplugging a D-Sub connector.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{4-experiment-design/img/Mechanical/Figure_14.png}
\caption{General Dimensions of the Experiment.}
\label{dimensions}
\end{figure}
The main mechanical characteristics of the experiment are summarized in Table \ref{table:experiment-summary}, where the values are based on the reference axis shown in Figure \ref{COG}. The Center of Gravity of the whole experiment was determined to be located just on the plate of the third level of The Brain, which coincides with the location of the electronics PCB. This outcome was quite advantageous in terms of stability for one of the subsystems most sensitive to shocks and loads.
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|}
\cline{2-4}
& CAC & AAC & TOTAL \\ \hline
\multicolumn{1}{|c|}{Experiment mass {[}kg{]}} & $11.95$ & $12.21$ & $24.16$ \\ \hline
\multicolumn{1}{|c|}{Experiment dimensions {[}m{]}} & $0.23\times0.5\times0.5$ & $0.5\times0.5\times0.4$ & $0.73\times0.5\times0.5$ \\ \hline
\multicolumn{1}{|c|}{Experiment footprint area {[}$m^2${]}} & $0.115$ & $0.25$ & $0.365$ \\ \hline
\multicolumn{1}{|c|}{Experiment volume {[}$m^3${]}} & $0.0575$ & $0.1$ & $0.1575$ \\ \hline
\multicolumn{1}{|c|}{Experiment expected COG position} & \begin{tabular}[c]{@{}l@{}}$X=23.51\ cm$\\ $Y=10\ cm$\\ $Z=22.57\ cm$ \end{tabular} & \begin{tabular}[c]{@{}l@{}} $X=29.04\ cm$\\ $Y=16.63\ cm$\\ $Z=16.2\ cm$ \end{tabular} &\begin{tabular}[c]{@{}l@{}} $X=26.31\ cm$\\ $Y=24.99\ cm$\\ $Z=19.35\ cm$ \end{tabular} \\ \hline
\end{tabular}}}
\caption{Experiment Summary Table.}
\label{table:experiment-summary}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{4-experiment-design/img/Mechanical/COG.jpg}
\caption{Reference Axis for the Total Center of Gravity.}
\label{COG}
\end{figure}
\pagebreak
\subsubsection{Structure}
\label{sec:4.4.1}
The main purpose of the experiment box structure was to provide overall mechanical integrity and maintain the system geometry. It was able to carry the loads of all the phases of the flight and ensure that all the components and subsystems could withstand them. Test 9 in Table \ref{tab:vibration-test} helped to confirm that the frame could withstand the flight vibrations.
Moreover, other considerations such as electrical and, especially, thermal conductivity were also a concern, since the experiment flew up to $25\ km$ above the Polar Circle and many critical subsystems had tight operating ranges.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/structure.png}
\caption{Structure Overview.}
\label{fig:structure}
\end{figure}
For this purpose, two boxes built with straight frames were chosen as the best option, as shown in Figure \ref{fig:structure}. The frames of these boxes were strut profiles made of aluminum, with a characteristic cross-section of $20\times20\ mm$ and an $M6$ thread at each end. The rails allowed an easy interface between the bars and other elements. In turn, these profiles were joined together at each corner with aluminum cubic connectors of $20\times20\ mm$ (see Figure \ref{fig:corner_cube}) and $M6\times16$ bolts aligned with the bar axes. At the same time, these nodes were reinforced by three $20/20$ brackets (see Figure \ref{fig:corner_bracket}), each fixed to the frames with $M4\times8$ bolts and the corresponding $M4$ T-nuts. All these components were manufactured by Bosch Rexroth.
\bigskip
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{.35\textwidth}
\centering\includegraphics[width=1\textwidth]{4-experiment-design/img/Mechanical/corner_brackets.jpg}
\caption{Brackets Reinforcement.}
\label{fig:corner_bracket}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{.35\textwidth}
\centering\includegraphics[width=1\textwidth] {4-experiment-design/img/Mechanical/corner_cube.jpg}
\caption{Cubic Connector.}
\label{fig:corner_cube}
\end{subfigure}}
\caption{Strut Profiles Connections.}
\label{fig:profile_connection}
\end{figure}
\bigskip
Table \ref{table:profile_momentum} shows the main mechanical properties of the Bosch Rexroth $20/20$ strut profiles used in the structure. For further details see Table \ref{table:profile_material}.
\begin{longtable}{|m{0.2\textwidth}|m{0.14\textwidth}|m{0.25\textwidth}|m{0.3\textwidth}|}
\hline
\textbf{Section surface} & \textbf{Mass} & \textbf{Moment of Inertia ($I_x = I_y$)} & \textbf{Moment of resistance ($W_x = W_y$)} \\ \hline
$1.6\ cm^2$ & $0.4\ kg/m$ & $0.7\ cm^4$ & $0.7\ cm^3$ \\ \hline
\caption{Intrinsic Characteristics of the Strut Profiles.}
\label{table:profile_momentum}
\end{longtable}
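For orientation, the moment of resistance (section modulus) in Table \ref{table:profile_momentum} relates a bending moment to the peak bending stress in a profile, $\sigma_{max} = M_{max}/W_x$. As a purely illustrative, hedged example (the load value is hypothetical and not taken from the structural analysis), a point load of $F = 100\ N$ applied at mid-span of a simply supported $460\ mm$ bar would give
\begin{equation*}
M_{max} = \frac{F L}{4} = \frac{100\ N \cdot 0.46\ m}{4} = 11.5\ N\,m,
\qquad
\sigma_{max} = \frac{M_{max}}{W_x} = \frac{11.5\ N\,m}{0.7\ cm^3} \approx 16\ MPa,
\end{equation*}
which is well below typical yield strengths of extruded aluminum profiles (on the order of $200\ MPa$).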
\smallskip
\subsubsection{Walls and Protections}
\label{sec:4.4.2}
Since the experiment was placed close to the outside of the gondola, it was highly exposed to impacts from external elements, to parts possibly breaking off other experiments in the gondola during unexpected rapid movements, and to a probable hard landing. Therefore, the experiment box was shielded with removable aluminum walls along with a thick layer of Styrofoam attached to each wall. This thickness varied from two to three centimeters in the AAC box, and was five centimeters where it protected the AirCore. Besides protection, the thickness of the styrofoam was also motivated by thermal control considerations.
% which is explained more in detail in Section \ref{sec:4.6.6}.
To mount the walls, a combination of three different elements was used, as shown in Figure \ref{fig:wall_attach}. The walls were screwed to the Variofix blocks by means of $M4\times8$ bolts. In between the aluminum walls and the bolts, an $M4$ retainer ring was placed to improve the fixation at each spot. Eight fixation points per wall were considered sufficient to keep the experiment safe from any impact.
The Styrofoam sheets were attached to the aluminum walls with double-sided tape.
Tables \ref{table:wall_aluminum} and \ref{table:wall_styrofoam} show the main properties of the materials used to build the walls of the boxes.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{4-experiment-design/img/Mechanical/wall_attachment.jpg}
\caption{Exploded View of the Attachment of the Walls.}
\label{fig:wall_attach}
\end{figure}
\subsubsection{CAC Box}
The CAC subsystem was designed to fit a $300$ m stainless steel coiled tube, a solenoid valve provided by SMC controlling it, tube fittings manufactured by Swagelok, an air filter, and three temperature sensors. A schematic of this subsystem can be seen in Figure \ref{fig:CAC-schematic}. The CAC consisted of a combination of a $200$ m coiled tube of $1/8$ inch diameter and a $100$ m coiled tube of $1/4$ inch diameter. The outlet of the CAC was sealed with a quick connector provided by FMI. The inlet was sealed the same way, but it could be opened by another interface plugged into the quick connector. A custom-made filter by FMI was placed between this orifice and the solenoid valve. The filter contained magnesium perchlorate powder with stone wool at both ends of the tube. It ensured that no moisture entered the coil during any testing or sampling. A piece of stainless steel tube, manufactured by SilcoTek, was attached to the solenoid valve and routed outside the box, thus providing a direct outside outlet and inlet for the whole CAC system, as seen in Figure \ref{fig:CAC-cad-model}.
A D-sub cable was used to connect the electrical components to the control unit in the AAC box. Both boxes had their own D-sub connector, each located on one of the box walls.
% mechanical issues and gondola constraints
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{4-experiment-design/img/Mechanical/CAC_labels.jpg}
\caption{3D Model of the AirCore and its Pneumatic Fittings. The Numbers Correspond to the Main Components in Figure \ref{fig:CAC-schematic}.}
\label{fig:CAC-cad-model}
\end{figure}
\begin{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=1.3\textwidth]{4-experiment-design/img/Mechanical/CAC-schematic.PNG}
\caption{Schematic of CAC.}
\label{fig:CAC-schematic}
\end{figure}
\end{landscape}
\smallskip
\pagebreak
\subsubsection{AAC Box}\label{sec:aac-analysis}
The AAC box was designed and manufactured to be as compact as possible. An analysis of how the bag dimensions vary with the sampled volume was made and is summarized in Appendix \ref{dimensions-bags}. From these results it was shown that the AAC subsystem was able to fit six $3\ L$ sampling bags, provided by RESTEK, together with The Brain, which included the pneumatic system and the electronic box. Each bag had a dedicated valve in the Valve Center (VC) to allow the emptying and filling processes, as well as to close the bag when needed. The bags hung from a bar that was attached to the structure frame by two anchor points on the top. The distribution layout can be seen in Figures \ref{iso_aac} and \ref{lateral_aac}. To ensure a properly built pneumatic system with minimal leakage risk, all SilcoTek tubes in the system were connected exclusively to tube fittings provided by Swagelok.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{4-experiment-design/img/Mechanical/Figure_22a.png}
\caption{Isometric View of the AAC Box.}
\label{iso_aac}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{4-experiment-design/img/Mechanical/Figure_22b.png}
\caption{Lateral View of the AAC Box.}
\label{lateral_aac}
\end{subfigure}
\caption{Distribution Inside the AAC Box.}
\label{fig:Distribution-AAC}
\end{figure}
The tubes running from the Valve Center to the bags were kept as short as possible, following the science requirements regarding tubing length.
\pagebreak
\underline{The Brain}
\label{subsec:brain}
\smallskip
The Brain was an essential part of the experiment. It was a three-level structure containing both the pneumatic system and the electronics of the experiment, seen in Figure \ref{brain_isometric_open}. Its design made it compact enough both to allow proper thermal control and to fit into the space left next to the sampling bags. It was placed in a corner of the AAC box and therefore took advantage of the vertical space inside it. The Brain had three levels: Level 1 - Pump, Level 2 - Valve Center, and Level 3 - Electronics.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{4-experiment-design/img/Mechanical/Figure_23.png}
\caption{Inside view of The Brain.}
\label{brain_isometric_open}
\end{figure}
Level 1 of The Brain rested on the base wall Styrofoam. It contained the beginning of the pneumatic sampling system. The inlet tube passed through the wall and interfaced with the filter. From here the system continued through the pump, provided by KNF, and on to Level 2. The pump was placed in Level 1 to transmit the minimum vibration to the other components. As explained in Section \ref{subsec:vibration}, a piece of styrofoam was added between the pump and the Level 1 plate to help mitigate its vibrations. The pump had two heaters mounted on it that were used to regulate its temperature.
%This can be seen as two black rectangular sheets underneath the pump in Figure \ref{level_1}.
The second level of The Brain was responsible for the distribution of the air to the selected sampling bag. From Level 1, the air passed through the airflow sensor and the static pressure sensor, which allowed the behavior of the system to be monitored. The manifold with six solenoid valves, manufactured by SMC, was the main component. From here, the tubes connected to the bags. A T-union connection was used just before each bag valve. This interface allowed the pre-flight flushing of the tubes connecting to the valves, as well as the post-flight analysis, as explained previously.
\smallskip
The flushing valve was responsible for ensuring a proper flushing of the system before each sampling period. From the flushing valve, an outlet tube reached the outside environment. This can be seen in Figure \ref{level_2}.
The OBC and its external elements were allocated on the third level of The Brain. The PCB was fixed to the aluminum plate by means of 10 standoffs. As shown in Figure \ref{level_3}, both the PCB and the level plate had a hole to collect all the wires connecting to Levels 1 and 2. This level had its own outside top wall, which contained the electrical interfaces. This allowed the wall to be opened without having to remove the sockets, which were attached with screws and a female connector on the inside of the wall. The styrofoam shielding of The Brain had a hole on top to allow the temperature sensor wires to reach the inside of the AAC box.
A more detailed content of the components for each level is summarized in Appendix \ref{list-of-components-brain}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{4-experiment-design/img/Mechanical/Figure_24a.png}
\caption{Isometric View of Level 1.}
\label{level_1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{4-experiment-design/img/Mechanical/Figure_24b.png}
\caption{Isometric View of Level 2.}
\label{level_2}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/Figure_24c.png}
\caption{Isometric View of Level 3.}
\label{level_3}
\end{subfigure}
\caption{Distribution in Each Level.}
\label{fig:The-brain}
\end{figure}
This distribution allowed easy access to the PCB from the top and provided the desired physical separation between the electronics and the pneumatic circuit.
\smallskip
The structure of The Brain provided versatility in terms of implementation and construction. It was made out of strut profiles: four bars placed vertically and four bars placed horizontally. The railed bars allowed all the pieces to be fixed together, provided the anchor points for the lateral and top styrofoam shields, and allowed the whole unit to be fixed to the box structure bars.
\smallskip
The bulk dimensions of The Brain were 260 mm long, 150 mm wide and 290 mm high. If the shielding styrofoam walls were taken into account, the dimensions were 290 mm long, 180 mm wide and 300 mm high.
Therefore, accounting for the space the column bars took, each plate had a surface of 258 mm x 158 mm. The distance between levels varied depending on the component dimensions. Level 1 had a height of 7 cm, Level 2 had a height of 9 cm, and Level 3 had 8 cm up to the top styrofoam shielding. The Brain with its styrofoam shielding can be seen in Figure \ref{brain_isometric}.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{4-experiment-design/img/Mechanical/Figure_25_0.png}
\caption{Isometric View of The Brain.}
\label{brain_isometric}
\end{figure}
\smallskip
In order to accommodate the required electrical interfaces (E-Link, power supply and D-Sub connectors), as well as to allow the tubes of the sampling system to reach the outside environment, the outside-facing wall and the top wall were each divided in two pieces. This made the box walls easier to manipulate when they had to be opened, since the small pieces containing the interfaces and the tube holes remained attached. The bottom piece covered Levels 1 and 2, while the other, which contained the electrical connections, sat above Level 3. These pieces had the same layout as the main wall.
\bigskip
\underline{Shielding and anchor points}
\smallskip
The most critical components in terms of required thermal control were inside The Brain: the pump and the valves. In order to provide passive thermal shielding, 3 cm thick removable Styrofoam walls were placed on the three sides (top, lateral, and rear) facing the interior of the AAC box, shown in Figure \ref{brain_isometric}. The lateral wall was fixed by means of four bolts that penetrated into the styrofoam. The top wall was fixed to the rear wall and both were kept in place by means of a stopper. The larger lateral wall, where the tubes from the valves were, was divided in two pieces so that it could be removed without having to disconnect the tubes.
\smallskip
The Brain structure was integrated into the AAC box structure to provide the required stiffness to this element.
\pagebreak
\subsubsection{Pneumatic Subsystem}
\label{sec:4.4.5}
In order to be able to collect separate samples of air, a pneumatic subsystem was developed and implemented. Its schematics and components can be seen in Figure \ref{pneumatic_system}. The system was formed by almost $100$ components located inside The Brain and the AAC box.
In between these components, the same Sulfinert-treated stainless steel tubing as that used for the inlet/outlet pipes described in Section \ref{subsec:pipes} was chosen.
The schematic of the pneumatic system can be seen in Figure \ref{pneumatic_system}. The air was sucked from the outside through the inlet tube (No.1), the lower tube in Figure \ref{pneumatic_system_cad}, and went through the filter (No.2) and into the pump (No.9). From here, and changing to Level 2, it passed through the airflow sensor (No.15), which allowed the airflow rate to be monitored. Thereafter the air passed through the static pressure sensor (No.20) before reaching the six-station manifold (No.23). It was here that the air was directed to the desired bag (No.36) through its dedicated solenoid valve (No.30).
When flushing the pneumatic system before each sampling period, the flushing valve (No.27) was opened so that the outlet of the system was open and fresh air ran through the main part of the pneumatic system.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/Mechanical/Figure_26.png}
\caption{Pneumatic System Top View.}
\label{pneumatic_system_cad}
\end{figure}
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=1.45\textwidth,height=\textheight]{4-experiment-design/img/Mechanical/AAC_System.png}
\caption{AAC Pneumatic System Diagram and Components.}
\label{pneumatic_system}
\end{figure}
\end{landscape}
%\raggedbottom
\pagebreak
\subsection{Electrical Design}
\subsubsection{Block Diagram}
\label{sec:4.5.1}
The electronics design can be seen in Figure \ref{fig:electronics-block-diagram} which shows the connections, grounding, voltages, and signals.
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{4-experiment-design/img/Electrical_Block_diagram.png}
\caption{Block Diagram for all Electronic Components Showing the Signal and Power Connections.}\label{fig:electronics-block-diagram}
\end{figure}
Most of the electronics were located in The Brain inside the AAC box. However, there were six distinct areas:
\begin{enumerate}
\item The Brain Level 3, where the PCB was located with the Arduino and shield, two 24 V DC-DC converters, two 12 V DC-DC converters, 11 MOSFETs and 16 LEDs.
\item The Brain Level 2, where the valve manifold with six sampling valves, the flushing valve, two heaters, the airflow sensor, the static pressure sensor and one temperature sensor were located.
\item The Brain Level 1, where the pump, two heaters and one temperature sensor were located.
\item The AAC box, where 3 ambient temperature sensors were located.
\item The CAC box, where the CAC valve and 3 ambient temperature sensors were located.
\item Outside of the experiment box, where 3 ambient pressure sensors were located.
\end{enumerate}
From the PCB, on Level 3, five D-sub connectors were used to connect to the other five areas. Fifteen-pin connectors were used for Levels 1 and 2. For the CAC box, the AAC box sampling bags area, and the external pressure sensors, nine-pin connectors were used. In addition there was a connection to the gondola power and the gondola E-link.
All of the power distribution was done through the PCB using two 24 V DC-DC and two 12 V DC-DC converters in parallel with a forwarding diode.
\begin{itemize}
\item $28.8 \, V \Longrightarrow 24 \, V $ By DC-DC converters
\item $28.8 \, V \Longrightarrow 12 \, V$ By DC-DC converters
\end{itemize}
The heaters did not require the voltage to be stepped down and so were powered directly from the gondola battery.
The Arduino was used to control all of the sensors, valves, heaters and the pump from the PCB. Sensors were directly connected to the Arduino. The valves, heaters and the pump were connected via a switching circuit.
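As an illustration of the digital sensor interface, the DS1631 temperature sensors listed in the components table communicate over I2C and can be read with the Arduino Wire library. The sketch below is only a minimal, hedged example (not the flight software); it assumes the sensor address pins are tied low, giving the 7-bit address 0x48, and the default 12-bit resolution.
\begin{verbatim}
#include <Wire.h>

// Hypothetical wiring: DS1631 address pins A2..A0 tied low -> 7-bit address 0x48.
const uint8_t DS1631_ADDR = 0x48;

void startConversion() {
  Wire.beginTransmission(DS1631_ADDR);
  Wire.write(0x51);                       // Start Convert T command
  Wire.endTransmission();
}

float readTemperatureC() {
  Wire.beginTransmission(DS1631_ADDR);
  Wire.write(0xAA);                       // Read Temperature command
  Wire.endTransmission();
  Wire.requestFrom(DS1631_ADDR, (uint8_t)2);
  int16_t raw = (Wire.read() << 8) | Wire.read();  // MSB first, two's complement
  return raw / 256.0;                     // 16-bit word is S8.8 fixed point (degC)
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  startConversion();
}

void loop() {
  delay(750);                             // a 12-bit conversion takes up to ~750 ms
  Serial.println(readTemperatureC());
  startConversion();
}
\end{verbatim}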
The LEDs were used as visual indicators that displayed whether different parts of the circuit were active or not. They gave indications on the status of the valves, pump, heaters, DC-DC converters and Arduino.
Grounding followed a distributed single-point scheme, with all ground connections meeting at a single star point to ensure there were no floating grounds. As not all components were connected via DC-DC converters, the experiment was not isolated from the gondola power supply; therefore, there was a connection between the star point and the gondola ground. The star point was located on the main PCB, which was in turn grounded to the experiment box. The grounding can be seen in Figure \ref{fig:electronics-block-diagram}, where it is indicated by dashed lines labeled GND. The analog sensors used on Level 1 in The Brain used a separate grounding wire (AGND) routed to the main PCB, where a separate trace connected to the ground pins on the Arduino board. Furthermore, the upper and lower layers of the main PCB made use of the common ground plane where possible.
\subsubsection{Miniature Diaphragm Air Pump}
The pump which was selected was the 850.1.2 KNDC-B, Figure \ref{fig:pumppic}, manufactured by KNF. One of the reasons this pump was selected is that it was successfully flown on a similar flight in the past, where it managed to pump enough air at 25 km altitude to have 180 mL remaining at sea level \cite{LISA}. However, to ensure the pump would operate as intended, several tests were carried out. These tests (4, 5, 18, 28 and 29) can be seen in Tables \ref{tab:vacuum-test}, \ref{tab:thermal-test}, \ref{tab:pump-low-pressure-test}, \ref{tab:pump-operation-test}, and \ref{tab:pump-current-pressure-test}.
At sea level conditions the pump was tested and found to have a flow rate of 8.0 L/min and a current draw of 250 mA. The peak current draw was recorded as 600 mA, which lasted for less than one second and occurred when the pump was switched on.
From the results of Test 18, in Section \ref{sec:ExpecterResults}, the flow rate was shown to be around 3.36 L/min at the lowest pressure that would be seen in flight. This was in line with requirement D23. The results found in Test 28, in Section \ref{sec:test28result}, appeared to be in line with the information given by the manufacturer, seen in Figure \ref{fig:pumpflowcur}. The highest continuous current draw expected from the pump was 185 mA when the experiment was at 12 km altitude, and it was expected to decrease with increasing altitude. While the pump's current draw appeared to increase at around 6 km, there was no plan to sample below 12 km; therefore the highest current draw was taken from 12 km. As the pump had a peak current of 600 mA when switched on, the MOSFET and the DC-DC converter were chosen to be able to withstand this demand.
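As a rough, hedged sanity check on the sampling timing (the exact flow rate during flight depends on the ambient pressure), the worst-case rate measured in Test 18 would fill one $3\ L$ bag in about
\begin{equation*}
t_{fill} \approx \frac{V_{bag}}{\dot{V}} = \frac{3\ L}{3.36\ L/min} \approx 0.9\ min,
\end{equation*}
i.e. somewhat under a minute of pumping per bag, assuming the quoted flow rate is volumetric at the sampling altitude.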
\begin{figure}[H]
\begin{align*}
\includegraphics[width=6cm]{4-experiment-design/img/pump-850-1-2-kndc-b.png}
\end{align*}
\caption{KNF 850.1.2. KNDC B Miniature Diaphragm Pump.}\label{fig:pumppic}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=15cm]{4-experiment-design/img/pump-flow-rate-current-graph.png}
\end{align*}
\caption{KNF 850.1.2. KNDC B Flow Rate and Current Draw to Pressure Graph.}\label{fig:pumpflowcur}
\end{figure}
\subsubsection{Electromagnetically Controlled Valves}
Filling the sampling bags was controlled by solenoid valves. The solenoid valves selected were model VDW23-5G-1-H-Q, seen in Figure \ref{fig:valve}, manufactured by SMC. These valves were normally closed throughout the experiment, with zero power consumption, and opened, when given power, to fill up the sampling bags at specific altitudes. In addition, one valve was placed on the CAC, in order to seal the coil at the end of the flight, and another, the flushing valve, at the end of the AAC tubing, in order to flush the system. The valves selected for these were model VDW22UANXB, Figure \ref{fig:valve}. The CAC valve was opened shortly after take off and remained open for the whole flight; it was closed shortly before landing. The flushing valve was opened before sampling in order to ensure the air in the tubes was from the correct altitude.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=6cm]{4-experiment-design/img/valves.png}
\end{align*}
\caption{SMC Solenoid Valves, VDW22UANXB on the Left, VDW23-5G-1-H-Q on the Right.}
\label{fig:valve}
\end{figure}
The port size of the valves was 1/8", which is compatible with the gas analyzer. The coil inside can withstand temperatures from -20 to 110 $\degree{C}$, which was suitable for flight operations at high altitudes. These valves can operate under a maximum pressure drop of 133 Pa. Valves from the same series were flown to the stratosphere before and provided successful results \cite{LISA}; however, the valves were still tested at low temperature and pressure to check that they operated as intended. The test results can be seen in Test 4, Table \ref{tab:vacuum-test}, and Test 5, Table \ref{tab:thermal-test}.
\subsubsection{Switching Circuits}
The valves, pump, and heaters were not powered by the Arduino, but they were still controlled by it. In order to allow this control, a connection was made from each component to the Arduino through a switching circuit. This switching circuit used eleven MOSFETs, model IRLB8748PBF, Figure \ref{fig:mosfet}, to control which components were turned on at which time.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=10cm]{4-experiment-design/img/mosfet.png}
\end{align*}
\caption{Figure Showing an Image of the 30V,78A,75W MOSFET, Model Number IRLB8748PBF on the Left and the Schematic for the Switching Circuit for One Heater on the Right.}\label{fig:mosfet}
\end{figure}
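To make the switching arrangement concrete, the sketch below shows in simplified Arduino-style C++ how a MOSFET gate can be driven from a GPIO pin to switch one of the 24 V loads on and off; the pin numbers and helper names are illustrative placeholders and not the mapping used on the actual PCB.
\begin{verbatim}
// Minimal sketch of GPIO-driven MOSFET switching (pin numbers are placeholders).
const int PUMP_GATE_PIN  = 22;   // gate of the MOSFET switching the pump
const int VALVE_GATE_PIN = 23;   // gate of the MOSFET switching one solenoid valve

void setup() {
  pinMode(PUMP_GATE_PIN, OUTPUT);
  pinMode(VALVE_GATE_PIN, OUTPUT);
  digitalWrite(PUMP_GATE_PIN, LOW);   // loads off by default
  digitalWrite(VALVE_GATE_PIN, LOW);
}

void setPump(bool on)  { digitalWrite(PUMP_GATE_PIN,  on ? HIGH : LOW); }
void setValve(bool on) { digitalWrite(VALVE_GATE_PIN, on ? HIGH : LOW); }

void loop() {
  // Purely illustrative sequence: open a valve, run the pump, then close both.
  setValve(true);
  setPump(true);
  delay(1000);
  setPump(false);
  setValve(false);
  delay(1000);
}
\end{verbatim}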
\subsubsection{Schematic}
The schematics show all the components and how they are connected; the full schematics can be seen in Figure \ref{fig:Schematic}. There are four requirements for the power distribution, given below:
\begin{itemize}
\item $28.8 \, V$ for the heaters.
\item $28.8 \, V \Longrightarrow 24 \, V$ for the pump and valves.
\item $28.8 \, V \Longrightarrow 12 \, V$ for the airflow sensor, static pressure sensor and Arduino due.
\item $3.3 \, V$ for the temperature and pressure sensors.
\end{itemize}
The voltage available from gondola power is 28.8 V, therefore the heaters were connected directly to the main power supply. For the rest of the components, two 24 V and two 12 V DC-DC converters in parallel were used, so that if one of them failed the other could take over. The circuitry can be seen in Figure \ref{fig:dc-dc-redun}. All the valves and the pump were then powered through the 24 V DC-DCs. To step down the voltage from 28.8 V to 12 V to power the airflow sensor, static pressure sensor, and the Arduino, two 12 V DC-DCs in parallel were used for redundancy purposes. Finally, to power the temperature and external pressure sensors, 3.3 V was required, which was supplied by the Arduino board.
To meet the requirements of the pneumatic subsystem, a static pressure sensor was chosen to measure the pressure inside the tubes and bags. This analogue pressure sensor operated on 12 V, so it could share the same power line as the airflow sensor and Arduino.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=16cm]{4-experiment-design/img/Schematics.png}
\end{align*}
\caption{Schematic for All of the Electronics on Board TUBULAR. This can also be Found at https://rexusbexus.github.io/tubular/img/electrical-design-schematics.png}\label{fig:Schematic}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=13cm]{4-experiment-design/img/DCDC-converter-redundancy.png}
\end{align*}
\caption{Schematic Showing the DC-DC Redundancy of Both 24 V and the 12 V DC-DC Converters.}\label{fig:dc-dc-redun}
\end{figure}
\subsubsection{PCB Layout}
All electronic control circuits were gathered on a single PCB on level 3 of the Brain. The PCB contained the Arduino Due, switching circuits, indication LEDs, a temperature sensor, the power system, and all necessary connectors. The connectors were divided so that each connector's wires went to the same level of the Brain to improve cable management. Due to the relocation of components there were some components on level 2, the static pressure sensor and the airflow sensor, which were connected to the level 1 connector. This did not produce major problems, since both of those components had separate connectors on the cable going down to level 1 and did not share any connections with components on level 1. Thus each level could still be unplugged separately, since the wires to these two components could simply be broken off from the cable loom at the appropriate point. To further improve cable management, the shared pins for I2C and SPI were connected to a single pin on each respective D-SUB connector and split up on the respective level. The PCB's component layout can be seen in Figure \ref{fig:PCB-Components-Layout}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{4-experiment-design/img/PCBLayout.png}
\caption{PCB Components Layout}
\label{fig:PCB-Components-Layout}
\end{figure}
The PCB was made using the Eagle software and its manufacturing was fully sponsored by Eurocircuits. The traces had a width designed to meet the IPC-2221 standard \cite{IPC-2221B}, with extra width added. The PCB layout with traces can be seen in Appendix \ref{sec:pcbSchematics}. On the main PCB the traces were 1.4 mm wide for the nets containing components that consume higher amounts of current, and the ones with lower current requirements had a trace width of 0.3 mm. On the pressure sensor PCB all traces were 0.5 mm wide.
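For orientation, the trace sizing can be related to the commonly quoted IPC-2221 fit for external traces; the 2 A current and $10\degree{C}$ temperature rise below are illustrative values chosen only for this example, not the figures used for the actual layout:
\begin{align*}
I = k\,\Delta T^{0.44} A^{0.725}, \quad k \approx 0.048 \;\Rightarrow\; A = \left(\frac{I}{k\,\Delta T^{0.44}}\right)^{1/0.725} \approx \left(\frac{2}{0.048\cdot 10^{0.44}}\right)^{1.38} \approx 42\ \mathrm{mil}^2,
\end{align*}
which for 1 oz (roughly 35 $\mu$m) copper corresponds to a width of about 0.8 mm, so the 1.4 mm used on the high-current nets is consistent with extra width being added on top of the standard's minimum.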
\raggedbottom
\pagebreak
\subsection{Thermal Design} \label{Thermal_section}
\subsubsection{Thermal Environment}
\begin{centering}
The experiment experienced a wide temperature range during the flight and it was able to continue operating despite these changes due to the incorporated thermal design. As seen in Figure \ref{fig:temperature-profile}, the coldest point of the flight was between 10 km and 15 km, where the air temperature can drop to $-70\degree{C}$ outside. During the flight, the coldest recorded temperature on the gondola was $-54\degree{C}$, during the Ascent and Descent Phases.
In addition, launching from Kiruna in late October meant the temperature on the ground could have been as low as $-10\degree{C}$, but the temperature at the time of launch ended up being around $0\degree{C}$.

As the component with the warmest lower operating temperature limit had to be kept at a minimum of $5\degree{C}$ (E3 in Table \ref{tab:thermal-table}), the heaters had to be switched on while the experiment was still on the ground.
\end{centering}
\begin{figure}[H]
\begin{align*}
\includegraphics[height=8cm]{4-experiment-design/img/temperature-profile.png}
\end{align*}
\caption{Diagram Showing the Temperature Profile of the Atmosphere \cite{jacob}.}\label{fig:temperature-profile}
\end{figure}
\subsubsection{The Critical Stages}
The flight had the following critical stages:
\begin{itemize}
\item Launch pad
\item Early ascent
\item Sampling ascent
\item Float
\item Descent before sampling
\item Sampling descent
\item Shut down
\item Landed, waiting for recovery
\end{itemize}
These stages were accounted for in further calculations and simulations.
\subsubsection{Overall Design}
To protect the components against the cold, a thermal control system was designed. Insulation and internal heating both came into play in keeping all the components functional throughout the duration of the flight. The two components with the most critical thermal ranges were the pump and the valve manifold system (E3 and E5 in Table \ref{tab:thermal-table}). The thermal regulation elements were designed with the main focus on the AAC; however, a thermal analysis of the CAC can be found in Appendix \ref{sec:appI} under Section \ref{sssec:CAC-trial-flight}, where in the CAC box the valve was identified as the critical component in terms of thermal regulation (refer to component E5 in Table \ref{tab:thermal-table}). It had a current through it throughout the flight, therefore heating itself up.
% New for the SED v2
The main protection against the cold environment in the stratosphere was a passive thermal design by means of insulating layers added to the walls of the experiment. It was comprised of two layers: one outer sheet of aluminum and a thicker sheet of Styrofoam. The main insulating factor was the Styrofoam, which significantly reduced the heat exchange between the otherwise exposed experiment box and the environment, and also provided shock absorption when the gondola landed after separating from the balloon.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{4-experiment-design/img/Thermal/higlighted-heater-pump.JPG}
\caption{Highlight of the Heater On the Pump.}
\label{fig:highlight-heater-on-pump}
\end{figure}
An active thermal control system consisting of four heaters was also implemented. Two heaters were placed on the pump, as seen in Figure \ref{fig:highlight-heater-on-pump}, a single heater was placed on the flushing valve, and one heater was placed on the manifold. To control these heaters, two temperature sensors were also on board, one attached to the pump and the other attached to the manifold. If the reading from one of the temperature sensors was lower than the predefined threshold, the corresponding heater turned on and warmed the component. If it was above the higher threshold, the heater turned off.
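A minimal sketch of this two-threshold (hysteresis) on/off control is given below in Arduino-style C++; the thresholds, pin number, and the temperature-reading helper are placeholders and do not reflect the actual flight code.
\begin{verbatim}
// Simplified hysteresis heater control; all values are placeholders.
const int   PUMP_HEATER_PIN = 24;    // MOSFET gate for the pump heaters
const float T_ON_C          = 7.0;   // turn heaters on below this temperature
const float T_OFF_C         = 12.0;  // turn heaters off above this temperature

bool heaterOn = false;

float readPumpTemperature() {
  // Placeholder: in the real system this value comes from the
  // temperature sensor mounted on the pump.
  return 8.5;
}

void setup() {
  pinMode(PUMP_HEATER_PIN, OUTPUT);
  digitalWrite(PUMP_HEATER_PIN, LOW);
}

void loop() {
  float t = readPumpTemperature();
  if (!heaterOn && t < T_ON_C) {
    heaterOn = true;                       // too cold: switch heater on
  } else if (heaterOn && t > T_OFF_C) {
    heaterOn = false;                      // warm enough: switch heater off
  }
  digitalWrite(PUMP_HEATER_PIN, heaterOn ? HIGH : LOW);
  delay(1000);                             // check roughly once per second
}
\end{verbatim}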
Simulations in MATLAB (code can be found in Appendix \ref{sec:appJ}) were used to determine the average uniform heat inside the experiment. The ANSYS thermal modelling platform was used to simulate the thermal conditions inside the Brain.
Table \ref{tab:thermal-table-4}, below, covers the thermal ranges of the components crucial to the experiment flight from those listed in Section \ref{sec:experiment-components}:
\begin{longtable}{|m{1cm}|m{3.5cm}|m{1.3cm}|m{1.3cm}|m{1.4cm}|m{1.3cm}|m{1.3cm}|m{1.3cm}|}
\hline
\multirow{2}{*}{\textbf{ID}} & \multirow{2}{*}{\textbf{Components}} & \multicolumn{2}{l|}{\textbf{Operating (°C)}} & \multicolumn{2}{l|}{\textbf{Survivable (°C)}} & \multicolumn{2}{l|}{\textbf{Expected (°C)}} \\ \cline{3-8} & & Min. & Max. & Min. & Max. & Min. & Max. \\ \hline
E1 & Arduino Due & -40 & 85 & -60 & 150 & -15.7 & 54.0 \\ \hline
E2 & Ethernet Shield & -40 & 85 & -65 & 150 & -15.7 & 54.0 \\ \hline
E3 & Miniature diaphragm air pump & 5 & 40 & -10 & 40 & 10 & 34.9 \\ \hline
E4 & Pressure Sensor & -40 & 85 & -40 & 125 & -15.7 & 54.0 \\ \hline
E5 & Sampling Valve (inlet and outlet 1/8"" female) & -20 & 68 & -20\footnote{If survivable temperatures were not given, operating temperatures were used as survivable limits.\label{fn:erik}} & 68\textsuperscript{\ref{fn:erik}} & -15 & 20 \\ \hline
E6 & Airflow sensor AWM43300V & -20 & 70 & -20\textsuperscript{\ref{fn:erik}} & 70\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
E7 & Heater ($12.7\times 50.8 mm$) & -200 & 200 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -20 & 36 \\ \hline
%E8 & Voltage Regulator & -40 & 125 & -40\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -30.62 & 34.93 \\ \hline
E9 & Temperature Sensor & -55 & 125 & -65 & 150 & -19.7 & 43 \\ \hline
E10 & DCDC 24 V & -40 & 85 & -55 & 125 & -15.7 & 54.0 \\ \hline
E12 & Micro SD & -25 & 85 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E13 & Logic CAT5E & -55 & 60 & -55\textsuperscript{\ref{fn:erik}} & 60\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
% E14 & Resistors (33, 150 and 100 ohm) & -55 & 155 & (-55)\textsuperscript{\ref{fn:erik}} & (155)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
% E15 & Capacitors $(0.1 \mu$ F and $10 \mu$ F) & -30 & 85 & (-200)\textsuperscript{\ref{fn:erik}} & (200)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
E16 & MOSFET for current control & -55 & 175 & -55 & 175 & -15.7 & 54.0 \\ \hline
E17 & Diodes for DCDC converters & -65 & 175 & -65\textsuperscript{\ref{fn:erik}} & 175\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E18 & 3.3V LED & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E19 & 15-pin D-SUB Female connector with pins & -55 & 120 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E20 & 9-pin D-SUB Female connector with pins & -55 & 120 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E21 & 9-pin D-SUB Female connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E22 & 9-pin D-SUB Male connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E23 & 15-pin D-SUB Male connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E24 & 9-pin D-SUB backing & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120 & -15.7 & 54.0 \\ \hline
%E25 & 15-pin D-SUB backing & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120 & -15.7 & 54.0 \\ \hline
% E26 & Wall mounting bolts & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
%E27 & D-SUB cable CAC to AAC & -40 & 85 & -55 & 125 & -40 & 40 \\ \hline
E28 & 3.3 Zener diode & -65 & 175 & -65\textsuperscript{\ref{fn:erik}} & 175\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E29 & Male connector on PCB & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85 & -15.7 & 54.0 \\ \hline
%E30 & Female connector from wall & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85 & - & - \\ \hline
%E31 & Grounding contact & -55 & 125 & -55\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & - & - \\ \hline
E32 & Logic CAT5 E-link for inside box &-55 & 60 & -55\textsuperscript{\ref{fn:erik}} & 60\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
%E33 & Signal Wires & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & - & - \\ \hline
E34 & Flushing valve (inlet and outlet 1/8"" female) & -20 & 68 & -20\textsuperscript{\ref{fn:erik}} & 68 & -7.4 & 25.8 \\ \hline
E35 & Valves manifold (outlet 1/8"" female) & -10 & 50 & -10\textsuperscript{\ref{fn:erik}} & 50\textsuperscript{\ref{fn:erik}} & 3 & 18 \\ \hline
%E36 & Power wire black & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & - & - \\ \hline
% E44 & Heat shrinking tube 2.5 x 1mm & -55 & 125 & (-55)\textsuperscript{\ref{fn:erik}} & (125)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
%E45 & 25-pin D-SUB female connector with pins & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
%E46 & 25-pin D-SUB male connector with soldering cups & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
%E47 & 25-pin D-SUB backing & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
%E48 & Power wire red & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & - & - \\ \hline
% E49 & Potentiometer 1k ohm & -55 & 125 & (-55)\textsuperscript{\ref{fn:erik}} & (120)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
%E50 & 6-pin male & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
%E51 & 8-pin male single row header& -40 & 105 & -40\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
%E52 & 10-pin male single row header & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
%E53 & 36-pin male double row header & -40 & 105 & -40 & 125 & -8.8 & 24.0 \\ \hline
%E54 & 12 V DC/DC converter & -40 & 85 & -55 & 125 & -15.7 & 54.0 \\ \hline
%E55 & 50 k$\Omega$ Potentiometer & -55 & 125 & -55\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
%E56 & Static pressure sensor & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
%E57 & Connector for static pressure sensor & -25 & 80 & -25\textsuperscript{\ref{fn:erik}} & 80\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
E58 & PCB & -50 & 110 & -50\textsuperscript{\ref{fn:erik}} & 110\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E59 & Pressure Sensor PCB & -50 & 110 & -50\textsuperscript{\ref{fn:erik}} & 110\textsuperscript{\ref{fn:erik}} & -50 & 39 \\ \hline
\caption{Table of Component Temperature Ranges.}
\label{tab:thermal-table-4}
\end{longtable}
\raggedbottom
A complete table of component temperature ranges, which includes static entities (such as wires and connectors) can be found in Appendix \ref{sec:appI}.
\subsubsection{Internal Temperature}
An enclosed partition of the experiment model was reserved in the corner of the AAC section. This partition took the shape of a rectangular section and was to house all of the electronic components not required to be situated in specified locations throughout the experiment setting, such as the Arduino boards and some of the sensors.
The pump had the most critical temperature range as it was the only component in the experiment that could not operate below freezing temperatures. Failure of the pump meant failure of the entire AAC system. Its data sheet stated that it must always start above $5\degree{C}$, or the EPDM diaphragm may be too stiff to start. However, as this type of pump was used successfully on previous high altitude flights \cite{LISA}, tests were conducted on the pump to find its true performance at lower temperatures and in a low vacuum environment. The AAC valves were also crucial to the experiment's function, as they enabled each and every sampling bag on board to be used. For this reason, while the valves could operate down to $-20\degree{C}$, it was desirable to keep them above this limit whenever in use. The manifold valves in the Brain had a minimum operating temperature of only $-10\degree{C}$, but simulations proved they would be kept above $0\degree{C}$.
As the most temperature-sensitive equipment was all housed within the Brain, it was important to know what heat would be lost through the different heat transfer mechanisms, as this would affect the amount of time the heaters had to be active. This was addressed through calculations and simulations to find the required insulation. All calculations concerning heat transfer can be found in Appendix \ref{sec:appI}. As a worst-case scenario for heat distribution, it was assumed that \textit{all} of the power dissipated through resistance in the electrical components would reach the marked boundaries of the experiment's walls.
Aluminum sheeting was used as the outer layer of insulation for the experiment and Styrofoam was the inner layer. Aluminum may have among the highest of thermal conductivities, but its arrangement around the Styrofoam, creating one large heat bridge with the inner layer, provided a useful thermoregulatory mechanism \cite{EngTool}. The high ratio between the absorptivity (0.3) and emissivity (0.09) of the material was used to its advantage \cite{EngTool}. Because this ratio for polished aluminum is higher than 1.0, the element would get hotter as it was exposed to the radiation from the sun and the power-dissipating components \cite{RedRok}. The low emissivity coefficient of the aluminum cover meant it would not get significantly hotter than the surrounding ambient temperature, but its increased temperature may have negated some of the heat being lost from the experiment's interior, with some of the heat from the aluminum propagating back into the experiment, reducing the net heat loss by a small amount. As conservation of power was imperative, the heaters were used sparingly, and instead methods like the use of aluminum for shielding were employed as passive heating. The aluminum layer was 0.5 mm in thickness, while the Styrofoam layer beneath it spanned 20 to 30 mm in thickness. The Styrofoam, in contrast to the aluminum, had a low thermal conductivity even when compared to similar polymer structures \cite{EngTool}. The Styrofoam handled the bulk of the thermal resistance, keeping the experiment from losing the heat it had obtained prior to being moved to the launchpad. The aluminum came into play as the experiment rose into colder altitudes and encountered increased sun exposure. While the warmed aluminum had little impact on the experiment's heat loss, it also meant that the experiment's internal temperatures would be prevented from rising to the upper allowed operating limit, made possible by the aluminum's low absorptivity of sunlight.
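For orientation, the steady-state conductive loss through the Styrofoam layer scales as shown below; the wall area, temperature difference, and conductivity are purely illustrative assumptions (a 0.25 m$^2$ wall section, a 40 K difference, and a typical Styrofoam conductivity of about 0.033 W/(m\,K)), not the experiment's actual figures:
\begin{align*}
\dot{Q}_{\mathrm{cond}} = \frac{k A \,\Delta T}{d} \approx \frac{0.033\,\mathrm{W\,m^{-1}\,K^{-1}} \times 0.25\,\mathrm{m^2} \times 40\,\mathrm{K}}{0.02\,\mathrm{m}} \approx 17\,\mathrm{W},
\end{align*}
so doubling the Styrofoam thickness roughly halves the conductive loss, which is the main lever the passive design relies on.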
Another heat bridge that had to be considered was the fastening of the experiment to the gondola. The aluminum frame of the gondola would be colder than the experiment, and with normal screws there would be a lot of heat transfer. In this case rubber bumper screws, suggested at CDR, were used to fasten the experiment to the gondola, which reduced the heat transfer between the experiment and the gondola.
\subsubsection{Calculations and Simulation Reports}
\label{sec:4.6.5}
The temperature ranges could vary for the different stages, but the most critical moment was during the Ascent Phase.
According to the thermal analysis, the heaters would not be required during the Float and Descent Phases. During the flight, however, the heaters were still operating for some intervals during the Ascent and Float Phases. All simulation equations and their details can be seen in Appendix \ref{sec:appI}.
An estimate of the temperature in the Brain at the sampling times during the Ascent Phase is visualized in Figure \ref{fig:Air-in-brain-4-6}. The higher temperature was in the lower right corner, where the pump is located. A cooler area exists around the middle of the left edge, where no heaters are applied. The legend in the figure shows the temperature in Celsius.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{4-experiment-design/img/Thermal/air-sampling-with-box}
\caption{Cross Section of the Air in the Brain at the Time to Start Sample During Ascent.}
\label{fig:Air-in-brain-4-6}
\end{figure}
In Figure \ref{fig:test-flight-AAC-4-6} the average temperature of the pump with data from ANSYS is presented. One case was simulated with no air in the Brain and the other had air with the same density as at sea level. The region between the vertical dotted lines is when the experiment was above 15 km. At 4 h in the figure the experiment is launched. It can be seen that the pump should have an average temperature above $5\degree{C}$ during the flight.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{4-experiment-design/img/Thermal/pump-temperature-air-no-air.jpg}
\caption{Temperature of the Pump Over a Simulated Flight.}
\label{fig:test-flight-AAC-4-6}
\end{figure}
The two figures in Figure \ref{fig:Pump-Valve-ascent-sample-4-6} visualize the pump and the manifold at the time at which the AAC sampling begins during ascent.
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.46\linewidth]{4-experiment-design/img/Thermal/Pump-sampling-with-box}}
\hfill
\subfloat{\includegraphics[width=0.45\linewidth]{4-experiment-design/img/Thermal/valve-sampling-with-box}}
\caption{Pump and Manifold at Sampling Time During Descent With No Air in the Brain.}
\label{fig:Pump-Valve-ascent-sample-4-6}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.5\linewidth]{appendix/img/Thermal/flushing-valve-no-air-ascent.JPG}
\end{align*}
\caption{Flushing Valve Prior to Sampling Commencing With No Air in the Brain.}
\label{fig:flushing-valve-4-6}
\end{figure}
During the worst case simulation shown in Figure \ref{fig:Pump-Valve-ascent-sample-4-6}, the four heaters together consumed 26.66 Wh over the course of the simulated flight. Only the pump heaters might have required more time had it been colder outside; 80 Wh in the power budget table (Table \ref{tab:power-design-table}) was dedicated to thermal control. There was therefore margin to keep the pump heaters on for a longer time if needed.
Based on the calculations and thermal simulations, it was concluded that the passive and active thermal control mechanisms detailed in this section would keep the AAC's pump and manifold within their operating temperature ranges during the entire flight. It was also shown that the CAC had an adequate thermal design to operate throughout the whole flight.
Thermal testing (Test 5, Section \ref{Test-5}) showed that the heaters and the subsequent internal temperature responded as expected from the simulations and worked as required when heating the critical components. A full 4 h test at $-50\degree{C}$ was carried out after the temperature sensor issue had been resolved, and it concluded that the experiment would be able to operate, thermally, during the whole flight.
%\subsubsection{Passive Thermal Control}
%\label{sec:4.6.6}
% Description of the insulation applied (size, distribution, type of material, implementation, etc)
%It is in the appendix the thermal insulation applied - Erik
% How to deal with gaps?
% Heat sinks? (to spread high temperatures in hot spots)
%\subsubsection{Active Thermal Control}
%\label{sec:4.6.7}
% Description of the heaters performance (pictures, characteristics, locations, goal temperatures, when they work? for how long? etc.)
\pagebreak
\subsection{Power System}
\subsubsection{Power System Requirements}
\begin{centering}
The gondola provided a 28.8 V, 374 Wh (13 Ah) battery with a recommended maximum continuous current draw of 1.8 A. However, the more typical values given were 196 Wh or 7 Ah \cite{BexusManual}. The experiment should have been able to run on gondola battery power for more than two hours before launch, during the countdown phase, and for the entire flight duration, lasting approximately four hours. As a factor of safety, in case of unexpected delays, the experiment was able to run for an additional four hours. Therefore the experiment could run on gondola power for a total of 10 hours. For this reason, all the calculations were done using a 10 hour total time \cite{BexusManual}.
\end{centering}
\begin{longtable}{|m{0.05\textwidth}| m{0.3\textwidth} |m{0.14\textwidth} |m{0.16\textwidth}|m{0.13\textwidth}| m{0.14\textwidth} |}
\hline
\textbf{ID} & \textbf{Component} & \textbf{Voltage {[}V{]}} & \textbf{Current {[}mA{]}} & \textbf{Power {[}W{]}} & \textbf{Total {[}Wh{]}} \\ \hline
E1 & Arduino Due & 12 & 30 & 0.36 & 4 \\ \hline
E3 & Miniature Diaphragm air Pump & 24 & 200 & 7.68 & 7.68 \\ \hline
E4 & Pressure Sensor & 3.3 & 1.4 & 0.032 & 0.32 \\ \hline
E5 & Solenoid Valves & 24 & 125 & 24 & 39 \\ \hline
E56 & Static Pressure Sensor & 12 & 8 & 0.1 & 1 \\ \hline
E6 & Airflow Sensor & 12 & 8.3 & 0.1 & 1 \\ \hline
E7 & Heaters & 28 & 180 & 21 & 84 \\ \hline
E54 & 12 V DC-DC converter & 28.8 & 8 (1670 output) & 0.1 (20 output) & 1 \\ \hline
E9 & Temperature sensor & 3.3 & 0.28 & 0.011 & 0.11 \\ \hline
E10 & 24 V DC-DC converter & 28.8 & 37 (2500 output) & 2 (60 output) & 11.69 \\ \hline
\multicolumn{1}{|c|}{-} & \textbf{Total} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{1100} & \multicolumn{1}{c|}{38} & 181 \\ \hline
\multicolumn{1}{|c|}{-} & \textbf{Available from gondola} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & 374 \\ \hline
\caption{Power Design Table.}
\label{tab:power-design-table}
\end{longtable}
\raggedbottom
% $16.5\times10^{blah}$
The total power consumption of 181 Wh, Table \ref{tab:power-design-table}, was within the limits of the available power. The calculated average, peak, and minimum power values were 24 W, 38 W, and 16 W respectively. In addition, the expected average, peak, and minimum current consumption was 0.64 A, 1.1 A, and 0.22 A respectively.
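As a consistency check of the budget table, the heater entry can be reproduced from the listed voltage and current if the 180 mA is read as the draw of a single heater; the four-hour heater duty assumption below is mine and only illustrates how the energy column follows from the power column:
\begin{align*}
P_{\mathrm{heaters}} \approx 4 \times 28\,\mathrm{V} \times 0.18\,\mathrm{A} \approx 20\,\mathrm{W}, \qquad E_{\mathrm{heaters}} \approx 20\,\mathrm{W} \times 4\,\mathrm{h} \approx 80\,\mathrm{Wh},
\end{align*}
which is in line with the 21 W and 84 Wh budgeted for E7 in Table \ref{tab:power-design-table}.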
The 24 V DC-DC converters had a 2.5 A output current and 60 W output power with an efficiency of 93\%. This fulfilled the peak requirements for both power and current. Moreover, the dissipation across the DC-DCs was calculated as 12.69 Wh and 45 mA respectively, and was added to the total power budget.
\raggedbottom
\pagebreak
\subsection{Software Design}
\subsubsection{Purpose}
The purpose of the software was to automate control of the valves so that they would be opened and closed at the target altitudes. Moreover, the software stored housekeeping data from the sensors, pump, and valve states to the on-board memory storage device. Logging sensor data was necessary in order to determine a vertical profile of the analyzed samples:
\begin{quote}
In order to determine the vertical profiles of CO$_2$, CH$_4$, and CO from the analysis of sampled air, measurements of several atmospheric parameters are needed [...]. The two most important parameters are the ambient pressure and the mean coil temperature. These parameters will be recorded by the AirCore-HR (High Resolution) electronic data package. Mean coil temperature is obtained by taking the mean of three temperatures recorded by independent probes located at different positions along the AirCore-HR.\cite{Membrive}
\end{quote}
Both the ambient pressure and the sampling container temperature were also essential for the AAC sampling bags. The temperature data was collected by the sensors near the sampling bags.
The software also transmitted data to the ground so that the team could monitor the conditions of the experiment in real time. Telecommand was also needed to override the pre-programmed sampling schedule in case of automation failure, or to mitigate unexpected changes in the flight path and reached altitudes. It was also used to test the system, especially the valves and heaters.\par
\subsubsection{Design} \label{sec:4.8.2}
\begin{enumerate}[label=(\alph*)]
\item{Process Overview}\\
The software which ran on the Arduino read from the sensors through the analog, I2C, and SPI interfaces. The sensors provided temperature, pressure, and airflow data. The acquired data was time-stamped, stored on the on-board SD card, and transmitted via the E-Link System to the ground station. Then, according to the pressure/altitude, the software controlled the valves which allowed the air to be pumped inside the bags. Figure \ref{processOverview} visually explains the process flow.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{4-experiment-design/img/Process-overview-V0-3.png}
\caption{The Process Overview of the Experiment.}
\label{processOverview}
\end{figure}
\item{General and Safety related concepts}\\
The watchdog timer, which was an electronic countdown timer that causes an interrupt when it reaches 0, was used to avoid failure because of a possible freezing problem in the software. During normal operations, the software tasks set flags when done with their work. When all the flags had been set, the watchdog got reset. If any task failed to set its flag before the watchdog elapsed, the system reset. Telecommand was also used as a backup in case the automation failed or otherwise became unresponsive. Telemetry was utilized to transmit housekeeping data and the state of the valves to get confirmation of operation. Rigorous testing was performed during the development of the project and before the launch phase to ensure that the software was capable of controlling the experiment.
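A minimal sketch of this flag-based watchdog feeding is shown below in Arduino-style C++; the task and flag names are placeholders, and feedWatchdog() stands in for whatever board-specific watchdog reset call the flight software actually used.
\begin{verbatim}
// Simplified flag-based watchdog feeding; names and structure are placeholders.
enum TaskFlag { FLAG_SENSORS = 1 << 0, FLAG_TELEMETRY = 1 << 1, FLAG_SAMPLING = 1 << 2 };
const unsigned ALL_FLAGS = FLAG_SENSORS | FLAG_TELEMETRY | FLAG_SAMPLING;

volatile unsigned taskFlags = 0;

void feedWatchdog() { /* placeholder for the hardware watchdog reset */ }

void sensorsTask()   { /* ... read sensors ... */          taskFlags |= FLAG_SENSORS; }
void telemetryTask() { /* ... transmit housekeeping ... */ taskFlags |= FLAG_TELEMETRY; }
void samplingTask()  { /* ... control valves/pump ... */   taskFlags |= FLAG_SAMPLING; }

void setup() { /* enable the hardware watchdog here */ }

void loop() {
  sensorsTask();
  telemetryTask();
  samplingTask();
  if (taskFlags == ALL_FLAGS) {   // every task reported in: safe to feed the watchdog
    feedWatchdog();
    taskFlags = 0;
  }
  // If any task hangs, its flag is never set, the watchdog times out
  // and the board resets.
}
\end{verbatim}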
\item{Interfaces}\\
Table \ref{tab:comIntpro} shows how the components interacted with the onboard computer (OBC). Components that used SPI shared the MISO, MOSI, and CLK pins on the Arduino board. Each of them was also connected to a general purpose input output (GPIO) pin for slave select. Furthermore, components using the I2C protocol shared the Serial Data pin (SDA) and Serial Clock pin (SCL).
\begin{table}[H]
\centering
\begin{tabular}{lll}
\textbf{Components interacting} & \textbf{Communication protocol} & \textbf{Interface} \\ \hline
Pressure sensors-OBC & SPI & Arduino SPI and Digital Pins \\
Temperature sensors-OBC & I2C & Arduino I2C \\
Airflow sensor-OBC & Analog & Arduino analog pin \\
Heaters-OBC & Digital & GPIO pins \\
Air pump-OBC & Digital & GPIO pins \\
Valve-OBC & Digital & GPIO pins \\
OBC-microSD Storage & SPI & Arduino Ethernet shield \\
OBC - E-Link & Ethernet & Ethernet port
\end{tabular}%Tabular dude
\caption{Communication and Interface Protocols.}
\label{tab:comIntpro}
\end{table}
Every transmission to/from the ground utilized the E-Link connection. The data packet used was an Ethernet packet with a header containing the destination address, followed by the data, and at the end a frame check sequence (FCS). The up-linked data packet had the same structure, with a header followed by commands and ended with an FCS.\\
\\
The protocols chosen were UDP for telemetry and TCP for telecommand. UDP was used to prevent the software from getting stuck waiting for a handshake from the ground if the connection was temporarily lost (a minimal sketch of the downlink side is given after the service lists below).\\
\\
The telecommand contained the following services:
\begin{itemize}
\item Changing instrument modes
\item Manually control valves, pump, and heaters
\item Change sampling schedule
\end{itemize}
Furthermore, telemetry contained the services below:
\begin{itemize}
\item Data from temperature, pressure and airflow sensor
\item Current instrument modes
\item Instrument housekeeping data (valve, pump, and heater states)
\end{itemize}
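As an illustration of the downlink side described above, the sketch below sends one UDP telemetry packet twice per second using the standard Arduino Ethernet library; the MAC/IP addresses, port numbers, and packet layout are placeholders and not the values used in the flight configuration.
\begin{verbatim}
#include <SPI.h>
#include <Ethernet.h>
#include <EthernetUdp.h>
#include <stdio.h>
#include <string.h>

// All addresses and ports below are illustrative placeholders.
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress localIp(192, 168, 1, 177);
IPAddress groundIp(192, 168, 1, 10);
const unsigned int localPort  = 8888;
const unsigned int groundPort = 9000;

EthernetUDP udp;

void setup() {
  Ethernet.begin(mac, localIp);
  udp.begin(localPort);
}

void loop() {
  char packet[64];
  // Placeholder housekeeping values; the real packet carries sensor and valve states.
  snprintf(packet, sizeof(packet), "T,%lu,%d,%d", millis(), 231 /*temp*/, 1 /*valve*/);

  udp.beginPacket(groundIp, groundPort);
  udp.write((const uint8_t*)packet, strlen(packet));
  udp.endPacket();

  delay(500);   // roughly two packets per second
}
\end{verbatim}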
\item{Data Acquisition and Storage}\\
Data was stored on the SD memory card on the Arduino Ethernet Shield using the FAT16 and FAT32 file systems. To minimize data loss in the event of a reset, each file was only written to for a set amount of time before being closed and a new file opened. It was estimated that, for the entire flight, all the sensors would produce less than $5$ MB of data. The sampling rate was fixed at 1 sample per second.\\
\\
The data was collected and presented as a matrix, where the first column was the time frame and the following columns were the sensor data. After the sensor data, there was also housekeeping data that kept track of the valve and heater states. However, the size of the housekeeping data was not expected to surpass 20 bits per sample.\\
\\
Data was continuously down-linked two times per second and the total telemetry size was less than $4$ MB for 10 hours of flight. The telecommand size, on the other hand, varied based on how many subcommands were sent each time. If all of the subcommands were enabled, the total size was 128 bytes. Considering the telecommand was not sent more than once per second, the telecommand data rate was 126 bytes/sec.
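The sketch below illustrates, with the standard Arduino SD library, the periodic file rotation described above; the file names, chip-select pin, rotation interval, and logged values are placeholders.
\begin{verbatim}
#include <SPI.h>
#include <SD.h>
#include <stdio.h>

// Placeholder values; the real chip-select pin and intervals differ.
const int chipSelect = 4;
const unsigned long ROTATE_MS = 60000UL;   // start a new file every minute

unsigned long fileStartedAt = 0;
int fileIndex = 0;
File logFile;

void openNextFile() {
  if (logFile) logFile.close();
  char name[13];                            // 8.3 file name for FAT16/FAT32
  snprintf(name, sizeof(name), "LOG%03d.CSV", fileIndex++);
  logFile = SD.open(name, FILE_WRITE);
  fileStartedAt = millis();
}

void setup() {
  SD.begin(chipSelect);
  openNextFile();
}

void loop() {
  // Placeholder sensor/housekeeping values; one row per second as in the design.
  if (logFile) {
    logFile.print(millis()); logFile.print(",");
    logFile.print(23.1);     logFile.print(",");   // e.g. temperature
    logFile.println(1013);                         // e.g. pressure
    logFile.flush();
  }
  if (millis() - fileStartedAt > ROTATE_MS) openNextFile();
  delay(1000);
}
\end{verbatim}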
\item{Process Flow}\\
The process flow can be explained with the mode diagram in Figure \ref{fig:modediag}. The software started in Standby Mode, in which it acquired samples from all sensors. The on-board memory card contained the default sampling schedule parameters (when the sampling would start and stop), which were read by the software during initialization of the OBS. This allowed users to change the sampling schedule without changing the internal code. When the software registered a negative increment in pressure, it changed to Normal - Ascent mode, where it triggered the emptying of the CAC's coiled tube by opening the valves. Then, at certain altitudes, air sampling was conducted during the Ascent Phase. During the Float Phase, no sampling was conducted. The software went to Normal - Descent mode when it detected that the increment in pressure was considerably big, at which point it sampled the air by opening the valves for each bag at their designated altitudes. Considering that the gondola might not have a smooth ascent/descent, the mode changes only happened if the changes exceeded a certain threshold. After analysis and testing, $\SI{-20}{h\pascal}$ and $\SI{20}{h\pascal}$ were considered as the thresholds. The experiment went to SAFE mode approximately \SI{1200}{\meter} before landing, which triggered all the valves to be closed. Manual mode was entered with a telecommand and left with another one. If no telecommand was received by the OBC within a certain amount of time, it left manual mode and entered standby mode.
\begin{figure}[H]
\begin{align*}
\includegraphics[scale=0.55]{4-experiment-design/img/state-diagram-V1-3.png}
\end{align*}
\caption{Process Diagram for the Modes.}\label{fig:modediag}
\end{figure}
In the sampling algorithm, it was necessary to keep track of the time because a bag could not be filled completely (it might burst). A simple library was used to keep track of the time from the start of the experiment.\par
\item{Modularization and Pseudo Code}\\
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{4-experiment-design/img/sw_design_v1-8.png}
\end{align*}
\caption{Onboard Software Design Tree.}\label{fig:obtree}
\end{figure}
The software design was produced using an object-oriented approach. The functionality of the experiment was divided into several objects and their children. The design tree is shown in Figure \ref{fig:obtree}.\\
\\
The Telemetry object was responsible for formatting the sensor/housekeeping data and transmitting it. MODE was responsible for controlling the five modes of the software. INIT initialized the necessary software. COMMANDS read the telecommands and executed them. The AIR SAMPLING CONTROL object had four children objects. The first child was responsible for controlling the pump. The second child contained the parameters for the valves and pump. The third child read the data from the sensors, and the fourth child was responsible for manipulating the valves.\\
\\
The SENSOR object had two children objects: one for sampling the sensors and another for recording and storing the housekeeping data. The HEATER object had three children objects: one for reading the temperature sensor data, another for deciding if the heaters should be turned on or off, and a third for turning them on or off.\\
\\
The MONITOR object utilized a watchdog timer that caused an interrupt when it reached 0. The watchdog was not fed directly by the end of the different tasks. Instead, each task set a flag; if all the flags were set, the watchdog got reset and the countdown started from the beginning. If the watchdog timed out before all the flags were set, the monitor object reset the board. A sketch of this flag mechanism is also given after this list.\\
\\
Each of the objects interacted with the others under mutual exclusion, meaning that any shared variable could only be accessed by one object at a time. This was important considering the program was fully automatic, and it prevented unnecessary data loss. The objects' interface diagrams and their sequence diagrams can be found in Appendices \ref{sec:appB} and \ref{sec:appC}.
\end{enumerate}
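To make the pressure-based mode transitions described above more concrete, the following is a minimal sketch, in the C/C++ style of the on-board software, of how a $\pm\SI{20}{h\pascal}$ pressure-change threshold could gate the transitions between the modes. All identifiers (e.g. \texttt{Mode}, \texttt{updateMode}) are illustrative and do not come from the actual flight code.
\begin{verbatim}
// Illustrative sketch only: pressure-change hysteresis for mode switching.
// Identifiers are hypothetical and do not come from the flight software.
enum Mode { STANDBY, NORMAL_ASCENT, FLOAT_PHASE,
            NORMAL_DESCENT, SAFE, MANUAL };

static const float PRESSURE_THRESHOLD_HPA = 20.0f; // from analysis and testing

Mode updateMode(Mode current, float prevPressureHpa, float currPressureHpa)
{
    float delta = currPressureHpa - prevPressureHpa;

    // Ignore small fluctuations caused by a non-smooth ascent/descent.
    if (current == STANDBY && delta < -PRESSURE_THRESHOLD_HPA) {
        return NORMAL_ASCENT;   // pressure dropping fast: ascending
    }
    if ((current == NORMAL_ASCENT || current == FLOAT_PHASE)
            && delta > PRESSURE_THRESHOLD_HPA) {
        return NORMAL_DESCENT;  // pressure rising fast: descending
    }
    return current;             // otherwise keep the current mode
}
\end{verbatim}
The SAFE transition approximately \SI{1200}{\meter} before landing and the telecommanded MANUAL mode were handled separately in the flight software.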
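Similarly, the flag-based watchdog feeding used by the MONITOR object can be illustrated with the minimal sketch below. Only the general mechanism (each task sets its flag, and the watchdog is reset only once all flags are set) reflects the description above; the identifiers and the hardware call are hypothetical.
\begin{verbatim}
// Illustrative sketch only: feed the watchdog only when every task has
// reported in since the last reset. Identifiers are hypothetical.
#define NUM_TASKS 4  // Sampler, Reading, heaterTask, telecommand

static volatile bool taskAlive[NUM_TASKS] = { false };

// Hypothetical hardware interface that resets the watchdog countdown.
static void feedHardwareWatchdog(void) { /* write watchdog register */ }

// Called by each task at the end of one loop iteration.
void reportTaskAlive(int taskId) { taskAlive[taskId] = true; }

// Called periodically by the MONITOR object.
void monitorCheck(void)
{
    for (int i = 0; i < NUM_TASKS; ++i) {
        if (!taskAlive[i]) {
            return;  // a task is late: let the watchdog expire
        }
    }
    feedHardwareWatchdog();
    for (int i = 0; i < NUM_TASKS; ++i) {
        taskAlive[i] = false;  // clear the flags for the next round
    }
}
\end{verbatim}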
%\begin{figure}[H]
% \centering
% \includegraphics[width=1\textwidth]{4-experiment-design/img/hood-diagram-v1-0.png}
% \caption{Hierarchic Object-Oriented Design of the software}
% \label{fig:hood}
%\end{figure}
\subsubsection{Implementation}\label{sec:4.8.3}
The C/C++ programming language was used to program the platform. Instead of the Arduino IDE, the PlatformIO IDE was used, with other software used where necessary. The software functioned autonomously using a real-time operating system. FreeRTOS was chosen as the real-time operating system, which provided a way to split the functionality into several mutually exclusive tasks; a minimal sketch of how these tasks could be created is given directly after the list below. These tasks were: \begin{itemize}
\item The Sampler task (periodic)
\item The Reading task (periodic)
\item heaterTask task (periodic)
\item telecommand task (sporadic)
\end{itemize}
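The following is a minimal sketch, under the assumption of the usual Arduino-style entry points, of how such tasks could be created with FreeRTOS on the Due. The task bodies, stack sizes, and priorities are placeholders rather than the actual flight implementation.
\begin{verbatim}
// Illustrative sketch only: creating the periodic and sporadic tasks under
// FreeRTOS on the Arduino Due. Bodies, stack sizes and priorities are
// placeholders, not the flight values.
#include <FreeRTOS_ARM.h>

static void samplerTask(void*) {      // periodic: valve and pump control
  for (;;) { vTaskDelay(1000 / portTICK_PERIOD_MS); }
}
static void readingTask(void*) {      // periodic: 1 Hz sensor sampling
  for (;;) { vTaskDelay(1000 / portTICK_PERIOD_MS); }
}
static void heaterTask(void*) {       // periodic: heater on/off decision
  for (;;) { vTaskDelay(1000 / portTICK_PERIOD_MS); }
}
static void telecommandTask(void*) {  // sporadic: handle telecommands
  for (;;) { vTaskDelay(100 / portTICK_PERIOD_MS); }
}

void setup() {
  xTaskCreate(samplerTask,     "Sampler",     256, NULL, 2, NULL);
  xTaskCreate(readingTask,     "Reading",     256, NULL, 2, NULL);
  xTaskCreate(heaterTask,      "Heater",      256, NULL, 1, NULL);
  xTaskCreate(telecommandTask, "Telecommand", 256, NULL, 3, NULL);
  vTaskStartScheduler();  // hand control over to FreeRTOS
}

void loop() { /* not reached once the scheduler is running */ }
\end{verbatim}
In this sketch the telecommand task simply polls; an event-driven implementation could instead block on a queue fed by the communication link.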
The following libraries were used:
\begin{itemize}
\item FreeRTOS\_ARM.h (FreeRTOS port for ARM microprocessors such as the Due)
\item ArduinoSTL.h (allows standard C++ functionality)
\item RTCDue.h (keeps track of the time from the software start)
\item Necessary Arduino libraries.
\item DS1631.h (self-made library)
\item MS5607.h (self-made library)
\item Sensor libraries.
\end{itemize}
\raggedbottom
\pagebreak
\subsection{Ground Support Equipment}\label{sec:4.9}
The purpose of the ground station was to monitor the experiment in real time and to provide manual override capability in case the experiment failed to function autonomously. The manual override was able to control all the valves, the pump, and the heaters. It also provided a way to change the sampling schedule while in flight. \par
One personal computer was used to connect to the E-Link through the Ethernet port. A GUI was created to display the sensor data and the valve and pump states during the experiment. MATLAB GUIDE was used for the development. \par
The ground station was responsible for receiving and transmitting data over the provided Ethernet connection. Using GUIDE to create a GUI and its respective functions as a skeleton, the necessary functionality to receive, transmit, and display data was built accordingly. Functions were defined for each GUI element.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{4-experiment-design/img/GS-GUI-final.png}
\caption{GUI Design for Ground Station Version 2.}
\label{fig:guiDesign}
\end{figure}
Figure \ref{fig:guiDesign} shows the design of the ground station GUI. Telemetry data was shown in several tables based on the data type. The data was recorded and stored on the computer. The experiment status panel represented the real-time status of the experiment; a red indicator changed to green whenever the pump or a valve was opened. On the bottom, the telecommand control panel provided command generation for the experiment. To its right, the connection control panel had full control of the connections.
\raggedbottom
\raggedbottom
\pagebreak
\section{Experiment Verification and Testing}
\subsection{Verification Matrix}
The verification matrix was made following the \textit{ECSS-E-10-02A} standard \cite{ECSSSecretariat}. This section does not list obsolete requirements. For a complete list of requirements, including obsolete ones, refer to Appendix \ref{sec:appFullListOfRequirements}.
\textit{There are four established verification methods:}
\newline \textit{A - Verification by analysis or similarity}
\newline \textit{I - Verification by inspection}
\newline \textit{R - Verification by review-of-design}
%\newline \textit{S - Verification by similarity}
\newline \textit{T - Verification by testing}
\makeatletter
\renewcommand\@makefntext[1]{\leftskip=3em\hskip-1em\@makefnmark#1}
\makeatother
\begin{longtable}[]{|m{0.06\textwidth}| m{0.48\textwidth} |m{0.13\textwidth} |m{0.1\textwidth}|m{0.15\textwidth}|}
\hline
\textbf{ID} & \textbf{Written requirement} & \textbf{Verification} & \textbf{Test number} & \textbf{Status} \\ \hline
F.2 & The experiment \textit{shall} collect air samples by the CAC.& A, R & - & Pass \\ \hline % by similarity \cite{AircoreFlights}
F.3 & The experiment \textit{shall} collect air samples by the AAC. & A, T& 2, 16 & Pass\\ \hline %Analysis passed, see Section \ref{sec:aac-analysis}
F.9 & The experiment \textit{should} collect data on the air intake flow to the AAC. & A, T & 24, 31, 32 & Pass \footnote{Sensor libraries are available online and used by many users.\label{fn:sensor-libraries}}\\ \hline
F.10 & The experiment \textit{shall} collect data on the air pressure. & A, T& 24, 31, 32 & Pass \textsuperscript{\ref{fn:sensor-libraries}}\\ \hline
F.11 & The experiment \textit{shall} collect data on the temperature. & A, T& 24, 31, 32 & Pass \textsuperscript{\ref{fn:sensor-libraries}}\\ \hline
P.12 & The accuracy of the ambient pressure measurements \textit{shall} be -1.5/+1.5 hPa for 25$\degree{C}$. & R & - & Pass \\ \hline %from data sheet
P.13 & The accuracy of the temperature measurements \textit{shall} be +3.5/-3$\degree{C}$(max) for condition of -55$\degree{C}$ to 150$\degree{C}$. & R & - & Pass \\ \hline %from data sheet
P.23 & The sampling rate of the temperature sensor \textit{shall} be 1 Hz. & A,T & 10 & Pass \\ \hline %Section \ref{sec:4.8.2}
P.24 & The temperature of the Pump \textit{shall} be between 5$\degree{C}$ and 40$\degree{C}$. & A, T & 5 & Pass \\ \hline
P.25 & The minimum volume of air in the sampling bags for analysis \textit{shall} be 0.18 L at ground level. & A, T & 16, 17 & Pass \\ \hline
P.26 & The equivalent flow rate of the pump \textit{shall} be between 8 to 3 L/min from ground level up to 24 km altitude. & T & 18 & Pass \\ \hline
P.27 & The accuracy range of the sampling time, or the resolution, \textit{shall} be less than 52.94 s, or 423.53 m. & T & 16 & Pass \\ \hline
P.28 & The sampling rate of the pressure sensor \textit{shall} be 1 Hz. & A,T & 10 & Pass \\ \hline
P.29 & The sampling rate of the airflow sensor \textit{shall} be 1 Hz. & A,T & 10 & Pass \\ \hline
P.30 & The accuracy of the pressure measurements inside the tubing and sampling bags \textit{shall} be -0.005/+0.005 bar for 25$\degree{C}$. & R & - & Pass \\ \hline %from data sheet
D.1 & The experiment \textit{shall} operate in the temperature profile of the BEXUS vehicle flight and launch.\cite{BexusManual} & A, T & 5 & Pass \\ \hline
D.2 & The experiment \textit{shall} operate in the vibration profile of the BEXUS vehicle flight and launch.\cite{BexusManual} & A, T & 9 & Pass \\ \hline %Analysis passed, see Section \ref{sec:4.4.1}
D.3 & The experiment \textit{shall} not have sharp edges or loose connections to the gondola that can harm the launch vehicle, other experiments, and people. & R, I & - & Pass \\ \hline %\textsuperscript{\ref{fn:unnecessary-requirement}}
D.4 & The experiment's communication system \textit{shall} be compatible with the gondola's E-link system with the RJF21B connector over UDP for down-link and TCP for up-link. & A, T & 8 & Pass \\ \hline
D.5 & The experiment's power supply \textit{shall} have a 24v, 12v, 5v and 3.3v power output and be able to take 28.8v input through the Amphenol PT02E8-4P connector supplied from the gondola. & A & - & Pass \\ \hline %, see Sections \ref{sec:4.2.2} and \ref{sec:4.5.1}
D.7 & The total DC current draw \textit{should} be below 1.8 A. & A, T & 10, 19, 20, 29, 33 & Pass \\ \hline
D.8 & The total power consumption \textit{should} be below 374 Wh.& A & - & Pass \\ \hline %, Table \ref{tab:power-design-table}
D.16 & The experiment \textit{shall} be able to autonomously turn itself off just before landing. & R, T & 7, 10, 31, 32 & Pass \\ \hline
D.17 & The experiment box \textit{shall} be placed with at least one face exposed to the outside. & R, A & - & Pass
\\ \hline %Section \ref{sec:4.2.1}
D.18 & The experiment \textit{shall} operate in the pressure profile of the BEXUS flight.\cite{BexusManual} & A, T & 4, 18, 30 & Pass
\\ \hline
D.19 & The experiment \textit{shall} operate in the vertical and horizontal acceleration profile of the BEXUS flight.\cite{BexusManual} & A, T & 9, 25, 27 & Pass
\\ \hline
D.21 & The experiment \textit{shall} be attached to the gondola’s rails. & R & - & Pass
\\ \hline %Section \ref{sec:4.2.1}
D.22 & The telecommand data rate \textit{shall} not be over 10 kb/s. & A, R & - & Pass
\\ \hline%Analysis passed, see Section \ref{sec:4.8.2}.
D.23 & The air intake rate of the air pump \textit{shall} be equivalent to a minimum of 3 L/min at 24 km altitude. & A, T & 4, 18 & Pass \\ \hline
D.24 & The temperature of the Brain \textit{shall} be between -10$\degree{C}$ and 25$\degree{C}$. & A, T & 5 & Pass \\ \hline
D.26 & The AAC air sampling \textit{shall} filter out all water molecules before filling the sampling bags. & A, T & 17 & Pass \\
\hline
D.27 & The total weight of the experiment \textit{shall} be less than 28 kg.
& R, T & 3 & Pass \\\hline %Review of design passed, explained in Section \ref{sec:3.2.2}
D.28 & The AAC box \textit{shall} be able to fit at least 6 air sampling bags. & R & - & Pass \\\hline %Review of design passed, explained in Section \ref{sec:4.4.5}
D.29 & The CAC box \textit{shall} take less than 3 minutes to be removed from the gondola without removing the whole experiment.
& R, T & 12 & Pass\\\hline
D.30 & The AAC \textit{shall} be re-usable for future balloon flights. & R, T & 7, 16 & Pass \\
\hline %Review of design passed, explained in Section \ref{Mechanical_Design}
D.31 & The altitude from which a sampling bag will start sampling \textit{shall} be programmable. & A,T& 10, 14 & Pass\\ \hline
D.32 & The altitude from which a sampling bag will stop sampling \textit{shall} be programmable.& A,T & 10 & Pass\\ \hline
O.13 & The experiment \textit{should} function automatically. & R, T & 7, 8, 10 & Pass \\ \hline
O.14 & The experiment's air sampling mechanisms \textit{shall} have a manual override. & R, T & 8, 10 & Pass \\ \hline
C.1 & Constraints specified in the BEXUS User Manual. & I & - & Pass \\ \hline
\caption{Verification Matrix.}
\label{tab:var-mat}
\end{longtable}
\raggedbottom
\pagebreak
\subsection{Test Plan}
\subsubsection{Test Priority} \label{sec:5.2.1-testpriority}
As shown in Table \ref{tab:classification}, tests were split into three different levels of priority: low, medium, and high. The priority given to each test depended on several factors, including complexity, the amount of external help required, and the time taken.
\begin{table}[H]
\centering
\begin{tabular}{|p{0.1\linewidth}|p{0.1\linewidth}|p{0.7\linewidth}|}
\hline
\textbf{Priority Level} & \textbf{Test Number} & \textbf{Classification} \\ \hline
High & 4, 5, 7, 10, 17 & \begin{itemize}
\item Requires the use of external facilities which must be booked in advance and could have limited availability.
\item If a re-test is required the wait time could be on the order of weeks or months.
\item Testing could potentially break a non-spare component with a long re-order time.
\end{itemize}\\ \hline
Medium & 2, 8, 9, 12, 16, 18, 24, 27, 29, 30 & \begin{itemize}
\item Requires internal cooperation or multiple parts of the experiment completed to a minimum standard.
\item If a re-test is required the wait time could be on the order of days.
\item Testing could potentially break a critical component that would require re-ordering or replacing.
\end{itemize} \\ \hline
Low & 3, 13, 14, 15, 19, 20, 25, 28, 31, 32 & \begin{itemize}
\item Can be performed by a single department.
\item If a re-test is required the wait time could be on the order of hours.
\item Have low or no risk of breaking components.
\end{itemize} \\ \hline
\end{tabular}
\caption{Table Showing the Classification of the Tests.}
\label{tab:classification}
\end{table}
\raggedbottom
\subsubsection{Planned Tests}
The planned tests were as follows:
\begin{enumerate}
\item \st{Valves test}.\footnote{Was combined with Tests 4, 5 and 24.\label{fn:test-combined}}
\item Data collection test in Table \ref{tab:data-coll-test}.
\item Weight verification in Table \ref{tab:weight-test}.
\item Low pressure test in Table \ref{tab:vacuum-test}.
\item Thermal test in Table \ref{tab:thermal-test}.
\item \st{Experiment assembly and disassembly test}.\footnote{Unnecessary test.\label{fn:test-removed}}
\item Bench test in Table \ref{tab:bench-test}.
\item E-Link test in Table \ref{tab:e-link-test}.
\item Vibration test in Table \ref{tab:vibration-test}.
\item Software operation test in Table \ref{tab:software-op-test}.
\item \st{Power systems test.}\footnote{Was combined with Test 10.\label{fn:test-combined10}}
\item Experiment removal test in Table \ref{tab:removal-test}.
\item \st{Ground station - OBC connection test} \textsuperscript{\ref{fn:test-combined10}}
\item Ground station - OBC parameters reprogram test in Table \ref{tab:software-reprogram-test}
\item \st{Ground station invalid commands test}\textsuperscript{\ref{fn:test-removed}}
\item Sampling test in Table \ref{tab:sampling-system-test}.
\item Samples' condensation test in Table \ref{tab:samples-condensation-test}.
\item Pump low pressure test in Table \ref{tab:pump-low-pressure-test}.
\item PCB operations test in Table \ref{tab:pcb-test}.
\item Switching circuit testing and verification in Table \ref{tab:switching-test}.
\item \st{Arduino sensor operation test.}\footnote{Was combined with Test 24.\label{fn:test-combined24}}
\item \st{Arduino, pump and valves operation test}.\textsuperscript{\ref{fn:test-combined24}}
\item \st{Pump thermal test.}\footnote{Was combined with Test 5.\label{fn:test-combined5}}
\item Software and electronics integration testing in Table \ref{tab:soft-elec-integ-test}.
\item Mechanical structural testing in Table \ref{tab:structural-test}.
\item \st{Insulating foam low pressure test.}\footnote{Was combined with Test 4.\label{fn:test-combined4}}
\item Shock test in Table \ref{tab:shock-test}.
\item Pump operation test in Table \ref{tab:pump-operation-test}.
\item Pump current in low pressure test in Table \ref{tab:pump-current-pressure-test}.
\item Sampling bag bursting test in Table \ref{tab:bag-burst}.
\item On-board software unit test in Table \ref{tab:onboard-software-unit-test}.
\item Software failure test in Table \ref{tab:software-failure}.
\item Electrical component test in Table \ref{tab:scomponent-test}
% \item test in Table \ref{tab:}.
\end{enumerate}
\subsubsection{Test Descriptions}
If a non-destructive test was not proceeding as expected \textit{and} it was thought there was a risk to components, it would have been aborted. If a test was aborted for this reason, an investigation had to be completed to discover why it did not proceed as expected, and the issue resolved, before a re-test could occur.
Tests took place on the flight model due to budget and time restrictions, which prevented a test model from being created. However, if a component was broken during testing, spares were available. Tests 4 and 5 did not use the entire model due to size restrictions in the chambers; instead, only critical components were tested.
%\input{5-experiment-verification-and-testing/tables/test-tables/01-valves-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 2 \\ \hline
\textbf{Test Type} & Software \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Arduino, sensors, valves and pump \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Run software for full flight duration and ensure data collection proceeds as expected. Particularly watch for error handling and stack overflow. \\ & Test duration: 5 hours. Based on previous BEXUS flight durations.\\ \hline
\textbf{Test Campaign Duration} & 2 days (1 day build-up, 1 day testing) \\ \hline
\textbf{Test Campaign Date} & August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 2: Data Collection Test Description.}
\label{tab:data-coll-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 3 \\ \hline
\textbf{Test Type} & Weight Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & The entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Use scales to measure the weight of the entire experiment. \\ & Test duration: 1 minute\\ \hline
\textbf{Test Campaign Duration} & 1 day \\ \hline
\textbf{Test Campaign Date} & October \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 3: Weight Verification Description.}
\label{tab:weight-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 4 \\ \hline
\textbf{Test Type} & Vacuum \\ \hline
\textbf{Test Facility} & IRF, Kiruna \\ \hline
\textbf{Tested Item} & Sampling System \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Take the sampling system down to 5 hPa and verify all systems work. If the size of the vacuum chamber is restrictive, testing just the pump with the airflow and pressure sensors, one valve, and one bag will suffice. Ensure the valves and pump still perform as expected by checking the flow rate with the airflow sensor and visually observing the bag inflating. In addition, the insulating foam will be checked to ensure it does not deform when exposed to low pressures.\\ & Test duration: 5 hours \\ \hline
\textbf{Test Campaign Duration} & 3 weeks \\ \hline
\textbf{Test Campaign Date} & 18th July, 20th July, August, September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
%\footnotetext{Testing date dependent on valve arrival. A problem arose with the order which we are in contact with the company about.\label{fn:testingthevalve}}
\caption{Test 4: Low Pressure Test Description.}
\label{tab:vacuum-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 5 \\ \hline
\textbf{Test Type} & Thermal \\ \hline
\textbf{Test Facility} & FMI, Finland and Esrange, Kiruna \\ \hline
\textbf{Tested Item} & The entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Place experiment in thermal chamber and take the temperature down to at least $-40\degree{C}$ but preferably $-80\degree{C}$ and verify all systems still work. Make sure that the Brain stays between $-10\degree{C}$ and 25$\degree{C}$.\\ & Test duration: 5 hours \\ \hline
\textbf{Test Campaign Duration} & 1 week \\ \hline
\textbf{Test Campaign Date} & 3rd-7th September, 29th September, 5th October \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 5: Thermal Test Description.}
\label{tab:thermal-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/06-assembly-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 7 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & The entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Assemble entire experiment and ensure all testing points and/or monitors are in place. Run through simulated countdown. Run through simulated launch and flight, include simulated e-link drop outs. Potentially run experiment for longer to simulate wait time before recovery. \\ & Test duration: 10 hours \\ \hline
\textbf{Test Campaign Duration} & 2 days (1 day build-up, 1 day testing) \\ \hline
\textbf{Test Campaign Date} & September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 7: Bench Test Description.}
\label{tab:bench-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 8 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & Esrange Space Centre TBC \\ \hline
\textbf{Tested Item} & The entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Assemble experiment and set up any desired monitoring sensors. Run through simulated countdown. Run through simulated launch and flight, include simulated E-link drop outs. Potentially run experiment for longer to simulate wait time before recovery.\\ & Test duration: 5 hours \\ \hline
\textbf{Test Campaign Duration} & 2 days \\ \hline
\textbf{Test Campaign Date} & October (during launch campaign) \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 8: E-link Test Description.}
\label{tab:e-link-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 9 \\ \hline
\textbf{Test Type} & Vibration \\ \hline
\textbf{Test Facility} & IRF/LTU, Kiruna \\ \hline
\textbf{Tested Item} & Entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Mount the experiment on the back of a car/trailer in the same way it will be mounted on the gondola and drive over a bumpy or rough terrain. Afterwards, check the experiment for functionality and structural integrity.\\ & Test duration: 2 hours \\ \hline
\textbf{Test Campaign Duration} & 1 week \\ \hline
\textbf{Test Campaign Date} & 3rd - 7th September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 9: Vibration Test Description.}
\label{tab:vibration-test}
\end{table}
\raggedbottom
%Use a shake table in the university facilities to test both random and sinusoidal vibrations. The boxes will be tested individually and attached together. In order to inspect the response of the inside elements, the test will also be done without the walls.
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 10 \\ \hline
\textbf{Test Type} & Software and Electronics \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Electronics and sampling systems \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: First ensure that communication between the ground station and the OBC works. Ensure the software and electronics respond well to all possible commands for all phases of the flight. Check the electronic currents and voltages at the different stages. Ensure the experiment can be shut down manually. Perform a simulated flight using previous BEXUS flight data.\\ & Test duration: 10 hours\\ \hline
\textbf{Test Campaign Duration} & 2 days (1 day build up, 1 day test) \\ \hline
\textbf{Test Campaign Date} & August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 10: Software and Electronics Operation Test Description.}
\label{tab:software-op-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/11-electronics-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 12 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Entire experiment \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Mount the experiment as it would be mounted in the gondola. Using only the instructions that will be given to the recovery team, a volunteer from outside the team will remove the CAC box. A timer will be run to check how long this takes; the time should not exceed three minutes. The procedure should be simple and fast and the instructions clear. \\
& Test duration: 5 minutes \\ \hline
\textbf{Test Campaign Duration} & 1 hour\\ \hline
\textbf{Test Campaign Date} & September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 12: Experiment Removal Test Description.}
\label{tab:removal-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/13-gs-obc-connection-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 14 \\ \hline
\textbf{Test Type} & Software \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Ardunio, ground station \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Ensure ground station can reprogram some parameters on OBC. Perform parameter changes.\\ & Test duration: 15 minutes\\ \hline
\textbf{Test Campaign Duration} & 1 day \\ \hline
\textbf{Test Campaign Date} & 25th August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 14: Ground Station-OBC Parameters Reprogram Test Description.}
\label{tab:software-reprogram-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/15-gs-obc-invalidcommand-test.tex}
\renewcommand\thempfootnote{\arabic{mpfootnote}}
\begin{table}[H]
\centering
\begin{minipage}{\textwidth}
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 16 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Sampling System \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Once the sampling system has been connected, including the bags, lay or hang the system out on the bench. The valves will be opened and closed in series and the pump switched on and off using the Arduino to control them. The Arduino should be supplied simulated pressure sensor readings so that the system will run the sampling points as it would during flight. The bags will be monitored to check that they are inflating as expected. Airflow and static pressure readings that give the pressure from inside the bags will be used to verify that sampling is occurring properly. \\ & Test duration: 3 hours. \\ \hline
\textbf{Test Campaign Duration} & 2 days (1 day build-up, 1 day testing)\\ \hline
\textbf{Test Campaign Date} & August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 16: Sampling System Verification.}
\label{tab:sampling-system-test}
\end{minipage}
\end{table}
\raggedbottom
%
%\renewcommand\thempfootnote{\arabic{mpfootnote}}
\begin{table}[H]
\centering
\begin{minipage}{\textwidth}
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 17 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & FMI \\ \hline
\textbf{Tested Item} & Sampling bags \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: All valves, bags and tubes had to be connected. Then the entire system was flushed the same way it would be for the flight. After flushing, the sampling bags were filled with a gas of known concentration. The bags were then left outside for 6, 14, 24 and 48 hours. In total 8 sampling bags were used, with two bags for each time duration. After each time duration two bags were removed and analyzed using the Picarro analyzer. The second time the test was performed, 6 sampling bags were tested and left outside for 15, 24, and 48 hours. The concentrations of gases found inside the bags were compared to the initial concentration of the air placed in the bags. If the concentration changes, then the sampling bags must be retrieved and analyzed before that amount of time has elapsed for the samples to be preserved. \\ & Test duration: 3 days. \\ \hline
\textbf{Test Campaign Duration} & 5 days \\ \hline
\textbf{Test Campaign Date} & 7th-9th May AND 3rd-7th September \\ \hline
\textbf{Test Completed} & YES\\ \hline
\end{tabular}
\caption{Test 17: Sampling Bags' Holding Times.}
\label{tab:samples-condensation-test}
\end{minipage}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 18 \\ \hline
\textbf{Test Type} & Vacuum \\ \hline
\textbf{Test Facility} & IRF, Kiruna \\ \hline
\textbf{Tested Item} & Pump \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Pump shall be placed in a low pressure testing chamber and a bag with a known volume attached to its output. The pump shall then be run at several different pressures that will be encountered during flight. The time taken to fill the bag will be recorded and the flow rate extrapolated.\\ & Test duration: 1 day \\ \hline
\textbf{Test Campaign Duration} & 2 days (1 day build-up, 1 day testing) \\ \hline
\textbf{Test Campaign Date} & 1st - 2nd May \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 18: Pump Low Pressure Test.}
\label{tab:pump-low-pressure-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 19 \\ \hline
\textbf{Test Type} & Electronics \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Electronics PCB \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: As the PCB is soldered, check for shorts using a multimeter. Check that the circuit operates as intended by checking the voltages and currents at test points using a multimeter. \\ & Test duration: 1 hour \\ \hline
\textbf{Test Campaign Duration} & Recurrent \\ \hline
\textbf{Test Campaign Date} & July \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 19: PCB Board Operations Check.}
\label{tab:pcb-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 20 \\ \hline
\textbf{Test Type} & Electronics \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Valves, Arduino, Switching Circuit \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Beginning on a breadboard, the switching circuit will be set up connecting one end to a 3.3 V supply and the other to a 24 V supply. It will be checked that turning the 3.3 V supply on and off also turns the valve/heater/pump on and off. The current draws during switching will also be monitored to check that they are in line with what the DC-DC converters/gondola power supply can provide. Once the circuit is working in this configuration, the 3.3 V supply will be replaced by the Arduino and the 24 V supply by the DC-DC converter, and the test repeated. When the circuit is working on the breadboard it can then be soldered onto the PCB. As each switch is soldered onto the PCB it should be checked. Finally, once all switches are soldered onto the PCB, a check should be made on the whole switching system that it turns all components on and off on command. \\ & Test duration: Recurrent \\ \hline
\textbf{Test Campaign Duration} & 2 months \\ \hline
\textbf{Test Campaign Date} & July and August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 20: Switching Circuit Testing and Verification.}
\label{tab:switching-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/21-arduino-sensor-op-test.tex}
%\input{5-experiment-verification-and-testing/tables/test-tables/22-arduino-pump-valves-test.tex}
%\input{5-experiment-verification-and-testing/tables/test-tables/23-pump-thermal-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 24 \\ \hline
\textbf{Test Type} & Verification and integration \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & All electronics, ground station and Arduino \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Once the electronics are at least in a breadboard state, they will be tested with the software. This will begin with sensor checks. The Arduino will be connected to the sensors and their performance checked. Once the switching circuits have been completed for the valves, pump, and heaters, the software which controls how these components turn on and off will be tested. If any of the responses from the electronics are not what was expected from the software input, then the electronic connections will be checked, the software refined, and the test repeated. These tests will begin on breadboard electronics and continue as the electronics are fixed into their final positions. In addition, as the software will continue to be developed until 15th September, these tests will be repeated to ensure that performance continues to be as expected. \\ & Test duration: Recurrent \\ \hline
\textbf{Test Campaign Duration} & Until 15th September \\ \hline
\textbf{Test Campaign Date} & Recurrent \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 24: Software and Electronics Integration Testing.}
\label{tab:soft-elec-integ-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 25 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Mechanical box structure \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: The mechanical structure will be tested under different loads to ensure it can withstand the expected stresses and strains during flight regarding different g-loads. This test will consist of a non-destructive static stress test with progressive loads located at the top of the CAC and AAC boxes. \\ & Test duration: 2 days \\ \hline
\textbf{Test Campaign Duration} & 1 week \\ \hline
\textbf{Test Campaign Date} & August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 25: Structural Test.}
\label{tab:structural-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/26-insulation-low-pressure-test.tex}
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 27 \\ \hline
\textbf{Test Type} & Mechanical \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Mechanical interfaces \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: The mechanical interfaces will be tested under different loads to ensure they can withstand the expected stresses and strains during flight. This is done by dropping the whole box from a certain height with a mattress or soft surface underneath it. Maximum height 1 m. \\ & Test duration: 2 hours \\ \hline
\textbf{Test Campaign Duration} & 1 day \\ \hline
\textbf{Test Campaign Date} & 20th September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 27: Shock Test.}
\label{tab:shock-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 28 \\ \hline
\textbf{Test Type} & Electrical \\ \hline
\textbf{Test Facility} & LTU, Kiruna\\ \hline
\textbf{Tested Item} & Pump \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: The pump will be tested to check its current draw under normal, turn on, entrance covered and exit covered conditions. \\ & Test duration: 1 hour \\ \hline
\textbf{Test Campaign Duration} & 1 day \\ \hline
\textbf{Test Campaign Date} & 24th April \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 28: Pump Operation Test.}
\label{tab:pump-operation-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 29 \\ \hline
\textbf{Test Type} & Electrical \\ \hline
\textbf{Test Facility} & IRF, Kiruna \\ \hline
\textbf{Tested Item} & Pump \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: The pump will be tested to check its current draw as the outside air pressure is changed. \\ & Test duration: 2 hours \\ \hline
\textbf{Test Campaign Duration} & 1 day \\ \hline
\textbf{Test Campaign Date} & 4th May \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 29: Pump Current in Low Pressure Test.}
\label{tab:pump-current-pressure-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 30 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & IRF, Kiruna \\ \hline
\textbf{Tested Item} & Sampling Bags \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Continuously pump air into the sampling bags until the sampling bags burst. If the tested sampling bag does not burst after 3 minutes of continuous pumping, remove the sampling bag from the pressure chamber and leave it at rest to check if it will burst within 48 hours. If bursting occurs in the chamber while the sampling bag is being pumped, then observe and characterize its impact to assess whether a similar burst risks damaging the sampling bag's surroundings in the experimental setup. If the bursting occurs during the 48-hour rest period, then observe and characterize the damage/rupture on the sampling bag to assess whether a similar burst risks damaging the sampling bag's surroundings in the experimental setup. \\ & Test duration: 3 minutes to 48 hours. \\ \hline
\textbf{Test Campaign Duration} & 3 days \\ \hline
\textbf{Test Campaign Date} & 1st, 2nd and 4th May \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 30: Sampling Bag Bursting Test Description.}
\label{tab:bag-burst}
\end{table}
\raggedbottom
% \begin{table}[H]
% \centering
% \begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
% \hline
% \textbf{Test Number} & 31 \\ \hline
% \textbf{Test Type} & Verification \\ \hline
% \textbf{Test Facility} & IRF / Kiruna Space Campus \\ \hline
% \textbf{Tested Item} & Sampling Bags \\ \hline
% \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\textsuperscript{\ref{fn:testing}}\end{tabular}}} & Continuously pump air into the sampling bags at lowest and highest predicted air pressures until the sampling bags burst. If the tested sampling bag does not burst after 3 minutes of continuous pumping, remove the sampling bag from the pressure chamber and leave at rest to check if it will burst within 48 hours. If bursting occurs in the chamber while the sampling bag is being pump then observe and characterize its impact to assess whether a similar bursting risks damaging the sampling bag's surrounding in the experimental setup. If the bursting occurs during the 48 hours rest period then observe and characterize the damage/rupture on the sampling bag to assess whether a similar bursting risks damaging the sampling bag's surrounding in the experimental setup. \\ & Test duration: 3 minutes to 48 hours. \\ \hline
% \textbf{Test Campaign Duration} & 3 days \\ \hline
% \textbf{Test Campaign Date} & May \\ \hline
% \textbf{Test Completed} & YES \\ \hline
% \end{tabular}
% \caption{Test 31: Bag bursting test description}
% \label{tab:vacuum-test}
% \end{table}
% \raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 31 \\ \hline
\textbf{Test Type} & Verification \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & On-board software \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Unit test cases are built to test the functionality of the software.\\ & Test duration: Not Applicable. \\ \hline
\textbf{Test Campaign Duration} & Until software freeze date. \\ \hline
\textbf{Test Campaign Date} & May-September \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 31: On-board Software Unit Test Description.}
\label{tab:onboard-software-unit-test}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 32 \\ \hline
\textbf{Test Type} & Software \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & On-board software and Arduino \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Test failure possibilities in the software. Micro-controller re-sets during auto-mode. Communication loss at inconvenient moments such as when changing mode, when sending a command, when receiving data whilst sampling. Simulate loss of SD card during flight. \\ & Test duration: 1 hour\\ \hline
\textbf{Test Campaign Duration} & Recurrent\\ \hline
\textbf{Test Campaign Date} & July and August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 32: Software Failure Test}
\label{tab:software-failure}
\end{table}
\raggedbottom
\begin{table}[H]
\centering
\begin{tabular}{|m{0.3\textwidth}| m{0.7\textwidth} |}
\hline
\textbf{Test Number} & 33 \\ \hline
\textbf{Test Type} & Electrical \\ \hline
\textbf{Test Facility} & LTU, Kiruna \\ \hline
\textbf{Tested Item} & Electrical Components \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Test Level/ Procedure \\ and Duration\end{tabular}}} & Test procedure: Connect components on a breadboard as part of the schematic and test them part by part. Check the required resistances. \\ & Test duration: 3 hours \\ \hline
\textbf{Test Campaign Duration} & 2 weeks \\ \hline
\textbf{Test Campaign Date} & 21st-22nd July and 4th-5th August \\ \hline
\textbf{Test Completed} & YES \\ \hline
\end{tabular}
\caption{Test 33: Electrical Component Testing.}
\label{tab:scomponent-test}
\end{table}
\raggedbottom
%\input{5-experiment-verification-and-testing/tables/test-tables/XX-aac-reusability.tex}
\pagebreak
\subsection{Test Results} \label{sec:5.3}
The results shown here provide the key information obtained from testing. A full report for each test can be found in Appendix \ref{sec:apptestres}.
\subsubsection{Test 28: Pump Operations}
\label{sec:test28result}
It was found that when the power supply was switched on, the current went up to 600 mA for less than one second. It then settled to 250 mA. When the air intake was covered, simulating intake from a lower pressure, the current dropped to 200 mA. When the air output was covered, simulating pushing air into a higher pressure, the current rose to 400 mA.
Therefore, the power for each of these conditions was 14.4 W at turn-on, 6 W in normal use, 4.8 W when drawing air from low pressure, and 9.6 W when pushing air to high pressure.
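These figures follow directly from the measured currents, assuming the pump was driven from the experiment's \SI{24}{\volt} line (the supply used for the pump in the switching circuit); for example, at turn-on:
\[ P = U I = \SI{24}{\volt} \times \SI{0.6}{\ampere} = \SI{14.4}{\watt}. \]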
\subsubsection{Test 18: Pump Low Pressure}\label{subsection:pumplowpressuretest}
The pump was tested at low pressure using a small vacuum chamber down to \SI{10}{\hecto\pascal}. Flow rates were recorded from \SI{75}{\hecto\pascal}, the expected highest sampling altitude.
The results can also be seen in Table \ref{tab:flowratetest} and Figure \ref{fig:pump-performance-lowpressue}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
& \multicolumn{1}{l|}{\textbf{Sampling Altitudes}} & \multicolumn{1}{l|}{\textbf{Ambient Pressure}} & \multicolumn{1}{l|}{\textbf{Actual Flow rate}} \\ \hline
\multirow{2}{*}{\textbf{Ascent Phase}} & 18 km & 75.0 hPa & $\sim$3.78 L/min \\ \cline{2-4}
& 21 km & 46.8 hPa & $\sim$3.36 L/min \\ \hline
\multirow{4}{*}{\textbf{Descent Phase}} & 17.5 km & 81.2 hPa & $\sim$3.77 L/min \\ \cline{2-4}
& 16 km & 102.9 hPa & $\sim$3.99 L/min \\ \cline{2-4}
& 14 km & 141.0 hPa & $\sim$4.18 L/min \\ \cline{2-4}
& 12 km & 193.3 hPa & $\sim$4.71 L/min \\ \hline
\end{tabular}
\caption{Sampling Altitudes as well as the Corresponding Ambient Pressures According to the 1976 US Standard Atmosphere and the Normal Flow Rates at Each Altitude.}
\label{tab:flowratetest}
\end{table}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=11cm]{5-experiment-verification-and-testing/img/pump-flow-rate-new.png}
\end{align*}
\caption {Obtained Pump Performance at Low Pressure.} \label{fig:pump-performance-lowpressue}
\end{figure}
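As described for Test 18 in Table \ref{tab:pump-low-pressure-test}, each of these flow rates was extrapolated from the time needed to fill a bag of known volume at the corresponding chamber pressure, i.e.\ (with $V_{\mathrm{bag}}$ the bag volume and $t_{\mathrm{fill}}$ the measured filling time)
\[ Q \approx \frac{V_{\mathrm{bag}}}{t_{\mathrm{fill}}}. \]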
\subsubsection{Test 30: Sampling Bag Bursting}
\label{sec:test30result}
A sampling bag was placed in a small vacuum chamber connected to the pump and the pump was run for 3 minutes with a full bag to see how the bag reacted.
It was found that there are two potential failure modes. The first is a slow leakage caused by damage to the bag seal and the second is a rapid failure of the bag seal leading to total loss of the sample.
It was concluded that, as long as the bags are well secured to the valves at the bottom and through the metal ring at the top, bag bursting during flight would not cause damage to any other components on board. Even during the more energetic burst that occurs from continuous pumping, the bag remained fixed to the valve connection and experienced no fragmentation. The consequences of a single bag burst would be limited to loss of data and a disturbance at audio frequencies.
\subsubsection{Test 29: Pump Current under Low Pressure}
\label{sec:test29result}
In general, it was found that decreasing the pressure, or equivalently increasing the altitude, led to a decrease in the pump's current draw. The full results can be seen in Table \ref{tab:pumpcurrentpressure}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Altitude (km)} & \textbf{Pressure (hPa)} & \textbf{Into Bag Current (mA)} & \textbf{Into Seal Current (mA)} \\ \hline
20 & 57 & 140 & 138 \\ \hline
18 & 68 & 150 & 141 \\ \hline
16 & 100 & 161 & 146 \\ \hline
12 & 190 & 185 & 175 \\ \hline
9 & 300 & - & 200 \\ \hline
6 & 500 & - & 242 \\ \hline
0 & 1013 & - & 218 \\ \hline
\end{tabular}
\caption{Table Showing How the Current Draw of the Pump Changed With Outside Air Pressure for Two Different Conditions. The First Pumping Into a Sampling Bag and the Second Pumping Into a Sealed Tube.}
\label{tab:pumpcurrentpressure}
\end{table}
From the table it can be seen that the current draw is higher when pumping into a bag than in the sealed case. As the experiment will sample between 11 km and 24 km, it can be concluded that the highest current draw will occur during the 11 km altitude sample and can be expected to be around 200 mA.
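The $\sim$200 mA figure can be checked with a simple linear fit to the ``into bag'' measurements in Table \ref{tab:pumpcurrentpressure}, evaluated at the 11 km ambient pressure ($\approx$227 hPa in the 1976 US Standard Atmosphere). The snippet below is only an illustrative extrapolation, not part of the flight software.
\begin{verbatim}
import numpy as np

# "Into bag" current measurements from Test 29 (pressure in hPa, current in mA).
pressure_hpa = np.array([57.0, 68.0, 100.0, 190.0])
current_ma = np.array([140.0, 150.0, 161.0, 185.0])

# Linear fit of current vs. ambient pressure, evaluated at ~227 hPa (11 km).
slope, intercept = np.polyfit(pressure_hpa, current_ma, 1)
print(f"Estimated current at 11 km: {slope * 227 + intercept:.0f} mA")  # ~198 mA
\end{verbatim}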
\subsubsection{Test 17: Sampling Bags' Holding Times and Sample Concentrations}
\label{sec:test17result}
The main objective of this test was to flush eight 1 L sampling bags with nitrogen, the same way it will be done for the flight. After the flushing was done, the bags were filled with a dry gas and placed outside for 6, 14, 24 and 48 hours. Two sampling bags were then analyzed after each time duration to see whether the concentration of gases inside had changed.
The test was done twice, as the first run did not give conclusive results.
The general outcome of the first run was that the team realized that the flushing of the sampling bags is a very delicate process. This test was also useful in deciding that the flushing of the sampling bags should be done with dry gas instead of nitrogen, in order to minimize the effects of the nitrogen diluting the samples.
The test was therefore repeated, using the set-up described in Section 4, with some differences. This time 3 L bags were flushed with dry gas, two bags for each holding time were then filled with 0.5 L and 1 L of dry gas respectively, and they were left outside for 15, 24 and 48 hours. They were then analyzed to check whether the sample concentrations were the same as, or close enough to, the reference values of the dry gas used to fill them.
The obtained results are shown in Figure \ref{fig:test17-resultsSEP}. The blue points represent the sampling bags with the 0.5 L sample, while the red points show the sampling bags with the 1 L sample. Sampling bags No1 and No4 were analyzed after 15 hours, the pair No2 and No5 after 24 hours, and the last pair, No3 and No6, after 48 hours.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/test17resultsSEP.jpg}
\caption {Obtained Variation in Concentration for (a) $CO_2$ in ppm, (b) $CO$ in ppb, (c) $CH_4$ in ppb and (d) $H_2O$ in $\%$.} \label{fig:test17-resultsSEP}
\end{figure}
The results were very good in general, with the $CO_2$ concentration differences not higher than 2 ppm. The bags with the 0.5 L sample gave bigger $CO_2$ concentration differences and higher humidity for all the tested times. For the bags analyzed after 48 hours, the humidity was two times higher for the 0.5 L sample compared to the 1 L sample. If water goes through the walls of the bags at the same rate for both bags, then it is expected that sampling bags with larger amounts of sampled air show lower humidity concentrations. Therefore, for better results, the volume of air left in the sampling bags at sea level pressure should be as large as possible.
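This interpretation can be checked with a simple scaling argument: assuming the same amount of water $n_w$ permeates each bag over a given holding time, the measured water mole fraction scales inversely with the amount of sampled air, so
\begin{align*}
\frac{x_{H_2O}(0.5\,\mathrm{L})}{x_{H_2O}(1\,\mathrm{L})} \approx \frac{n_w / n(0.5\,\mathrm{L})}{n_w / n(1\,\mathrm{L})} = \frac{1\,\mathrm{L}}{0.5\,\mathrm{L}} = 2,
\end{align*}
which is consistent with the factor of two observed after 48 hours.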
\textbf{Efficiency of the flushing procedure.}
While testing the holding times of the sampling bags, some other relevant tests were performed. A sampling bag was flushed the way it will be flushed before flight, and then analyzed immediately. The Picarro readings were very close to the reference values, which means that the flushing procedure is sufficient. From the results obtained when flushing in May, it was also decided to use dry gas for flushing instead of nitrogen, which has now been confirmed to work better. \\
The humidity levels inside the AAC system were also tested and found to be acceptable.
\textbf{Flushing over night.}
In addition to the tests performed for the holding times, it was also tested whether flushing a sampling bag the night before and sampling it the next day affected the samples. A sampling bag was flushed following the same procedure as before and left sealed over night. The next day it was sampled and left outside for almost four hours. It was then analyzed and the results were compared with those of a sampling bag that was flushed and then immediately sampled. The results were good enough, with $CO_2$ concentrations being higher in the sampling bag which was flushed the night before. This is a reasonable result, since the $CO_2$ concentration inside a room is higher than outside. Therefore, the team has decided that the flushing of the bags shall be done as late as possible before the flight.
% Flushing with nitrogen may alter the results.
\subsubsection{Test 4: Low Pressure}\label{lowpressure}
\textbf{Styrofoam}
The same vacuum chamber was used as in Tests 18 and 29. The Styrofoam was measured on each side before it was placed in the chamber. It was then taken down to 5 hPa and held there for 75 minutes. It was then removed and the sides were measured again. It was found that there was no significant change in dimensions. The results can be seen in Table \ref{tab:styrofoam-test-result-2}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
Side & Before (cm) & After (cm) \\ \hline
A & 9.610 & 9.580 \\ \hline
B & 9.555 & 9.550 \\ \hline
C & 9.560 & 9.565 \\ \hline
D & 9.615 & 9.610 \\ \hline
E & 9.615 & 9.615 \\ \hline
F & 9.555 & 9.550 \\ \hline
G & 9.605 & 9.605 \\ \hline
H & 5.020 & 5.020 \\ \hline
I & 5.025 & 5.025 \\ \hline
J & 5.015 & 5.015 \\ \hline
K & 5.020 & 5.025 \\ \hline
\end{tabular}
\caption{Styrofoam Size Before and After Vacuum.}
\label{tab:styrofoam-test-result-2}
\end{table}
\textbf{Airflow}
After the first airflow-in-vacuum test failed due to datalogging errors, the airflow test was repeated. In this repeated test all of the Brain was placed into the vacuum chamber and one bag attached. It was not possible to attach more than one bag due to space restrictions.
The flow rate seemed to be too low for the rate at which the bag was inflating in the chamber. It was concluded that the displayed airflow rate is the equivalent airflow at sea level.
\textbf{Software}
With the same set-up as the low-pressure airflow test, the software was tested to verify that it operated as intended and that the conditions for stopping sampling were working.
The software was found to be operating as intended and the stop conditions were working.
\textbf{Temperatures}
As it was not possible to complete a thermal vacuum test, temperatures were also monitored inside the vacuum chamber in addition to the thermal testing.
The temperatures of the CAC flushing valve, pressure sensor, PCB, pump and manifold were monitored during continuous use. After one hour and 48 minutes, during the same test as the valve temperature measurement, the CAC flushing valve was found to reach 68$\degree{C}$ and the pressure sensor 39$\degree{C}$. After one hour and 24 minutes of the flow rate monitoring test, during which the sensors, pump, and one manifold valve were on continuously, the PCB temperature sensor was at 43$\degree{C}$, the pump at 42$\degree{C}$ and the manifold at 33$\degree{C}$. As the pump will never be on for more than a few minutes at a time, there is no concern that these temperatures will be reached during flight.
\subsubsection{Test 20: Switching Circuit Testing and Verification}
The switching circuit has been continuously tested from breadboard to PCB and verified to work at each different step. All valves, heaters and pump can be controlled both manually and automatically by the Arduino. For further details on this test see Appendix \ref{sec:test33result}.
\subsubsection{Test 32: Software Failure}
It was found that losing the SD card does not interrupt ground station data; it only means that no data will be written to the SD card. However, if the SD card is reconnected after removal, the software will not reconnect to it and it is as if the SD card had been permanently lost.\par
The second failure test concerned how the software handles an unexpected reset. The most concerning problem is which bag will be sampled after the reset. It was verified that the software could read the current sampling status from the SD card and continue where it left off.
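The recovery logic amounts to persisting the sampling status on the SD card and re-reading it at boot. The sketch below only illustrates this idea in Python; the flight software runs on the Arduino, and the file name and fields used here are hypothetical.
\begin{verbatim}
import json
import os

STATUS_FILE = "sampling_status.json"  # hypothetical status file on the SD card

def save_status(next_bag, phase):
    """Persist the current sampling status so an unexpected reset can be recovered."""
    with open(STATUS_FILE, "w") as f:
        json.dump({"next_bag": next_bag, "phase": phase}, f)

def load_status():
    """On boot, read the last known status; default to the first bag if no file exists."""
    if not os.path.exists(STATUS_FILE):
        return {"next_bag": 1, "phase": "ascent"}
    with open(STATUS_FILE) as f:
        return json.load(f)

if __name__ == "__main__":
    status = load_status()  # after a reset, continue where the software left off
    print("Resuming with bag", status["next_bag"], "in", status["phase"], "phase")
    save_status(status["next_bag"] + 1, status["phase"])  # called after a bag is sampled
\end{verbatim}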
\subsubsection{Test 31: Unit Test}
Unit tests were used to test several non-hardware-dependent software functions, e.g. translating telecommands and storing measurement data to the buffer. The functions were tested for several expected cases and a few bugs were discovered and fixed.
\subsubsection{Test 10: Software and Electronics Operation}
The OBS transmits data to the ground station continuously. If the connection is lost and later re-established, the ground station will continue to receive data from the onboard computer. Sending a telecommand after a drop in connection requires a restart of the TCP connection.
\subsubsection{Test 14: Ground Station-OBC Parameters Reprogram Test}
After the scheduler, a command to change the sampling parameters, was implemented, it was tested and shown to be able to change the sampling schedule from ascent to descent. The previous parameters were $56.8$ and $36.8$; they were successfully changed to $30$ and $70$. However, the user needs to be careful and has to do the correct calculation for the new parameters.
\subsubsection{Test 24: Software and Electronics Integration}
The different types of sensors were integrated one at a time with the Arduino. The result was that all sensors worked without interfering with each other.
\subsubsection{Test 5: Thermal Test} \label{Test-5}
An AAC test box made of Styrofoam was put into a freezer at Esrange; only the bag area was reduced in size in order to fit. The test ran for 4 h and 40 min, of which 3 h 30 min were at a temperature of $-50\degree{C}$. During that span it was tested whether the heating system worked between its thresholds, whether flushing and sampling could be performed, and whether the pump could be run and allowed to fall below zero degrees while operating. From Figure \ref{fig:thermal-test-esrange-5-3} it can be seen that the pump (pink) and the valve (black) were heating and staying within their thresholds. It was concluded that everything worked as it should. After 3 h 30 min the freezer was slowly brought down to $-60\degree{C}$ to test the experiment's heating system further.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{5-experiment-verification-and-testing/img/Thermal-test-esrange.jpg}
\caption{Thermal Chamber Test.}
\label{fig:thermal-test-esrange-5-3}
\end{figure}
\subsubsection{Test 27: Shock test}
The entire pneumatic system and electrical system was mounted in the AAC box along with the walls and Styrofoam attached. It was then dropped from a height of approximately one meter three times. Nothing came loose or was damaged after this drop test. All electronics were verified to still work.
\subsubsection{Test 9: Vibration test}
The entire experiment was placed in the back of a car, and the test was carried out over 18 km of rough terrain. An emergency stop was also performed during the test. The experiment's functionality and structural integrity withstood the vibrations and the stopping force.
No damages or issues were detected after this test.
\subsubsection{Test 25: Structure test}
A team member was placed on top of each box's structure. Both the CAC and AAC boxes were able to fully support the member's weight without showing any instability or deflection. No damages or issues were detected after this test.
\subsubsection{Test 33: Electrical Component Testing}
\label{sec:Test33Test-Electronical-Component-Testing}
The components were tested separately, one by one, to double-check and determine their power consumption and functionality. Some tests were also run to determine specific resistances on voltage bridges and pull-down resistors for LEDs running at different voltages. These tests gave further insight into the PCB design and the power design. The test results were according to expectations, and the design and assembly could continue as planned. Some tests were also run for the PCB, which showed that some connections had not been made as planned due to design issues. These were solved by adding wires to the PCB instead of redesigning it and ordering a new one, due to time and budget limitations. For further details on these tests see Appendix \ref{sec:test33result}.
\subsubsection{Test 12: Removal test}
A non-team member performed the removal of the CAC box based on the given instructions; it took that person 6 min and 25 s. This time is expected to be lower for the recovery team, as the items to be unscrewed were not yet clearly marked. One problem that occurred during this test was that the person had difficulty distinguishing the CAC from the AAC box. To resolve this, the boxes now have clear labels on them.
\subsubsection{Test 2: Data collection test}
The full software was run in auto mode to check that everything operated as expected over a full test flight. At the end of the simulated flight the experiment was to shut down automatically. This was tested both on the bench and in the vacuum chamber. During the vacuum chamber tests (see Section \ref{lowpressure}), the bench test (see Section \ref{benchtest}) and the thermal test (see Appendix \ref{thermaltestresults}), data collection was also monitored. It was found that the physical samples were being collected properly and all the sensors were returning expected data.
\subsubsection{Test 7: Bench test}\label{benchtest}
The experiment was run for 5 hours simulating 1 hour on ground, 1.5 hours in ascent, 2 hours in float and 0.5 hours in descent. The experiment was found to be operating as intended at all points. Additionally the temperature sensors have been tested at ambient conditions for over 6 hours and the pressure sensors for over 8 hours. No problems were found with the temperature sensors or pressure sensors on the bench.
\subsubsection{Test 16: Sampling test}
The system was tested while already mounted as this test was pushed back due to the late arrival of the static pressure sensor.
The Arduino successfully controlled all valves and the pump, and through the static pressure and airflow sensor readings alone it could be confirmed whether a bag was sampling.
\pagebreak
\section{Launch Campaign Preparations}
\subsection{Input for the Campaign / Flight Requirements Plans}
The TUBULAR experiment consisted of two boxes with one air sampling system inside each of them. It was positioned with at least one side exposed to the outside.
\subsubsection{Dimensions and Mass}
\label{sec:dim-mass}
The data shown in Table \ref{tab:dim-mass-tab} below was based on the design presented in Section \ref{Mechanical_Design}.
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|}
\cline{2-4}
 & CAC & AAC & TOTAL \\ \hline
\multicolumn{1}{|c|}{Experiment mass {[}kg{]}} & 11.95 & 12.21 & 24.16 \\ \hline
\multicolumn{1}{|c|}{Experiment dimensions {[}m{]}} & $0.23 \times 0.5 \times 0.5$ & $0.5 \times 0.5 \times 0.4$ & $0.73 \times 0.5 \times 0.5$ \\ \hline
\multicolumn{1}{|c|}{Experiment footprint area {[}$m^2${]}} & $0.115$ & $0.25$ & $0.365$ \\ \hline
\multicolumn{1}{|c|}{Experiment volume {[}$m^3${]}} & $0.0575$ & $0.1$ & $0.1575$ \\ \hline
\multicolumn{1}{|c|}{Experiment COG position} & \begin{tabular}[c]{@{}l@{}}$X=23.51\ cm$\\ $Y=10\ cm$\\ $Z=22.57\ cm$ \end{tabular} & \begin{tabular}[c]{@{}l@{}} $X=29.04\ cm$\\ $Y=16.63\ cm$\\ $Z=16.2\ cm$ \end{tabular} &\begin{tabular}[c]{@{}l@{}} $X=26.31\ cm$\\ $Y=24.99\ cm$\\ $Z=19.35\ cm$ \end{tabular} \\ \hline
\end{tabular}}}
\caption{Experiment Summary Table.}
\label{tab:dim-mass-tab}
\end{table}
\subsubsection{Safety Risks}
Table \ref{tab:safrisk} lists the safety risks for all stages of the campaign and project.
\begin{longtable}{|m{0.12\textwidth}|m{0.4\textwidth}|m{0.4\textwidth}|}
\hline
\textbf{Risk} & \textbf{Key Characteristics} & \textbf{Mitigation} \\ \hline
\st{Flammable substances} & \st{Styrofoam Brand Foam is oil based and is highly flammable}\footnote{Styrofoam has been found to only pose a flammable hazard when heated to at least 346$\degree{C}$.\cite{dowsverige}\label{fn:keychar}} & \st{Extensive testing will be performed to make sure there is no heat/fire source} \\ \hline
Sharp or cutting edges & Edges along the experiment & File down edges and cover them with tape \\ \hline
Chemical substances & Chemicals could be exposed after a hard landing & Magnesium Perchlorate filter mechanism is sealed and has been used before without any problem. In case of exposure after a hard impact, use protective goggles and gloves to avoid contact with the eyes and skin. The small quantities used for the experiment will not be a threat for the environment. Magnesium Perchlorate alone is not flammable but may cause or intensify fire in case of contact with combustible material. Therefore, the filter is made of stainless steel, which has high durability. \\ \hline
Pressure Vessels & Compressed fluid containers can pose a risk of exploding if damaged & Pressurised gas will be used to flush the system before flight and to calibrate the sensors before analysing our samples after landing. NO pressurised vessels will fly.
Three gas cylinders will be brought to Esrange by FMI. The cylinders will contain compressed dry air: \newline
Flush gas for the bag sampler: 20L at 140 bar \newline
Calibration gas for Picarro: 14L at 130 bar \newline
Flush/fill gas for AirCore: 26.8L at 110 bar (there will be 13 ppm CO in the cylinder) \\ \hline
\caption{Experiment Safety Risks.}
\label{tab:safrisk}
\end{longtable}
\raggedbottom
\subsubsection{Electrical Interfaces}
Please refer to Table \ref{tab:electrical-interface-table} for details on the electrical interfaces with the gondola.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{\textbf{BEXUS Electrical Interfaces}} \\ \hline
\multicolumn{3}{|c|}{\textbf{E-link Interface: Yes}} \\ \hline
\multirow{4}{*}{} & Number of E-link interfaces & 1 \\ \cline{2-3}
& Data rate - Downlink & 1.58 kbps \\ \cline{2-3}
& Data rate - Uplink & 1.08 kbps \\ \cline{2-3}
& Interface type (RS232, Ethernet) & Ethernet \\ \hline
\multicolumn{3}{|c|}{\textbf{Power system: Gondola power required? Yes}} \\ \hline
\multirow{2}{*}{} & Peak power (or current) consumption: & 38 W \\ \cline{2-3}
& Average power (or current consumption) & 24 W \\ \hline
\multicolumn{3}{|l|}{\textbf{Power system: Experiment includes batteries? No}} \\ \hline
\end{tabular}
\caption{Electrical Interface Table.}
\label{tab:electrical-interface-table}
\end{table}
\raggedbottom
\subsubsection{Launch Site Requirements}
The experiment needed some preparations before the flight. For that reason, the team needed a room with a big table on which to place the Picarro analyzer, with some extra space for all the interfaces between the analyzer and the CAC system, as well as the AAC system.
A laptop PC was used to monitor the experiment, so a desk and a chair were needed for this station. A total of 16 chairs needed to be rented: 13 chairs for all members of the TUBULAR Team and an additional three for visiting collaborators from FMI. One power outlet and one Ethernet cable for the E-link connection were also essential for the laptop PC.
\subsubsection{Flight Requirements}
The floating altitude was desired to be as high as possible in order to sample air from the stratosphere both in the Ascent and Descent Phases. The duration of the Float Phase was not relevant for the experiment performance.
\smallskip
No conditions for visibility were required for this experiment.
\smallskip
With respect to a swift recovery and transport for fast data analysis, a launch time in the early morning hours was favorable.
\pagebreak
\subsubsection{Accommodation Requirements}
The experiment involved two rectangular boxes inside the gondola environment. The only requirement was to place the boxes with at least one face exposed to the outside. This also facilitated the fast recovery of the experiment for the later analysis of the collected samples. The design allowed full adaptability regarding the interface with the gondola's rails; for more details see Section \ref{Mechanical_Design}. The location of the experiment shown in Figure \ref{goldola_accommodation} is the one arranged with the REXUS/BEXUS coordinators during the Training Week in Esrange.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{6-launch-campaign-preparation/img/Figure_49_Gondola.png}
\caption{Example of Experiment Box Accommodation Inside the Gondola.}
\label{goldola_accommodation}
\end{figure}
\pagebreak
\subsection{Preparation and Test Activities at Esrange}\label{prep_for_Esrange}
The ground station laptop PC was put in place and set up so that it was operational. The communication with the experiment through E-link was tested. The air sampling schedule on the SD card was checked before flight.
In the preparation phase the magnesium filters were prepared. These were short (7 cm) lengths of stainless steel tubing that were filled with 2 mg of fresh magnesium perchlorate powder \cite{Karion}. One was attached to the inlet of the CAC tubing to ensure that no moisture entered the tubing during testing or sampling. The magnesium perchlorate powder was loosely packed to make sure that the air flow was not blocked. Stone wool was placed at both ends of the tube to prevent the powder from escaping the filter.
The same set-up was used for the AAC. As stratospheric air is dry, the risk of moisture entering the system during sampling was very low; however, the team decided to use one to reduce the risk of condensation in the samples after landing.
A few days before the flight, while still in Finland, the CAC was left inside an oven at 110$\degree{C}$ for 5 hours. At the same time, nitrogen was run through the CAC at a flow rate of 110 ml/min. This was necessary in order to remove humidity sufficiently through evaporation. The high temperature of the oven, combined with the nitrogen running through the system, made sure that the humidity was removed from the coil.

Two days before the flight, the CAC went through some preparations. At 19:44 on Sunday 14 October, the flushing of the coiled tube with fill gas started. A fill gas is air with a spike of a known gas, for example CO. During the flushing process the coiled tube, the solenoid valve, the exit tube as well as the magnesium perchlorate filter were flushed separately. In the flushing process the quick connectors at the outlet and inlet were connected to the fill gas bottle and the Picarro analyzer respectively. The fill gas was then flown through the coiled tube all the way to the Picarro at a flow rate of 2 L/min for 10 minutes. It was then left flushing over night at a flow rate of 40.8 ml/min to ensure that unknown gases inside the tube would be removed.

After approximately 11 hours, the flushing of the CAC ended at 07:03 on Monday 15 October. When the flushing procedure was over, the Picarro was disconnected while the fill gas remained connected in order to over-pressure the CAC. This ensured that when the other parts were connected, ambient air would not enter the system, as the fill gas would be exiting. Leaving the CAC over-pressured also ensured that, if the quick connectors were leaking, it would be fill gas leaking out and not ambient air leaking in. This was important as there were two days between flushing and flight. Meanwhile, the solenoid valve and the exit tube were flushed manually for approximately 5 minutes at a flow rate of 2 L/min. As a last step, the outlet and inlet were sealed while the gas was still running through the CAC, so that the CAC was filled and over-pressured. Thereafter it was attached to the remaining components, such as the magnesium perchlorate filter, solenoid valve and exit tube. At this stage the CAC was ready for the flight.
A pre-launch checklist, given in Appendix \ref{sec:appL}, was made to ensure that the flight preparations would be done thoroughly. This includes a step-by-step flushing procedure.
For the AAC system, the manifold was cleaned by flushing it with a dry gas as soon as all the pre-flight testing was done. The dry gas is extracted from the fill gas and has slightly different concentrations from the fill gas. The dry gas bottle, the vacuum pump and the AAC system were all connected as a system to the central valve. The pump and the valves of the AAC system were cleaned by this procedure.

The night of the flight, Tuesday 16 October at 23:00, the flushing of the AAC started. The plugs of the inlet and outlet tubes of the AAC were unscrewed and a male-thread quick connector was screwed into the inlet tube. The dry gas, vacuum pump and central valve system were then connected to the inlet tube. Flushing started when the central valve was opened to dry gas, and dry gas started flowing into the AAC manifold while the flushing valve was open and the rest of the valves were closed. The flow rate was 2 L/min and the flushing procedure went on for approximately 15 minutes. When the central valve was closed and dry gas stopped flowing into the AAC, the flushing valve was closed. The dry gas bottle, the vacuum pump and the central valve system were disconnected from the inlet tube and the plug was screwed back into the inlet tube.
In a second phase, using the AAC valves this time, the bags, and consequently the tubes between the bags and the manifold, were flushed, again after the pre-flight testing was done. Only one bag was flushed at a time, using the central valve, the flushing valve and the solenoid valve that matched the bag to control which bag was being flushed. The flushing had to be done three times for each bag to ensure that the bags were properly cleaned. It was also important to flush the manifold again between the flushing of each bag.
The dry gas, the vacuum pump and the central valve system were connected to the outlet tube, while the inlet tube was sealed. Next, the bags' manual valves were opened. The flushing valve was kept open during the whole procedure. Only the one solenoid valve that matched the bag being flushed was opened at a time.

Flushing started when the central valve was opened to dry gas, and dry gas started flowing through the AAC manifold and tubes into the bag. The flow rate was set at 2 L/min, so it took 1.5 minutes to fill each bag with 3 L of dry gas. Then the central valve was turned open to the vacuum, allowing the bag to empty for approximately 4.5 minutes. This procedure was repeated for all six bags. After the flushing of one bag was completed, the dry gas, vacuum pump and central valve system were disconnected from the outlet tube and connected to the inlet tube, allowing the manifold to be flushed before flushing the next bag, as described above. Then the dry gas, vacuum pump and central valve system were connected to the outlet tube again and the next bag was flushed. The whole flushing procedure took approximately 3 hours. After the end of the flushing, when the bags were empty again, the flushing valve was closed. The dry gas, vacuum pump and central valve system were disconnected from the outlet tube and the plug was screwed back in. At this point, the AAC was ready for flight.
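A rough timing check of the quoted duration, using only the numbers above (the split of the remaining time between manifold flushes and reconnections is an assumption):
\begin{verbatim}
# Rough timing check of the AAC bag-flushing procedure described above.
FILL_RATE_L_PER_MIN = 2.0   # dry gas flow rate
BAG_VOLUME_L = 3.0          # volume flushed into each bag
EMPTY_TIME_MIN = 4.5        # approximate time to empty a bag with the vacuum pump
CYCLES_PER_BAG = 3          # each bag was flushed three times
N_BAGS = 6

fill_time = BAG_VOLUME_L / FILL_RATE_L_PER_MIN            # 1.5 min per fill
per_bag = CYCLES_PER_BAG * (fill_time + EMPTY_TIME_MIN)   # 18 min per bag
bag_cycles_total = N_BAGS * per_bag                       # 108 min of fill/empty cycles

print(f"Fill/empty cycles alone: {bag_cycles_total:.0f} min")
# The remaining ~1 h of the quoted ~3 h is assumed to cover the manifold flushes
# between bags and the reconnections of the dry gas / vacuum pump system.
\end{verbatim}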
The pre-launch checklist in Appendix \ref{sec:appL} was again used to make sure that all the steps were done correctly and in the right order.
In a laboratory phase, tests under monitored conditions were done to evaluate the overall consistency of the CAC and the AAC. In particular, the CAC and the AAC were tested for leaks at the junctions and at the valves.
Furthermore, the team decided to clean the rest of the experiment's components, such as the Brain, as well as the structure. In doing so, any unwanted particles released during the experiment's construction were removed, preventing them from entering the pneumatic system and contaminating the collected samples.
The system was cleaned manually with a dust cloth, using gloves and IPA, given that this cleaning procedure is not as critical as the cleaning of the coil or the bags. Considering that the building of the experiment took place in a lab, which was a clean environment, this action was done once before the flight. This procedure was done just after EAR.
\pagebreak
\subsection{Timeline for Countdown and Flight}
Table \ref{tab:countflight} shows the timeline estimated for countdown and flight.
The desired altitudes at which air samples were to be collected with the sampling bags were associated with specific air pressure values. Thus, the valve operations to sample air during the balloon Ascent and Descent Phases were to be triggered by readings from the ambient pressure sensor; a short sketch of the altitude-to-pressure conversion is given after the table. The time values presented in Table \ref{tab:countflight} merely served as an indicative estimate of when the sampling would take place, as sampling was not programmed based on flight time.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Time}} & \multicolumn{1}{c|}{\textbf{Altitude}} & \multicolumn{1}{c|}{\textbf{Events}} \\ \hline
\multicolumn{1}{|c|}{T-1/2DAYS} & \multicolumn{1}{c|}{0} & Start flushing the CAC system overnight for 8H \\ \hline
\multicolumn{1}{|c|}{T-7H} & \multicolumn{1}{c|}{0} & Start flushing the AAC system for 3H \\ \hline
\multicolumn{1}{|c|}{T-3H} & \multicolumn{1}{c|}{0} & Experiment is switched on external power \\ \hline
\multicolumn{1}{|c|}{T-3H} & \multicolumn{1}{c|}{0} & Experiment goes to Standby mode \\ \hline
\multicolumn{1}{|c|}{T-1H} & \multicolumn{1}{c|}{0} & Experiment switches to internal power \\ \hline
\multicolumn{1}{|c|}{T=0} & \multicolumn{1}{c|}{0} & Lift-off \\ \hline
\multicolumn{1}{|c|}{T+1s} & \multicolumn{1}{c|}{$\sim$5 meter} & Experiment goes to Normal - Ascent mode \\ \hline
\multicolumn{1}{|c|}{T+15 min} & \multicolumn{1}{c|}{1 km} & Experiment starts to empty the CAC's tube\\ \hline
%T+45 min & 15 km & Experiment stops emptying the tubes \\ \hline
\multicolumn{1}{|c|}{T+$\sim$1H} & \multicolumn{1}{c|}{$\sim$18 km} & Take air samples with AAC until $\sim$24 km \\ \hline
T+$\sim$1.5H & $\sim$25 km & Float Phase \\ \hline
T+$\sim$2.5H & $\sim$25 km & Cut-off \\ \hline
T+$\sim$2.6H & $\sim$25 km & Experiment goes to Normal - Descent mode \\ \hline
T+$\sim$2.75H & $\sim$20 km & Parachute is deployed \\ \hline
T+$\sim$2.8H & $\sim$19 km & Take air samples with AAC and CAC until 10 km above ground \\ \hline
T+3.5H & $\sim$10 km & Experiment goes to SAFE mode (all valves are closed) \\ \hline
\end{tabular}
\caption{Countdown and Flight Estimated Timeline.}
\label{tab:countflight}
\end{table}
\raggedbottom
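Since sampling was triggered on ambient pressure rather than on time, each target altitude has to be converted into a pressure threshold. The sketch below is an illustrative reimplementation of that conversion using the 1976 US Standard Atmosphere layers between 11 and 32 km; it is not the flight code, but it reproduces the pressures quoted earlier (e.g. 75.0 hPa at 18 km and 46.8 hPa at 21 km).
\begin{verbatim}
import math

G0 = 9.80665   # gravitational acceleration, m/s^2
R = 287.053    # specific gas constant for dry air, J/(kg K)

def pressure_hpa(h_m):
    """1976 US Standard Atmosphere pressure for 11 km <= h <= 32 km."""
    if 11000 <= h_m <= 20000:
        # Isothermal layer at 216.65 K, starting from 226.32 hPa at 11 km.
        return 226.32 * math.exp(-G0 * (h_m - 11000) / (R * 216.65))
    if 20000 < h_m <= 32000:
        # Layer with a +1 K/km lapse rate, starting from 54.749 hPa at 20 km.
        t = 216.65 + 0.001 * (h_m - 20000)
        return 54.749 * (216.65 / t) ** (G0 / (R * 0.001))
    raise ValueError("sketch only covers 11-32 km")

for h_km in (12, 14, 16, 17.5, 18, 21, 24):
    print(f"{h_km:5.1f} km -> {pressure_hpa(h_km * 1000):6.1f} hPa")
\end{verbatim}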
Table \ref{tab:LaunchTimelineActuall} shows the actual timeline during flight. The in-flight pump start-up failure is shown together with the relevant actions taken during the in-flight analysis of what the problem might have been; the attempted differential-pressure procedure is also shown. After attempting to start the pump several times, the team recognised that a likely cause of failure was the pump not getting enough current, and therefore several different procedures were attempted to start it. The first was attempting to turn the pump on when everything other than the Arduino was switched off. The second was heating the pump up until the temperature readings showed that it was near the top of its operating range and then attempting to turn it on, still with all other components except the Arduino turned off. Neither of these attempts worked. A third idea was to try to start it by creating a pressure difference during the descent; however, this was not attempted as it risked the CAC sample. Instead, during descent the valves were opened to attempt passive sampling of the bags. However, due to the lack of pressure difference between the bags and the ambient air, this had a low probability of success.
\begin{longtable}{|m{0.1\textwidth}|m{0.15\textwidth}|m{0.7\textwidth}|}
\hline
\textbf{Time} & \textbf{Altitude} & \textbf{Event}\\ \hline
T-03:54 & 0 & Go to manual mode for 6 seconds, then back to standby\\ \hline
T-03:07 & 0 & Restart ground station\\ \hline
T-03:02 & 0 & Ground station back online and receiving data\\ \hline
T-00:00 & 0 & Lift-off and automatic Ascent mode \\ \hline
T+00:31 & 10.1-10.3 km & Pump heater on for 12 minutes\\ \hline
T+00:58 & 18.8-19.1 km & First flushing was supposed to start; instead the Arduino reset, resulting in the CAC valve closing \\ \hline
T+00:58 & 18.8-19.1 km & Enter Manual Mode\\ \hline
T+01:01 & 19.7-20 km & Flushing valve opens for 20 seconds\\ \hline
T+01:05 & 20.9-21.2 km & CAC valve reopened\\ \hline
T+01:06 & 21.2-21.6 km & Pump heater is on for 6 minutes \\ \hline
T+01:11 & 22.8-23.1 km & Board resets due to attempting to start the pump\\ \hline
T+01:12 & 23.1-23.5 km & Pump heater is on for 4 minutes \\ \hline
T+01:17 & 24.4-24.5 km & Board resets due to attempt to start the pump and open the flushing valve\\ \hline
T+01:20 & 24.4-24.4 km & Flushing valve and Valve 1 are turned on to check whether this would induce an error \\ \hline
T+01:21 & 24.4-24.5 km & Float Phase entered\\ \hline
T+01:32 & 24.5-24.5 km & Scheduler changed to 1 - 2 mbar for all bags as an attempt to make sure the system would not attempt to automatically sample \\ \hline
T+01:36 & 24.4-24.4 km & Pump and valve heaters on \\ \hline
T+02:10 & 24.3-24.4 km & Pump and valve heaters off\\ \hline
T+02:10 & 24.3-24.4 km & Attempt to start pump fails and resets the board\\ \hline
T+02:12 & 24.3-24.3 km & CAC valve opens to prepare for descent\\ \hline
T+02:55 & 24.1-24.1 km & Descent Phase entered\\ \hline
T+03:04 & 14.9-14.1 km & Valve 2 is opened in an attempt to fill the bags with the ambient pressure difference \\ \hline
T+03:09 & 11.0-10.4 km & Valve 2 is closed \\ \hline
T+03:10 & 10.4-9.8 km & Valve 3 is opened in an attempt to fill the bags with the ambient pressure difference \\ \hline
T+03:14 & 7.9-7.3 km & Valve 3 is closed \\ \hline
T+03:15 & 7.3-6.7 km & Valve 4 is opened in an attempt to fill the bags with the ambient pressure difference \\ \hline
T+03:16 & 6.7-6.2 km & Valve 4 is closed \\ \hline
T+03:16 & 6.7-6.2 km & Accidental closing of the CAC valve too early \\ \hline
T+03:17 & 6.2-5.7 km & CAC valve accidentally reopens for 1 second \\ \hline
\caption{TUBULAR BEXUS 26 Launch Timeline.}
\label{tab:LaunchTimelineActuall}
\end{longtable}
\pagebreak
\subsection{Post Flight Activities}
\subsubsection{CAC Recovery}
It was important that the CAC was recovered as quickly as possible. The experiment had been designed so that the recovery team could easily remove the AirCore in the CAC box from the gondola without having to remove the entire experiment. This was to facilitate possible transportation back to Esrange via helicopter.
This quick recovery was important to minimize the length of time in which mixing of the gas occurred in the collected CAC sample. The sample should be analyzed within five to six hours after the experiment lands. At PDR it was discussed that the CAC box could be brought back to Esrange on the helicopter instead of the truck. This situation was preferable for the TUBULAR Team.
The FMI team arrived at Esrange on the 12th of October with all the necessary equipment for pre-flight flushing and post-flight analysis. Having the FMI team at Esrange gave them additional time to install and calibrate their lab equipment and also allowed them to proceed faster with the analysis process as soon as the CAC was returned to Esrange.
Detailed instructions were provided on how to remove the CAC box. In addition, instructions were provided to ensure that the system was completely shut down and the valves secured. Shutdown was automated; however, a manual shutdown mechanism was included should the automation fail.
\subsubsection{Recovery Checklist}
\label{sec:recovery-checklist}
FAST RECOVERY OF CAC
\begin{itemize}
\item Check that no damage exists to the outer structure and that no white paste is seen in the inlet tubes; this confirms that there is no leak and the chemicals are SAFE.
\item Screw on the three metal plugs provided to the inlet and outlet tubes.
\item Unplug the gondola power cord from the AAC box. Circled with RED paint.
\item Unplug the E-Link connection from the AAC box. Circled with RED paint.
\item Unplug the D-Sub connector from the CAC Box. Circled with RED paint.
\item Unscrew 6 screws in the outside face of the experiment. Painted in RED.
\item Unscrew 6 screws in the inside face of the experiment. Painted in RED.
\item Unscrew 2 gondola attachment points from the CAC.
\item Remove the CAC Box from the gondola. Handles located at the top of the box.
\end{itemize}
The regular recovery of the AAC and the non-nominal recovery of the experiment are listed in Section \ref{ssec:RecoveryCheck} of Appendix \ref{sec:Checklist}.
%The analysis results that will then be used for the post flight meeting. Further analysis will then be carried out to fully understand the data. Once a full analysis of the data has been completed there is the potential for publication of research findings.
\subsubsection{Analysis Preparation}
In order to efficiently remove ambient air moisture from the analyzer, a calibration gas had to run through the Picarro analyzer from the gondola cut-off phase until the CAC analysis. This was done because the readings of the calibrating gas needed to stabilize before starting the analysis, and the presence of moisture would have made this stabilization slower. Having the analyzer running for a few hours before the CAC recovery saved precious time, as it made it possible to start the analysis as soon as the CAC was recovered.
% After completing the flushing of the CAC, the picarro analyzer has to keep working. For that reason a calibrating gas will be connected to the analyzer and run through it, until the analysis of the CAC, preventing ambient air moisture to enter into the analyzer.
% It is necessary that the readings of the calibating gas running through the picarro analyzer have been stabilized before analyzing the CAC sample. This will distinguish where the collected air sample starts and where it stops.
% So, having the analyzer running during the flight will save precious time, and Using a gas with known concentrations will make easy the distinguish between the collected samples and By the time the CAC is recovered, the readings of the picarro will have stabilize and the analysis will start immediately.
\pagebreak
\section{Data Analysis and Results}
\subsection{Data Analysis Plan}\label{sec:data-analysis-plan}
\subsubsection{Picarro G2401}
The analyzer that was used is the Picarro G2401. It uses near-infrared Cavity Ring Down Spectroscopy (CRDS) technology and is capable of measuring four atmospheric trace gases simultaneously and continuously ($CO, CO_2, CH_4, H_2O$).
The basic principle of the CRDS technique is shown in Figure \ref{fig:CRDS}. Light from a semiconductor diode laser is used. There is an optical cavity filled with the gas to be analyzed, and the aim is to determine the decay time of the diode laser light. As can be seen in Figure \ref{fig:CRDS}, the sample gas is introduced into a cavity with three high-reflectivity mirrors. When the laser is shut off, the light that was circulating in the cavity decays with a characteristic time which is measured. If the wavelength of the injected light does not match any absorption feature of any gas in the cavity, the decay time is dominated by mirror loss and is very long. On the other hand, when the wavelength of the injected light is resonant with an absorption feature of a species in the cavity, the decay time is short and decreases as the reciprocal of the species concentration.
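In compact form (this is the standard textbook ring-down relation rather than Picarro's internal processing), the decay times map to the number density $N$ of the absorbing species via
\begin{align*}
\frac{1}{\tau} = \frac{1}{\tau_0} + c\,\sigma(\lambda)\,N
\quad\Longrightarrow\quad
N = \frac{1}{c\,\sigma(\lambda)}\left(\frac{1}{\tau} - \frac{1}{\tau_0}\right),
\end{align*}
where $\tau_0$ is the empty-cavity (off-resonance) decay time, $c$ is the speed of light and $\sigma(\lambda)$ is the absorption cross-section at the laser wavelength. This is why a shorter decay time corresponds to a higher concentration.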
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/CRDS.png}
\caption{Schematics of CRDS Analyzer Showing Optical Cavity and Sample Gas Flow \cite{Picarro}.\label{fig:CRDS}}
\end{figure}
Figure \ref{fig:Picarro-interfaces} shows the back of the analyzer with gas supply, electrical and computer connections. The analyzer can be configured to deliver data in different formats: digital or analogue. When the main power is turned on, the analyzer automatically starts, including the Graphical User Interface (GUI).
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/Picarro-interfaces.png}
\caption{Back of Picarro G2401 Analyzer Showing Gas Supply, Electrical and Computer Connections \cite{Picarrouserguide}.\label{fig:Picarro-interfaces}}
\end{figure}
Before the Picarro analyzer was ready for analysis, it was necessary to run a calibrating gas through it in order to remove moisture inside and to have stable measurements to compare with. Figure \ref{fig:picarro-connections} shows the Picarro set-up in Esrange. A three-way valve controlled which gas was flowing into the analyzer. The tube labelled "AIRCORE" was the one to be connected to the sample, either the sampling bags or the CAC. The tube labelled "PICARRO" was the one that went to the Picarro's inlet, and the third tube, without a label, was connected to the calibrating gas bottle. This set-up allowed easy switching between the samples, dry gas and fill gas on one side and the calibrating gas on the other, without getting moisture inside.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/picarro-connections.jpeg}
\end{align*}
\caption{Picarro Set-up Connections at FMI in Sodankyl\"{a}. \label{fig:picarro-connections}}
\end{figure}
Figure \ref{fig:picarro-GUI} shows the Picarro GUI during analysis. From top to bottom: $CO_2$ ppm, $CO$ ppm, $CH_4$ ppm and cavity pressure. These display options could be changed during analysis, as they only determine which parameters are shown. Figure \ref{fig:picarro-GUI} was taken minutes after a change from calibrating gas to sample had been made, so a change in the concentrations of $CO_2$ and $CH_4$ can be easily appreciated.
The Picarro analyzer did not only give information about the displayed parameters; all the data was saved in a .dat file to be analyzed afterwards. The most relevant logged parameters were time, date, ambient pressure, cavity pressure, cavity temperature, $CO$ concentration, and the normal and dry concentrations of $CO_2$, $CH_4$ and $H_2O$.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/duringanalysis.jpg}
\end{align*}
\caption{Picarro Graphical User Interface. From Top to Bottom: $CO_2$ ppm, $CO$ ppm, $CH_4$ ppm and Cavity Pressure. \label{fig:picarro-GUI}}
\end{figure}
\subsubsection{Analysis Strategy}\label{sec:analysisstrategy}
Approximately one month after the CAC analysis, the Picarro raw data files were available and the analysis could start. Figure \ref{fig:picarro-raw-data} shows some of the Picarro's raw data.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/PicarroRawData.png}
\end{align*}
\caption{Picarro Raw Data Showing the Concentrations of $CO$, $CO_2$, and $CH_4$, all in ppm.\label{fig:picarro-raw-data}}
\end{figure}
The analysis was carried out in MATLAB. Several steps were required to accurately place the Picarro measurements on a vertical scale in order to retrieve the vertical profiles. The dry mole fractions of CO$_2$ and CH$_4$ provided by the Picarro were used, because these are automatically corrected by the instrument for the combined effect of dilution and line broadening caused by water vapor.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{7-data-analysis-and-results/img/FinalCO2.png}
\label{fig:CO2mixing}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{7-data-analysis-and-results/img/FinalCH4.png}
\label{fig:CH4mixing}
\end{subfigure}
\caption{Picarro Analysis of the CAC Sample from the BEXUS 26 Flight. Left: CO$_2$ Mixing Ratios as a Function of the Analysis Time in Seconds; Right: CH$_4$ Mixing Ratios as a Function of the Analysis Time in Seconds.\label{fig:mixingratios}}
\end{figure}
Figure \ref{fig:mixingratios} shows an example of CO$_2$ and CH$_4$ mixing ratios measured by the Picarro instrument during the BEXUS 26 campaign. In order to extract the measurements corresponding to the sampled air, the top and the bottom of the profiles needed to be defined. The top of the CAC sample was considered to be at the midpoint of the transition in concentration between the push gas and the remaining fill gas. This point is marked with a green star in Figure \ref{fig:mixingratios}.
The bottom of the profile was defined at the midpoint of the transition in concentration between the push gas and the sampled air. It is marked with a red star in Figure \ref{fig:mixingratios}.
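As a rough illustration of how such a transition midpoint can be located in the logged time series (a sketch with assumed variable names, not the exact routine used for the analysis):
\begin{verbatim}
% co2    : vector of CO2 mixing ratios (ppm) logged by the Picarro
% t      : corresponding analysis time (s)
% i1, i2 : indices bracketing the transition, chosen by inspection
plateauHigh = mean(co2(1:i1));            % level before the transition
plateauLow  = mean(co2(i2:end));          % level after the transition
midLevel    = (plateauHigh + plateauLow)/2;
% first sample inside the bracketed window that crosses the midpoint
idxMid = i1 + find(co2(i1:i2) <= midLevel, 1) - 1;
tMid   = t(idxMid);                       % time of the transition midpoint
\end{verbatim}
For an increasing transition (e.g. from sample to push gas at the bottom of the profile) the inequality would simply be reversed.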
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/FinalCO.png}
\end{align*}
\caption{CO Mixing Ratios as a Function of the Analysis Time in Seconds.\label{fig:COmixing}}
\end{figure}
The beginning and end of the sample analysis were detected from changes in concentration: at the beginning, between the calibrating gas and the sample, and at the end, between the sample and the fill gas. The remaining fill gas in the coil had a high concentration of CO, while the stratosphere had considerably lower CO concentrations. Figure \ref{fig:COmixing} was therefore used to define the sample. Again, the top of the profile is marked with a green star and the bottom of the profile with a red star.
It is assumed that the air entering the tube equilibrates the sample with the ambient pressure and adjusts very quickly to the mean coil temperature. As the characteristics of the CAC (length, diameter) do not change, ambient pressure and mean coil temperature are the two main factors that regulate the number of moles in the CAC. Using the ideal gas law, it is possible to calculate the number of moles captured in the tube along the whole trajectory,
\begin{equation}
PV = nRT \Leftrightarrow n = \frac{PV}{RT}
\label{eq:idealgaslaw}
\end{equation}
where P was the ambient pressure, V the inner volume of the CAC, n the number of moles, R the universal gas constant in J K$^{-1}$ mol$^{-1}$, and T the ambient temperature in Kelvin \cite{Membrive}.
A constant unit of pressure in the atmosphere was represented by a unit of length in the CAC tube, due to the way the CAC sampled the ambient air.
With measured time series of pressure ($P_i$) and temperature ($T_i$), it was possible to relate the number of air moles in the tube ($n_i$) to the atmospheric pressure at any given time during the flight:
\begin{equation}
n_i =\frac{P_i V}{R T_i},
\label{eq:ni-2}
\end{equation}
and this number was maximum when the CAC reached the surface,
\begin{equation}
n_{max} =\frac{P_s V}{R T_s},
\label{eq:n-max}
\end{equation}
where $P_s$ and $T_s$ corresponded to the surface pressure and to the temperature of the CAC when it landed at the surface.

The flow rate during the analysis was kept constant at 40.8 cm$^3$/min, which ensured that the number of moles going through the analyzer increased linearly with time. So, the number of moles at any time during the analysis was
\begin{equation}
n_i = n_{max}\frac{t_i}{\Delta t}
\label{eq:ni}
\end{equation}
where $\Delta t$ was the total time duration of the analysis between the top and bottom of the CAC sample.
In the next step, Equations \ref{eq:ni-2} and \ref{eq:ni} were used to associate a pressure point with every Picarro measurement of the sample in order to retrieve the vertical profiles.
In order to do that, it was important that the data points of the sample from the Picarro matched the data points for pressure and temperature from the ground station. This was not the case, because the Picarro data contained far more points. For that reason, the ground station data for pressure and temperature were interpolated to match the Picarro data.
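A minimal MATLAB sketch of this step, with hypothetical variable names (tGround, pGround, TGround for the ground-station time series, tDescent for the flight times during the descent sampling, tSample for the Picarro time stamps of the sample, V for the coil inner volume in m$^3$), could look as follows; it illustrates the procedure described above and is not the team's exact script:
\begin{verbatim}
R = 8.314;                                    % J K^-1 mol^-1

% Ground-station data are sparser than the Picarro data, so interpolate
% them onto the flight times of the descent
pDescent = interp1(tGround, pGround, tDescent, 'linear');   % hPa
TDescent = interp1(tGround, TGround, tDescent, 'linear');   % K

% Moles in the coil at each instant of the descent (n_i = P_i V / (R T_i))
nDescent = (pDescent*100) * V ./ (R * TDescent);

% Moles pushed through the analyzer grow linearly with analysis time
% (n_i = n_max * t_i / dt)
tRel = tSample - tSample(1);
nOut = max(nDescent) * tRel / tRel(end);

% Pressure assigned to every Picarro measurement of the sample
% (nDescent increases monotonically during the descent)
pAssigned = interp1(nDescent, pDescent, nOut, 'linear');
\end{verbatim}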
Finally, the CO, CO$_2$, and CH$_4$ vertical profiles were plotted against pressure. The resulting vertical profiles, as well as a discussion of the results, can be seen in Section \ref{sec:scientificresults}.
The AAC sampling system was planned to be analyzed in the same manner as the CAC, using the same Picarro gas analyzer. In the same way as for the CAC, the calibrating gas would have needed to flow through the analyzer until the moisture was at a minimum and the concentration readings were stable. Then the sampling bag system would have been connected to the analyzer and a dry gas bottle, in a similar way as was done in Test 17. The tubes connecting the sampling bags would have been flushed with dry gas and, once the concentrations given by the Picarro analyzer were stable, the air inside the sampling bags would have gone through the analyzer, followed again by dry gas.
Watching the Picarro GUI, it was easy to recognize when a sampling bag was being analyzed due to the difference in concentrations between its air and the dry gas. Again, as for the CAC, Equations \ref{eq:idealgaslaw} and \ref{eq:ni} were going to be used to relate a specific pressure point to every Picarro measurement of the sample.
The basic working principle used to obtain the concentrations from the analyzer was as follows:
\begin{itemize}
\item Have calibrating gas - sample - calibrating gas flowing through the analyzer. (It could also be the case: calibrating gas - dry gas - sample - dry gas - calibrating gas, but the principle was the same.)
\item Identify in the GUI readings the different gases easily seen by sudden variations in the concentrations.
\item Compare the calibrating gas reading with the known real value. Do this before and after the sample. This difference corresponds to the drift given by the Picarro.
\item Interpolate the values of drift from before and after the sample to obtain the drift during the sample.
\item Correct the readings given by the Picarro analyzer for this drift; the result is the real concentration value (a sketch of this correction is given after the note below).
\end{itemize}
NOTE: A calibrating gas was a gas that had been flowing through the Picarro analyzer multiple times and whose concentration was known with accuracy. A calibrating gas had to flow before and after the samples in order to compare the readings given by the analyzer with the real value and obtain a corrected value for the samples.
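A minimal MATLAB sketch of such a drift correction (hypothetical variable names; linear interpolation of the drift across the sample, as in the list above):
\begin{verbatim}
% knownCal            : certified concentration of the calibrating gas (ppm)
% calBefore, calAfter : mean Picarro readings of the calibrating gas
%                       before and after the sample (ppm)
% tBefore, tAfter     : times of those two calibration readings (s)
% tSample, cSample    : time stamps and raw readings of the sample

driftBefore = calBefore - knownCal;
driftAfter  = calAfter  - knownCal;

% interpolate the drift linearly over the duration of the sample
drift = interp1([tBefore tAfter], [driftBefore driftAfter], tSample, 'linear');

% drift-corrected sample concentrations
cCorrected = cSample - drift;
\end{verbatim}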
\pagebreak
\subsection{Launch Campaign}
\subsubsection{Flight preparation activities during launch campaign}
The scientific and pneumatic flight preparations can be found in Section \ref{prep_for_Esrange}.
On the first day of the campaign the experiment boxes were mounted into the gondola for the first time to check where the gondola fixation points should be. Once this was checked, the boxes were dismounted again for final preparations. Styrofoam was fixed onto the bottom of the gondola to act as extra support for the boxes. It was fixed with the same double-sided tape that was used to fix the Styrofoam to the walls of the experiment boxes.

Once the CAC box was fully integrated with the AirCore, it was discovered that one temperature sensor required re-soldering.

During preparations for the E-link test it was discovered that the Amphenol RJF21B connector was built in the incorrect configuration, and it had to be dismantled and rebuilt before E-link testing could be completed.

During the Flight Compatibility Test (FCT) it was discovered that the experiment was sensitive to the radio frequencies (RF) emitted by the VHF radios in use. If the VHF radios were used within a 10--15 m radius of the experiment box, errors would appear in the sensor data. In the most extreme case this caused a complete failure of the software, and no more data was received until a power cycle was completed. Following this discovery, a radio silence area was set up around the experiment to prevent these errors from occurring again. It is thought that this phenomenon was caused by two factors. The first is that the boxes housing the experiment have very large surface areas which are completely covered in aluminum and ground the electronics; when the experiment was on, RF interfering with the floating ground point on the box surface could therefore affect the grounding voltage. The second factor was that the I2C connection was spread across very long wires, making them more susceptible to interference. This also gave some background to the sensor issues experienced during thermal testing.

Just before mounting the boxes onto the gondola for the final time, all sides of the boxes were taped with kapton tape to cover any small gaps between the walls and the structural bars.

\subsubsection{Flight performance}
The flight began nominally with data being downlinked as expected. There were some communication losses before takeoff, but these were to be expected from the gondola antenna being too close to the ground.
Thermal systems were observed from the ground station to be operating nominally.

After takeoff the software successfully entered Ascent mode and the CAC successfully opened. Thermal control continued nominally.

At the first sampling point for the AAC subsystem the software operated as it should, attempting to turn on the pump; however, the pump failed to switch on and caused a full reset of the board. After the board reset, the software correctly re-identified the mode and reopened the CAC valve. Manual control was nevertheless taken in an attempt to remedy the pump. Unfortunately, all attempts to start the pump were unsuccessful during the ascent and float phases.

During the descent phase, in order to preserve the samples in the CAC, no further attempts were made to start the pump, since closing and reopening the CAC valve would have compromised the samples within it.

From takeoff until landing there were no sensor errors such as had been observed during testing. It is thought that the sensor errors during testing may have been due to RF from mobile phones and other on-ground emitters. This would explain why there were errors on the ground but not during flight.

Upon recovery it was noted that all mechanical systems had operated nominally.

\subsubsection{Recovery}
The recovery checklist in Section \ref{sec:recovery-checklist} was given to the recovery team to collect the CAC. Due to low cloud cover it was not possible to make a recovery by helicopter. Instead, the recovery team drove out to the landing site before hiking through several kilometers of forest. They found that the gondola had landed on the air inlet and outlet tubes; however, no damage was observed. It is thought that the gondola came down slowly due to the trees and tilted at the last moment. Dirt and forest debris were inside all three tubes.
The CAC was then returned to Esrange at around 1 am the same night. It was immediately hooked up to the analyzers, which had been prepared beforehand.

The AAC returned the following night at around 2 am and was also immediately investigated to see if any samples had been collected.

Both systems were returned before the gases inside the tubes and bags would have become too mixed. The TUBULAR Team is very grateful to all who made this recovery happen so fast given the conditions.

\subsubsection{Post flight activities} \label{post_flight}
The CAC system was recovered and brought back to Esrange approximately 13 hours after the gondola landed, and was immediately hooked up to the gas analyzer. The CAC analysis system can be seen in Figure \ref{fig:aircore-analysis}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/aircore-analysis.png}
\end{align*}
\caption{Schematics of CAC Analysis System \cite{AircoreFlights}.\label{fig:aircore-analysis}}
\end{figure}
For the analysis, parts 10 to 17, shown in Figure \ref{fig:CAC-schematic}, were removed. The magnesium perchlorate filter was also removed, wrapped in plastic foil for sealing reasons, and taken by the FMI personnel. The fill gas was connected to the quick connector body (9), seen in Figure \ref{fig:CAC-schematic}, and since the CAC valve closed at 6 km of altitude, one would expect a pressure decrease. In that case it would have been necessary to fill the coil with fill gas and bring it to ambient pressure before connecting it to the analyzer. But no pressure change was seen. This could only mean one of two things: either there was a leak and ambient air had entered the coil, or the CAC valve had never worked and the fill gas from the flushing procedure was still there. Next, the Picarro analyzer was connected to the quick connector body (1), as seen in Figure \ref{fig:CAC-schematic}. As mentioned in Section \ref{sec:data-analysis-plan}, during the flight calibrating gas was flowing through the Picarro G2401. As soon as the values measured with the continuous analyzer had stabilized to the expected values for the calibration gas, the analysis of the air captured in the coil could start. As soon as this connection was made, both CAC ends were opened simultaneously, and the valve shown in Figure \ref{fig:picarro-connections} was switched from the calibrating gas to the "AIRCORE" position. A few moments later the Picarro read the sample and the first readings showed up on the screen, as seen in Figure \ref{fig:first-readings}. The air was pulled from one end into the continuous analyzer while low-concentration push gas was pulled in through the other end. The top of the profile, with the remaining fill gas, was pulled first into the analyzer.
The fill gas was a high-concentration standard, in order to have a noticeable difference between the fill gas remaining in the coil and the stratospheric air sample at the top of the profile. The low-concentration calibration standard was chosen as push gas, in order to have a noticeable difference in mixing ratios compared with the expected values of CO$_2$ and CH$_4$ at the surface.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/StartofCACanalysis.jpg}
\end{align*}
\caption{First Readings of the CAC. From Top to Bottom: $CO_2$ ppm, $CO$ ppm, $CH_4$ ppm and Cavity Pressure.\label{fig:first-readings}}
\end{figure}
As seen in Figure \ref{fig:first-readings}, there was a sudden increase in the concentrations; this was the fill gas that had remained, as expected, in the coil. After a while there was a sudden decrease in the concentrations, and that was the start of the actual sample. The whole CAC profile can be seen in Figure \ref{fig:CAC-profile}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/cacprofile.jpg}
\end{align*}
\caption{CAC Complete Profile after the Analysis was Completed. From Top to Bottom: $CO_2$ ppm, $CO$ ppm, $CH_4$ ppm and Cavity Pressure.\label{fig:CAC-profile}}
\end{figure}
The slight increase in the $CO_2$ and $CH_4$ concentrations in Figure \ref{fig:CAC-profile} flagged the beginning of the tropospheric part of the sample. After approximately 40 minutes the CAC sample was almost finished, and it could be confirmed that the CAC valve was leaking, letting ambient air enter the coil. Even though air from the ground entered the coil, the humidity levels were kept low because of the magnesium perchlorate filter.

The CAC system managed to sample the stratosphere and the troposphere down to 6 km of altitude. The lower parts of the profile represent ambient air that went inside the coil through the valve. The analysis of the results can be seen in Section \ref{sec:scientificresults}.
After the analysis was completed, the stratospheric part of the sample was saved into a sampler composed of fifteen smaller tubes, as seen in Figure \ref{fig:aircore-sampler}. All the valves were open when the sample was introduced. Once the stratospheric part of the sample was in the sampler, all the valves were closed at the same time, separating the samples for different altitudes and preventing further molecular diffusion. This part of the sample will be further analyzed for isotopes and other atmospheric gases.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{7-data-analysis-and-results/img/aircore-sampler.jpeg}
\end{align*}
\caption{CAC Sampler with 15 Different Stages.\label{fig:aircore-sampler}}
\end{figure}
For the AAC, unfortunately, no data was collected due to the pump failure. A post-flight failure analysis was carried out on the pump; this can be seen in Section \ref{sec:failureanalysis}.

Data received by the ground station was also analyzed to find the pressure and temperature profiles during the flight. This was completed in MATLAB and was shown during the post-flight briefing at the campaign.
\subsection{Results}
The results gained from the TUBULAR flight can be broken down into the various subsystems as follows.
\DIFaddbegin \subsubsection{\DIFadd{Mechanical Subsystem Performance}}
\textbf{\DIFadd{Structural Performance}}
\smallskip
\DIFadd{The frame structure and the aluminum walls withstood all the flight phases providing the required protection to all the components inside both boxes.
}
\DIFadd{Regarding the frame, the most critical load that it had to face was during landing. The gondola landed on the side where the boxes where allocated, thus they experienced a high load. Thanks to the use of bumpers as anchors of the boxes to the gondola rails and the styrofoam as a sitting surface, the force was damped, see Figures \ref{fig:bumpers_landing} and \ref{fig:styrofoam_landing}. Consequently the boxes did not move from its original place.
}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{7-data-analysis-and-results/img/Bumpers_Post_Flight.jpg}
\caption{Position of the Bumpers after Landing.}
\label{fig:bumpers_landing}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{7-data-analysis-and-results/img/Styrofoam_Post_Flight.jpg}
\caption{Styrofoam Below the Boxes after Landing.}
\label{fig:styrofoam_landing}
\end{figure}
The walls did not suffer any remarkable damage apart from the dirt that stuck to them as a result of the sideways landing and some scratches from trees during landing.

\textbf{Pneumatic Circuit Performance}
\smallskip

The experiment had two separate pneumatic circuits, one in each box.

The air sampling with the CAC system was nominal. The valve opened and closed upon both automated and manual commands. This allowed the 300-meter coiled tube to be emptied during ascent and filled with stratospheric air during the descent phase.

On the other hand, the large pneumatic system of the AAC experienced a failure in the pump, which led to the failure of this alternative sampling system. Although the bags could not be filled with stratospheric air for later analysis, data from both the airflow and pressure sensors were received as expected and all the valves (manifold and flushing) worked nominally.

The failure analysis of the pump can be found in Section \ref{sec:failureanalysis}.
\subsubsection{Electrical Subsystem Performance}
Throughout the flight, none of the previously experienced sensor dropouts were seen. This is thought to be due to the absence of larger electromagnetic interferences.

All other electrical parts worked as intended.

For details on the full failure analysis of the pump see Section \ref{sec:failureanalysis}.
\subsubsection{Software Subsystem Performance}
The software managed to control the experiment through the majority of the phases of the mission. The software worked even with the frequent telemetry connection cutoffs before takeoff, and during takeoff it switched to Ascent mode successfully. When the failure with the pump occurred, it successfully reset and put itself into the correct mode. Since sampling caused a reset, it was decided that the software would be kept in Manual mode for the remainder of the flight.\par
An unforeseen behavior was observed during descent when the choice was made to change from Manual mode to Normal-Descent mode. Directly after this change a loss of communication occurred, before the link re-established itself a few seconds later with all valves closed and the experiment in Standby mode. This was indicative of a reset. The reset was most likely brought on by the fact that the experiment had passed several sampling points while in Manual mode. Manual mode was intended to be used only for a short while and not to skip a sampling point. Having been unable to take a sample in Manual mode, it is believed that the ASC performed a sampling as soon as the software had the authority to do so, in which the pump was involved, and therefore a reset happened. After the reset the software successfully went into Normal-Descent mode without taking a sample using the AAC. The choice was then made to take it into Safe mode, which closed every valve successfully.\par
During the flight the only interruption of telemetry was during the reboot of the software after a reset. A permanent loss of telemetry happened at a low altitude due to line-of-sight limitations. The on-board software continued to record sensor data for several hours after landing.
\subsubsection{Thermal Subsystem Performance}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{4-experiment-design/img/Termal_flight_true.jpg}
\caption{The Temperature for the Different Sensors During the Whole Flight.}
\label{fig:Termal_flight_true}
\end{figure}
The thermal results for the flight are shown in Figure \ref{fig:Termal_flight_true}. Neither of the critical components went below its operating threshold. The heaters operated as expected and kept the pump and manifold within their respective threshold limits. During the float phase a test was done to see whether the pump issue was thermally related. The pump was heated up to its upper limit and an attempt was made to start it, but it did not work. It could then be concluded that the issue with the pump during the flight was not thermally related. The simulations estimated that the heaters would use 26.66 Wh, and during flight (calculated from Figure \ref{fig:Termal_flight_true}) 27.667 Wh were used, so the simulations were a good estimation.
\subsubsection{Past Results}\label{sec:ExpecterResults}
After the analysis of the samples, the expected results were the vertical profiles of CO, CO$_2$, and CH$_4$. The profiles were expected to present a pattern similar to that of Figure \ref{fig:vertical-profile-karion}, which was found in an experiment by Karion et al. (AirCore: An Innovative Atmospheric Sampling System) \cite{Karion}. The continuous profile (dashed line) belongs to the CAC while the discrete values (black dots) belong to the AAC \cite{Karion}. Both profiles show a decrease in the concentrations of CO$_2$ and CH$_4$ with increasing altitude.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{7-data-analysis-and-results/img/ExpectedVerticalProfilesKarion.png}
\end{align*}
\caption{Pressure Profiles for (Left) CO$_2$ and (Right) CH$_4$ by Three Different Methods \cite{Karion}.\label{fig:vertical-profile-karion}}
\end{figure}
This experiment's goal was to achieve the highest vertical resolution possible. Since the vertical resolution was determined by the length and the diameter of the tube \cite{Membrive}, a 300 m long tube was used, consisting of two smaller tubes: one of 200 m length with \num{3e-3} m outside diameter and \num{1.3e-4} m wall thickness, and another of 100 m length with \num{6e-3} m outside diameter and \num{1.3e-4} m wall thickness. To achieve a higher stratospheric resolution, the tube with the smaller diameter was used to sample the higher altitudes and the one with the bigger diameter the lower ones.
Figure \ref{fig:resolution-lenght} by Olivier Membrive \cite{Membrive} compares the vertical resolution that can be expected with three different AirCores.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{7-data-analysis-and-results/img/ResolutionVslength.png}
\end{align*}
\caption{Comparison of the Vertical Resolutions That can be Expected with Different AirCores, After 3h Storage Time Before Analysis \cite{Membrive}.\label{fig:resolution-lenght}}
\end{figure}
The High-Resolution AirCore-HR (red line) \cite{Membrive} is a combination of two tubes, one of 200 m and one of 100 m.
The NOAA `original' CAC \cite{Karion} (black line) is a 152 m long tube, and the AirCore-GUF (designed and developed at Goethe University Frankfurt) (blue line) is a combination of three tubes, 100 m long in total.
The longer AirCore, the AirCore-HR, achieved a higher resolution throughout the whole sampled air column.
In addition, the vertical resolution depends on the mixing inside the tube.
This experiment took into account two types of mixing: molecular diffusion and shear-flow diffusion, known as Taylor dispersion. The effect of molecular diffusion was described by the root-mean-square of the distance of molecular travel,
\begin{equation}
X_{rms} = \sqrt{2Dt}
\end{equation}
where D was the molecular diffusivity of the molecule in the surrounding gas, and t was the time over which travel occurs \cite{Karion}.
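For a sense of scale (an illustrative calculation, not a value derived from the flight data): assuming a molecular diffusivity of roughly $D \approx 0.16\ \mathrm{cm^2\,s^{-1}}$, typical for CO$_2$ in air at surface pressure, and a storage time of about 13 hours ($t \approx 4.7\times10^{4}$ s), the diffusion length is
\begin{equation}
X_{rms} = \sqrt{2 \cdot 0.16\ \mathrm{cm^2\,s^{-1}} \cdot 4.7\times10^{4}\ \mathrm{s}} \approx 120\ \mathrm{cm},
\end{equation}
i.e. on the order of one metre of tube, which is small compared to the 300 m coil but directly limits the achievable vertical resolution.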
For the tubing dimensions used in this experiment, the flow of air through the CAC was laminar. In such a flow, a parabolic velocity profile existed inside the tube, causing longitudinal mixing (Taylor dispersion).
Before the experiment was recovered, only molecular diffusion affected the sample, but during the analysis both molecular diffusion and Taylor dispersion affected it. Combining both, an effective diffusion coefficient was calculated as
\begin{equation}
D_{eff} = D + \frac{a^2\overline{V}^2}{48D}
\end{equation}
where D was the molecular diffusivity, a was the tube's inner radius, and $\overline{V}$ was the average flow velocity \cite{Membrive}. The first term represented molecular diffusion in the longitudinal direction, while the second one was the Taylor dispersion contribution.
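To illustrate how the two contributions compare during analysis, the following MATLAB snippet evaluates $D_{eff}$ for purely illustrative values of $D$, $a$ and $\overline{V}$ (these are assumptions chosen only to show the form of the calculation, not the actual CAC parameters):
\begin{verbatim}
D    = 0.16e-4;      % molecular diffusivity, m^2/s (illustrative value)
a    = 1.2e-3;       % tube inner radius, m (illustrative value)
Vbar = 0.15;         % mean flow velocity in the tube, m/s (illustrative)

Deff = D + (a^2 * Vbar^2) / (48 * D);   % effective diffusivity, m^2/s
\end{verbatim}
With these numbers the Taylor term is a few times larger than the molecular term, showing that the flow velocity during the analysis, and not only the storage time, influences how much the sample is smeared.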
After completing Test 4 and Test 18, as seen in Tables \ref{tab:vacuum-test} and \ref{tab:pump-low-pressure-test} respectively, the team managed to obtain the standard flow rate readings for the different altitudes. The standard flow rate is the volumetric flow rate of a gas corrected to standardized conditions of temperature and pressure. In this case the logged flow rates correspond to sea-level conditions. Table \ref{tab:flow-rates} shows the standard flow rates at the sampling altitudes.
% Please add the following required packages to your document preamble:
% \usepackage{multirow}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
& \multicolumn{1}{l|}{\textbf{Sampling Altitudes}} & \multicolumn{1}{l|}{\textbf{Ambient Pressure}} & \multicolumn{1}{l|}{\textbf{Standard Flow rate}} \\ \hline
\multirow{2}{*}{\textbf{Ascent Phase}} & 18 km & 75.0 hPa & $\sim$0.38 L/min \\ \cline{2-4}
& 21 km & 46.8 hPa & $\sim$0.21 L/min \\ \hline
\multirow{4}{*}{\textbf{Descent Phase}} & 17.5 km & 81.2 hPa & $\sim$0.41 L/min \\ \cline{2-4}
& 16 km & 102.9 hPa & $\sim$0.55 L/min \\ \cline{2-4}
& 14 km & 141.0 hPa & $\sim$0.79 L/min \\ \cline{2-4}
& 12 km & 193.3 hPa & $\sim$1.22 L/min \\ \hline
\end{tabular}
\caption{Sampling Altitudes as well as the Corresponding Ambient Pressures According to the 1976 US Standard Atmosphere and the Standard Flow Rates at Each Altitude.}
\label{tab:flow-rates}
\end{table}
It was also necessary to calculate the actual flow rates at the different altitudes. The conversion was done using the equation \cite{flowrateswebsite}:
\begin{equation}
\mbox{Volumetric flow rate} = \mbox{Standard flow rate} \cdot \Big(\frac{T_{alt}}{T_{std}}\Big) \cdot \Big(\frac{P_{std}}{P_{alt}}\Big)
\end{equation}
where, \\
$P_{std}= 1013$ hPa was the standard pressure, \\
$T_{std}=294.25$ K was the standard temperature, \\
$T_{alt}$ was the temperature at the different altitudes, \\
$P_{alt}$ was the pressure at the different altitudes.\\
Table \ref{tab:normal-flow-rates} shows the actual flow rates at the sampling altitudes.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
& \multicolumn{1}{l|}{\textbf{Sampling Altitudes}} & \multicolumn{1}{l|}{\textbf{Ambient Pressure}} & \multicolumn{1}{l|}{\textbf{Actual Flow rate}} \\ \hline
\multirow{2}{*}{\textbf{Ascent Phase}} & 18 km & 75.0 hPa & $\sim$3.78 L/min \\ \cline{2-4}
& 21 km & 46.8 hPa & $\sim$3.36 L/min \\ \hline
\multirow{4}{*}{\textbf{Descent Phase}} & 17.5 km & 81.2 hPa & $\sim$3.77 L/min \\ \cline{2-4}
& 16 km & 102.9 hPa & $\sim$3.99 L/min \\ \cline{2-4}
& 14 km & 141.0 hPa & $\sim$4.18 L/min \\ \cline{2-4}
& 12 km & 193.3 hPa & $\sim$4.71 L/min \\ \hline
\end{tabular}
\caption{Sampling Altitudes as well as the Corresponding Ambient Pressures According to the 1976 US Standard Atmosphere and the Normal Flow Rates at Each Altitude.}
\label{tab:normal-flow-rates}
\end{table}
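The conversion can be checked with a few lines of MATLAB. The temperatures below are taken from the 1976 US Standard Atmosphere (approximately 216.7 K at 18 km and 217.7 K at 21 km) and are meant as an illustration rather than the exact values used by the team:
\begin{verbatim}
Pstd = 1013;  Tstd = 294.25;   % standard pressure (hPa) and temperature (K)
Palt = [75.0  46.8];           % ambient pressure at 18 km and 21 km (hPa)
Talt = [216.7 217.7];          % 1976 US Standard Atmosphere temperatures (K)
Qstd = [0.38  0.21];           % standard flow rates (L/min)

Qact = Qstd .* (Talt./Tstd) .* (Pstd./Palt)
% Qact is approximately [3.78 3.36] L/min, matching the ascent rows of the
% table above within rounding.
\end{verbatim}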
Finally, the storage time, that is, the time from the moment the tube was sealed until the end of the analysis, was a key factor affecting the experiment's results in terms of resolution.
Figure \ref{fig:resolution-time} shows the effect of time delay between landing and analysis, on the expected vertical resolution.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{7-data-analysis-and-results/img/ResolutionVsTime.png}
\end{align*}
\caption{Expected Vertical Resolution of AirCore-HR, for a Storage Time of 3h (Black), 6h (Blue), 12h (Green), 24h (Orange) and 1 Week (Red) \cite{Membrive}.\label{fig:resolution-time}}
\end{figure}
It is clear that the sooner the samples were analyzed, the better the vertical resolution of the CAC sample. At an altitude of 20 km the resolution decreases significantly from 300 m to 500 m for 6 h and 12 h of delay, respectively \cite{Membrive}. But even after a week of storage, a vertical profile could still be achieved, albeit with lower resolution.
Based on past BEXUS projects, the time to experiment recovery was estimated at 12 to 24 hours, if not multiple days. As such, it was expected that the desired vertical resolution of the gas analysis would favour the AAC configuration over the CAC, due to the mixing of gases in the latter configuration resulting in a poorer vertical resolution.
The vertical resolution for the AAC was expected to be approximately 500 m. This would have been achieved by assuring the airflow intake rate. For the Ascent Phase, a nominal speed of 5 m/s was considered, which meant that it would take 28.57 seconds to fill a sampling bag with 1.8 L of air while ascending 142.85 m, at an actual airflow intake rate of approximately 3.78 L/min at 18 km of altitude. For the Descent Phase, the nominal speed was assumed to be 8 m/s. While descending 156.4 m, a sampling bag would be filled in 19.55 seconds with 1.3 L of air, at an actual airflow intake rate of 3.99 L/min at 16 km of altitude. However, taking into account that the volume of the samples at sea level would have been lower, the sampling time would have been longer and the vertical resolution closer to 500 m.
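As a quick consistency check of the ascent numbers (using the actual flow rate from Table \ref{tab:normal-flow-rates}):
\begin{equation}
t_{fill} = \frac{1.8\ \mathrm{L}}{3.78\ \mathrm{L/min}} \approx 28.6\ \mathrm{s}, \qquad \Delta z = 5\ \mathrm{m/s} \times 28.6\ \mathrm{s} \approx 143\ \mathrm{m}.
\end{equation}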
For 500 m of vertical displacement, the horizontal resolution of the AAC was approximated based on data from past BEXUS flights obtained from the BEXUS manual \cite{BexusManual}. The average horizontal resolution obtained for the Ascent Phase was 588 m and for the Descent Phase was 186.5 m. This meant that the area covered by each sample would have been 500 m x 588 m and 500 m x 186.5 m for the Ascent and Descent Phases respectively.
\subsubsection{Scientific Results}\label{sec:scientificresults}
Figure \ref{fig:verticalprofiles} shows the CO$_2$, CH$_4$, and CO profiles measured during the BEXUS 26 flight. Each profile comprises about 3000 points on the vertical axis. The data from 400 hPa down to 1000 hPa were deleted: at those pressures the CAC valve was already closed and the gases were leaking into the coil with some unknown delay, and from inside the box (not from the troposphere). From the ambient temperature profile, seen in Figure \ref{fig:temperatureprofile}, the tropopause was estimated to be at 165.2 hPa.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{7-data-analysis-and-results/img/temperatureprofile.png}
\caption{Ambient Temperature in $\degree$C Over Ambient Pressure in hPa.}
\label{fig:temperatureprofile}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{7-data-analysis-and-results/img/Finalprofilesmast.png}
\caption{Vertical Profiles Retrieved from the Air Sampled with the CAC on the BEXUS 26 Flight. Left: CO (ppb), Middle: CO$_2$ (ppm), Right: CH$_4$ (ppm).}
\label{fig:verticalprofiles}
\end{figure}
As seen in Figure \ref{fig:verticalprofiles}, the preliminary results follow the general pattern of the past results in Section \ref{sec:ExpecterResults}. In general, the concentrations of CO$_2$, CH$_4$, and CO decrease with decreasing pressure, i.e. with increasing altitude. The maximum value of CO$_2$ is 405 ppm, for CH$_4$ it is approximately 2 ppm, and for CO it is close to 90 ppb. The red star represents the concentrations at 1000 hPa (surface) as measured just about 20 km away from the landing site.
The vertical resolution of the sample follows that of the High-Resolution AirCore-HR (red line) in Figure \ref{fig:resolution-lenght}, since a tube of the same length was used. Since the analysis was performed after 13 hours, the decrease in vertical resolution is closer to the one represented by the green line in Figure \ref{fig:resolution-time}.
In the middle panel of Figure \ref{fig:verticalprofiles}, a strong decrease of CO$_2$ can be observed in the first layers above 6 km. CO$_2$ reaches its highest value of 405 ppm just above the tropopause ($\sim$162.5 hPa). In the stratosphere, CO$_2$ values are lower since the exchange between the upper troposphere and the lower stratosphere takes several years \cite{Membrive}.
The CH$_4$ vertical profile is presented on the right side of Figure \ref{fig:verticalprofiles}. Mixing ratios of CH$_4$ show little variability in the troposphere. The strong decrease of CH$_4$ in the stratosphere is easy to see in Figure \ref{fig:verticalprofiles}, with values going from 1.9 ppm near the tropopause at $\sim$162.5 hPa to 1.2 ppm at $\sim$20 hPa.
A comparison between the middle and the right profiles of Figure \ref{fig:verticalprofiles} shows that CO$_2$ variability is higher near the ground, whereas CH$_4$ variability is higher in the mid-to-upper troposphere and in the stratosphere. This is in agreement with the fact that CO$_2$ may have negative and positive anomalies at the surface (associated mainly with vegetation uptake and anthropogenic emissions), whereas CH$_4$ has mostly positive anomalies coming from the surface and negative anomalies coming from the stratosphere \cite{Membrive}.
A comparison was also performed against the estimated CO, CO$_2$ and CH$_4$ profiles based on the map files, which were made using a combination of earlier measurements with meteorological-model-based adjustments. The measured profiles were compared to the estimated profiles, showing relatively good agreement. This comparison is presented in Figure \ref{fig:mapcomparison}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{7-data-analysis-and-results/img/Finalprofileswithmap.png}
\end{align*}
\caption{Comparison of the CAC Left: CO, Middle: CO$_2$, and Right: CH$_4$ Vertical Profiles (blue) with the Co-Located Forecast (red).\label{fig:mapcomparison}}
\end{figure}
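
As a rough sketch of how such a comparison can be quantified (hypothetical file and column names; not the script actually used), the forecast profile can be interpolated onto the CAC pressure levels and the residuals inspected:
\begin{verbatim}
% Minimal sketch: compare the measured CO2 profile with a forecast profile.
% Assumed (hypothetical) columns in both files: pressure_hPa, co2_ppm,
% with distinct pressure levels in each file.
cac = readtable('cac_profile.csv');
fc  = readtable('forecast_profile.csv');

% Interpolate the forecast onto the CAC pressure levels (NaN outside its range).
co2_fc = interp1(fc.pressure_hPa, fc.co2_ppm, cac.pressure_hPa, 'linear', NaN);
delta  = cac.co2_ppm - co2_fc;     % measured minus forecast (ppm)

fprintf('Mean |CO2 difference|: %.2f ppm\n', mean(abs(delta), 'omitnan'));
\end{verbatim}
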
The agreement between the CO and CH$_4$ profiles (Figure \ref{fig:mapcomparison}, left and right panels) is satisfying throughout the sampling range in terms of structure. At higher altitudes, the decrease of CH$_4$ measured by the CAC is much more pronounced than the one simulated by the forecast.
At first sight, the forecast for the CO$_2$ profile (Figure \ref{fig:mapcomparison}, middle panel) displays different structures than those measured by the CAC. However, it correctly reproduces the strong decrease in CO$_2$ in the troposphere, as well as the increase in concentration close to the tropopause ($\sim$165.2 hPa). The CAC and the forecast both reveal a decrease in CO$_2$ starting from above the tropopause up to the top of the profile.
\subsubsection{Future Work}\label{sec:futurework}
It was expected that the AAC would serve as a model enabling a cost-effective, large-scale deployment scheme for regular high-altitude greenhouse gas measurements. Unlike the CAC, the design of the AAC would not have imposed experimental restrictions based on the proximity of infrastructure for shipping and analysis. As such, a successful proof of concept of the AAC sampling system would have served as a basis to enable reliable, cost-effective measurements in remote areas. For these reasons, whilst the BEXUS programme has now ended for the TUBULAR Team, there is sufficient interest from FMI and team members that it is intended to fly the experiment again. A different battery set will be used to overcome the problems discovered during the BEXUS 26 campaign, and the TUBULAR Team still hopes to complete all the aims of the TUBULAR experiment. It is hoped that a re-flight may be possible during the spring of 2019. If this goes well, the TUBULAR Team will be pleased to present the comparison results at the 24th ESA Symposium on European Rocket and Balloon Programmes and Related Research, alongside the results from the BEXUS 26 campaign.

The TUBULAR Team is also still intending to publish a scientific paper on the results of the experiment; however, it has been decided to wait until further data have been collected during the re-flight before doing so.
\pagebreak
\subsection{Failure Analysis}\label{sec:failureanalysis}
During the flight the experiment experienced unexpected errors. This section describes the procedures and results of the failure analysis. In general there were two stages of analysis: firstly, the post-flight analysis carried out as soon as the experiment was retrieved during the campaign, and secondly, the lab analysis carried out in the lab some time after the launch campaign.

\subsubsection{Post-Flight Analysis}
Shortly after the failure of the AAC system was confirmed, an investigation plan was created to make sure that the team did not destroy any potential evidence. Deduced from the behaviour during flight, a list of potential problems was made and can be seen in Table \ref{tab:potential-failure-causes}. Most of the possible causes were considered unlikely due to the extensive testing performed before flight.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|}
\hline
\DIFaddFL{1 }& \DIFaddFL{Shorted output pin on Arduino }\\ \hline
\DIFaddFL{2 }& \DIFaddFL{Pump elastic diaphragm broke }\\ \hline
\DIFaddFL{3 }& \DIFaddFL{Pump too cold }\\ \hline
\DIFaddFL{4 }& \DIFaddFL{Short circuit in pump}\\ \hline
\DIFaddFL{5 }& \DIFaddFL{Pump MOSFET broken}\\ \hline
\DIFaddFL{6 }& \DIFaddFL{Pump current draw too high}\\ \hline
\DIFaddFL{7 }& \DIFaddFL{Pump drive shaft blocked}\\ \hline
\DIFaddFL{8 }& \DIFaddFL{Pump tubing blocked}\\ \hline
\end{tabular}
\caption{\DIFaddFL{List of Potential Failure Causes}}
\label{tab:potential-failure-causes}
\end{table}
Shortly after the experiment was retrieved, a structured post-flight investigation was carried out to investigate the possible causes of the in-flight failure. The experiment walls were removed one at a time, and there were no unexpected smells when opening the walls of the experiment that might have indicated burnt components. Once the main PCB was accessible, resistance measurements were taken on the MOSFETs, which were all the same, indicating that the MOSFETs were not damaged. There were no shorts anywhere on the PCB either. The forward resistance of the pump was measured and compared to the forward resistance of a spare pump; the resistances were similar and deemed not to be suspicious. There were no discontinuities on the PCB where they were not expected. No problems were found from the electrical measurements and visual inspection.

Thus, the next step in the procedure was to try to start the system from a power supply and check its functionality. The power supply was set to 28.8 V with a current limit of 1.8 A. The system was turned on and data was fed out nominally, as it was during flight. After inspecting the basic functionality of the experiment, the pump was turned on and the flushing valve opened to see if the same issue occurred as during flight. The pump turned on and the valve opened nominally, pumping air through the system. Since the outgoing pipes were dismounted at this point and one of the concerns was that the pump might have been blocked, the inlet of the pump was blocked and the procedure repeated. The pump turned on again, although without blowing air, as expected.

At this point the team suspected that it might be a current limitation problem, since the same behaviour had been seen before in the lab when the experiment could not be supplied with enough current. Thus, a decision was made to change the settings of the bench power supply used during testing. It was first set to 24 V with a 1.8 A current limit, and the experiment continued to work nominally. Next, the power supply was set to 24 V and 1 A; the pump could no longer start, and the exact same behaviour as during flight was seen. The system shut down, stopped sending data to the ground station, then rebooted and re-established the telemetry feed. At this point, no further testing was made at the campaign, since other teams needed the facilities in use, and it was deemed safe to continue the testing in the lab later on.

\subsubsection{Lab Analysis}
The lab analysis mainly focused on the power consumption of the pump and on what happened, power-wise, when the pump was turned on. The whole system was powered through a bench power supply with a 206 m$\Omega$ resistance in series, in order to measure the peak currents when the pump turns on by measuring the voltage drop over this resistance with an oscilloscope. There were two notable peaks when starting the pump: the first was on a time scale of 100 ms and produced a total current draw of 1.019 A; the second was on a time scale of 110 $\micro$s and had a total current draw of 7.56 A. It is, however, not certain that the second peak is real: it could have been produced by other disturbances, since similar behaviour has been seen before on the same scope when other devices on the power net were turned on or off.
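
For reference, the peak currents follow from Ohm's law applied to the series shunt: the reported 1.019 A first peak corresponds to a voltage drop of roughly
\[
\Delta V = I_{\mathrm{peak}}\,R_{\mathrm{series}} \approx 1.019\,\mathrm{A} \times 0.206\,\Omega \approx 0.21\,\mathrm{V}
\]
across the 206 m$\Omega$ resistor, while the (possibly spurious) 7.56 A spike would correspond to about 1.6 V.
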
After these tests were performed, the experiment was tested with a test battery pack consisting of eight SAFT LSH20 cells. However, the specific cells used for the test had already been used before, and it is known that these batteries self-drain once they have been used. When trying to start the pump, the same behaviour as during the flight failure was seen. Furthermore, the voltage of the batteries dropped drastically from 22.9 V to 3.5 V, which would explain the system shutdown. This can be seen in Figure \ref{fig:experiment-battery-test}.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{7-data-analysis-and-results/img/BatteryTestPumpStart.JPG}
\caption{\DIFaddFL{Supply Voltage During Pump Start While Running on Batteries}}
\label{fig:experiment-battery-test}
\end{figure}
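
One simple way to read this sag is through the usual voltage-source model (an illustration only, not a measurement): with eight cells in series, both the open-circuit voltages and the internal resistances add, so the terminal voltage under a start-up current $I$ is approximately
\[
V_{\mathrm{load}} = V_{\mathrm{oc}} - I\,\big(8\,R_{\mathrm{cell}}\big),
\]
and a drop from 22.9 V to 3.5 V implies a large voltage loss over the combined internal resistance at the instant the pump starts. The per-cell resistance cannot be derived from this alone, because the instantaneous current during the sag was not recorded.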
\subsubsection{Conclusion}
The experiment works within the current specifications of a single SAFT LSH20 cell. However, the supplier has not specified the single-cell behaviour for short peaks on the order of $\sim$100 $\micro$s, and the effect on the current specifications when using these cells in series is also unknown. It is known, though, that lithium-thionyl chloride batteries have a relatively large internal resistance, which might affect the performance. The conclusion is therefore that the experiment was current limited and thus reset itself, but the source of the limitation is still unknown.

Since it was not possible to start the pump during the pre-flight readiness review, as the flushing had already been done, starting the pump while running on batteries should have been tested earlier, either in the lab or before the system was flushed at the launch campaign. The issue might then have been discovered before the flight.
\pagebreak
\subsection{Lessons Learned}
At the end of the project, the TUBULAR Team has learned many important lessons regarding document creation, as well as how to turn an idea into a project, integrating it, testing it, flying it, and analysing the data afterwards. \par
The TUBULAR Team found that the REXUS/BEXUS programme was rewarding in terms of experience regarding balloon craft design and development, with real deadlines, published documents, and team work.
\subsubsection{Management Division}
\begin{itemize}
\item Coordination between multiple project stakeholders.
\item Task definition, estimation, and management.
\item Task integration.
\item Conflict management and resolution.
\item Communication flows.
\item Funding research and outreach.
\item Identifying team member strengths as well as weaknesses and assigning responsibilities accordingly without neglecting the opportunities to improve on weaknesses.
\item Do not assume cross-division communication will take place without organizing/planning it.
\item Reviewing the progress of assigned tasks should be continuous rather than waiting for their due dates.
\item Agree on and clearly communicate to the team definition of \enquote{Done} when referring to tasks being completed.
\item Agree on and clearly communicate to the team the definition of \enquote{Final Version} when referring to schematics, diagrams, and component lists.
\item The lessons learned section of previous BEXUS SEDs is an invaluable resource that answers many BEXUS related recurring questions.
\item If changes in management are required it is important that there is a sufficiently long change over period to allow a transfer of knowledge.
\item Tasks that are not completed on time, or were simply not worked on during the assigned time, will impact projected deadlines, and these situations must be planned for and mitigated against. An early red flag for this is if the reported team working hours tend to be lower than expected, at which point one can expect to have to make up those hours before a deadline. These concerns must continuously be communicated to the team.
\item The REXUS/BEXUS programme is a significant investment in time and resources from all programme partners, and as such the unique opportunity is not limited to participating students but extends to component manufacturers and suppliers as well. With this in mind, the team should not shy away from aggressively seeking funds or sponsorships from component manufacturers and suppliers, as they stand to benefit from such a partnership to showcase the robustness of their products.
\item Testing will always take longer than expected and so time must be planned to account for this.
\item When working with many remote team members extra time must be allowed for tasks to be completed as the communication is slower. Internal earlier deadlines help a lot.
\item During manufacture and test having many smaller deadlines has proven useful in ensuring things stick to the time plan.
\item When things do not go to plan during flight, it is essential to keep a cool head and think things through calmly. It might be hard to make final decisions, but it is important to ensure appropriate actions are taken in a timely and sensible fashion.
\item You can never do too much testing!
\end{itemize}
\subsubsection{Scientific Division}
After extensive research into trace gases and climate change, as well as into atmospheric sampling methods, the science team has so far gained:
\begin{itemize}
\item General knowledge in climate change.
\item General knowledge in the different sampling methods of the atmosphere; its characteristics and applications.
\item Study scientific papers in detail.
\item Outreach to scientific community.
\item Translating scientific concepts to technical teams.
\item Knowledge of how to design the scientific requirements in such a way that they stay within the permitted limits of the budget while the technical requirements are fulfilled.
\item How to distribute the tasks within the science team efficiently and keep good communication with the other departments.
\item Experience showing that writing down the tasks that need to be done and keeping track of them is better than just having them as goals.
\item Experience in producing a presentation with only the key points of a project and presenting it in front of other people.
\item Work as a group from different locations.
\item How to prepare and plan a test under the real environment of the experiment.
\item The importance of testing, and how to deal with problems that come up unexpectedly.
\item How to perform a failure analysis, documenting every step.
\item Knowledge of the data analysis procedure and how to extract the desired results from raw data.
\item Using MATLAB to obtain the vertical profiles of the CO$_2$, CH$_4$, and CO gases.
\end{itemize}
\subsubsection{Electrical Division}
The electrical team has enhanced its understanding of electronics design as well as gained confidence in selecting appropriate components as per the requirements. Some of the points on which the team improved its general understanding are listed below:
\begin{itemize}
\item Gained confidence in designing electronics circuitry.
\item Familiarized with the selection of the electrical components.
\item By reading through a large number of data sheets, the team is now able to easily extract and understand technical details.
\item Learned and developed power calculation skills.
\item Got experience of using the Eagle software and how to find and make the libraries, footprints, and schematics for the required components.
\item How to test the components in the vacuum chamber.
\item Learned about the different connectors, wires and how to place the components on the PCB so the actual design can fit into the experiment box.
\item Discovered the cascading consequences of changing one component.
\item Found that having a few big sheets with a lot of information can be preferable to several sheets with less detail.
\item While designing PCBs with Eagle, it is a good idea to draw the traces manually rather than using the autorouter, since it allows you to double-check your schematics while pulling the traces.
\item When using net naming to design schematics for later PCB designs in Eagle, it is very important to triple-check the net names, since they sometimes change in unexpected ways.
\item Got practical experience of soldering the different types of sensors, wires and connectors.
\item Learned how to solder SMD miniature pressure sensors onto the PCB.
\item Familiarized with using work shop tools and machinery.
\item Got experience in working around spontaneous problems arising from design changes by the other departments and from testing.
\item Learned how to conduct a failure analysis: its basic methodology and the rules required to avoid destroying any evidence needed for the post-flight analysis.
\item Learned how important electrical housekeeping data can be during in-flight errors.
\item Learned how to come up with and assess compromises during the launch campaign to accommodate possible changes.
\end{itemize}
\subsubsection{Software Division}
\begin{itemize}
\item Learned more about version control in the form of Git.
\item Learned how to implement an RTOS on Arduino.
\item Learned how to translate experiment requirements to a software design.
\item Learned how to split functionality into several testable functions.
\item Gained experience on software unit testing.
\item Learned how to design and create GUI using MATLAB GUIDE.
\item Learned how to use Git, a version control system for tracking changes in computer files and coordinating work on those files among multiple people.
\item Learned how to implement TCP/IP and UDP on ethernet connection.
\item Learned how to make telecommand and telemetry.
\item Learned how the I2C and SPI protocols work and operate.
\item Learned how to efficiently debug software.
\item Learned that when suppressing the output of a system, it is still good not to ignore it.
\item Learned that using a sequence-based system is not optimal when that system may need to be operated non-sequentially.
\item Learned that the expected cases one designs around may not be the actual cases encountered in real life.
\end{itemize}
\subsubsection{Mechanical Division}
\begin{itemize}
\item Come up with real design solutions starting from conceptual problems.
\item Make a proper use of both space and mass.
\item Learn mechanical \textit{tricks} when designing.
\item Adapt the design to components availability and characteristics.
\item Select and contact with vendors.
\item Implement a real pneumatic system.
\item Compute structural analysis.
\item Team collaboration with other departments, i.e. Electrical, Science, and Thermal.
\item Design is trickier when it comes to implementation.
\item Always document specific department knowledge. If who designed a certain part of the experiment is not available for the manufacture phase, whoever works on it should be able to figure out most of the solutions by themselves.
\item Manufacturing and integration of the different subsystems of the experiment takes longer than expected during design phase.
\item The design is never frozen until everything is built and working properly.
\item Good planning when designing and manufacturing makes it possible to avoid last-minute \textit{tricks} during the launch campaign.
\item After flying the experiment and thinking about what could be changed to improve it, several ideas arise.
\end{itemize}
\subsubsection{Thermal Division}
\begin{itemize}
\item Learned how to do Steady-State and Transient thermal analysis in ANSYS.
\item Coordinate with other divisions to find a solution that works for everyone.
\item Make a thermal plan and structure what needs to be done over a long period of time.
\item How to improve and be more efficient when adjusting to sudden changes in design.
\item How to balance details in simulations.
\item How to do a thermal test, analyse the results, and make improvements based on them.
\item How to work with Styrofoam.
\item How temperatures inside component operating ranges can impact component performances.
\item How the real flight actually differs from tests and simulations, and how hard it is to do a perfect test or simulation beforehand.
\end{itemize}
\pagebreak
\section{Abbreviations and References}
\subsection{Abbreviations}
%% My fight with this isn't over but for the sake of time I postpone fixing this.... T-T
% % abbreviations:
\newacronym{aac}{AAC}{Alternative Air Core}
\newacronym{ttc}{TT&C}{Telemetry, Tracking, and Command}
\newacronym{dlr}{DLR}{German Aerospace Centre}
\newacronym{snsb}{SNSB}{Swedish National Space Board}
\newacronym{esa}{ESA}{European Space Agency}
\newacronym{ssc}{SSC}{Swedish Space Corporation}
\newacronym{moraba}{MORABA}{Mobile Rocket Base}
\newacronym{sed}{SED}{Student Experiment Documentation}
\newacronym{irf}{IRF}{Swedish Institute of Space Physics}
\newacronym{fmi}{FMI}{Finnish Meteorological Institute}
\newacronym{led}{LED}{Light Emitting Diode}
\newacronym{ltu}{LTU}{Lule\aa University of Technology}
\newacronym{co2}{CO2}{Carbon Dioxide}
\newacronym{cad}{CAD}{Computer Aided Design}
\newacronym{cac}{CAC}{Conventional Air Core}
\newacronym{tbd}{TBD}{To Be Decided}
\newacronym{bex}{BEXUS}{Balloon Experiment for University Students}
\newacronym{dc}{DC}{Direct Current}
\newacronym{gc}{GC}{Ground Control Station}
\newacronym{spi}{SPI}{Serial Peripheral Interface}
\newacronym{zarm}{ZARM}{Zentrum f{\"u}r angewandte Raumfahrttechnologie und Mikrogravitation}
\newacronym{cfd}{CFD}{Computational Fluid Dynamics}
\newacronym{sd}{SD}{Secure Digital}
\newacronym{obc}{OBC}{Onboard Computer}
\newacronym{gpio}{GPIO}{General Pins Input Output}
\newacronym{sdp}{SDP}{Serial Data Pin}
\newacronym{scp}{SCP}{Serial Clock Pin}
\newacronym{miso}{MISO}{Master Input Slave Output}
\newacronym{mosi}{MOSI}{Master Output Slave Input}
\newacronym{clk}{CLK}{Serial Clock}
\newacronym{i2c}{I2C}{Inter Integrated Circuit}
%\newacronym{most}{MOST}{??????????????????}
\newacronym{fcs}{FCS}{Frame Check Sequence}
\newacronym{gui}{GUI}{Graphical User Interface}
\newacronym{ide}{IDE}{Integrated Development Environment}
\newacronym{ch4}{CH4}{Methane}
\newacronym{noaa}{NOAA}{National Oceanic and Atmospheric Administration}
\newacronym{guf}{GUF}{Goethe University Frankfurt}
\newacronym{mb}{MB}{Mega Byte}
\newacronym{i/o}{I/O}{Input/Output}
\newacronym{hood}{HOOD}{Hierarchic Object-Oriented Design}
\newacronym{cog}{CoG}{Center of Gravity}
\newacronym{rtos}{RTOS}{Real-time operating system}
\newacronym{}{ppm}{parts per million}
\newacronym{}{ppb}{parts per billion}
%\glsaddall
%\printglossary[type=\acronymtype,title=Abbreviations,nonumberlist]
% %\printglossary
% %\printglossary[title=8.1 Abbreviations]
% \printglossaries
% %\printglossary[title=TitleName, toctitle=TOCname]
\begin{longtable}{p{3cm} p{9cm}}
AAC & Alternative to the AirCore\\
ASC & Air Sampling Control\\
ANSYS & ANalysis SYStem\\
BEXUS & Balloon Experiment for University Students\\
CAC & Conventional AirCore\\
CAD & Computer Aided Design \\
CDR & Critical Design Review\\
CFD & Computational Fluid Dynamics\\
CH$_{4}$ & Methane\\
CLK & Serial Clock\\
CO & Carbon Monoxide\\
CO$_{2}$ & Carbon Dioxide\\
COG & Center of Gravity \\
CRDS & Cavity Ring Down Spectrometer\\
DC & Direct Current\\
DFM & Design for Manufacturability \\
DLR & Deutsches Zentrum f{\"u}r Luft- und Raumfahrt \\
EB & Electronic Box \\
EBASS & Esrange BAlloon Service System\\
ECTS & European Credit Transfer System\\
EPDM & Ethylene Propylene Diene Monomer\\
ESA & European Space Agency \\
FCS & Frame Check Sequence\\
FEA & Finite Element Analysis\\
FMI & Finnish Meteorological Institute\\
GC & Ground Control Station\\
GPIO & General Pins Input Output\\
GPS & Global Positioning System\\
GUI & Graphical User Interface\\
H$_2$O & Water \\
HOOD & Hierarchic Object-Oriented Design\\
I2C & Inter-Integrated Circuit \\
IDE & Integrated Development Environment \\
I/O & Input/Output\\
IR & Infra-Red\\
IRF & Institutet för rymdfysik (Swedish Institute for Space Physics)\\
LED & Light Emitting Diode\\
LTU & Luleå University of Technology \\
MATLAB & MATrix LABoratory\\
MB & Mega Byte\\
MISO & Master Input Slave Output\\
MORABA & Mobile Rocket Base \\
MOSFET & Metal Oxide Semiconductor Field Effect Transistor\\
MOSI & Master Output Slave Input\\
%MOST & ?\\
MSc & Master of Science \\
NOAA & National Oceanic and Atmospheric Administration \\
OBC & Onboard Computer\\
ppb & parts per billion\\
ppm & parts per million\\
PCB & Printed Circuit Board\\
PDR & Preliminary Design Review\\
REXUS & Rocket Experiment for University Students \\
RJ45 & Registered Jack 45 \\
RTOS & Real-time operating system\\
SAFT & Soci\'{e}t\'{e} des Accumulateurs Fixes et de Traction\\
SCP & Serial Clock Pin\\
SD & Secure Digital (Storage) \\
SDP & Serial Data Pin\\
SED & Student Experiment Documentation \\
SNSB & Swedish National Space Board \\
SPI & Serial Peripheral Interface\\
SSC & Swedish Space Corporation \\
STP & Standard Temperature Pressure\\
TBC & To Be Confirmed\\
TBD & To Be Determined \\
TCP & Transmission Control Protocol\\
TT$\&$C & Telemetry, Tracking, and Command\\
UDP & User Datagram Protocol\\
VC & Valve Center\\
ZARM & Zentrum f{\"u}r angewandte Raumfahrttechnologie und Mikrogravitation \\
\label{tab:abbrevi}
\end{longtable}
\raggedbottom
\pagebreak
\subsection{References}
\renewcommand{\refname}{}
\bibliography{refs}
\bibliographystyle{plain}
\pagebreak
\begin{appendices}
\newpage
%\subsection{Preliminary Design Review (PDR)}
\includepdf[scale=0.8,pages={1},pagecommand=\section{Experiment Reviews}\subsection{Preliminary Design Review (PDR)},offset=0 -1cm]{appendix/pdf/RXBX-PDR-Report-TUBULAR-v1-2-06Feb18.pdf}
\includepdf[scale=0.8,pages={2,3},pagecommand={}]{appendix/pdf/RXBX-PDR-Report-TUBULAR-v1-2-06Feb18.pdf}
\includepdf[scale=0.8,pages={1}, pagecommand=\subsection{Critical Design Review (CDR)},offset=0 -1cm]{appendix/pdf/RXBX11-TUBULAR-CDR-Report-v1-1-31May18.pdf}
\includepdf[scale=0.8,pages={2,3,4},pagecommand={}]{appendix/pdf/RXBX11-TUBULAR-CDR-Report-v1-1-31May18.pdf}
\includepdf[scale=0.8,pages={1}, pagecommand=\subsection{Integration Progress Review (IPR)},offset=0 -1cm]{appendix/pdf/BX26-IPR-TUBULAR-Report-v1-1-26Jul18.pdf}
\includepdf[scale=0.8,pages={2,3,4,5,6,7,8},pagecommand={}]{appendix/pdf/BX26-IPR-TUBULAR-Report-v1-1-26Jul18.pdf}
\includepdf[scale=0.8,pages={1}, pagecommand=\subsection{Experiment Acceptance Review (EAR)},offset=0 -1cm]{appendix/pdf/RXBX11_TUBULAR_EAR-Report_v1-0_10Oct18.pdf}
\includepdf[scale=0.8,pages={2,3,4,5,6,7},pagecommand={}]{appendix/pdf/RXBX11_TUBULAR_EAR-Report_v1-0_10Oct18.pdf}
%\subsection{Integration Process Review}
%\subsection{Experiment Acceptance Review}
\section{Outreach} \label{sec:appE}
\subsection{Outreach on Project Website}
To increase the project's outreach, the TUBULAR Team created a project website. On the website there are descriptions of the project, a link to download the latest SED, information on the TUBULAR Team members and sponsors, and a contact link. In addition, the microblogging carried out by the TUBULAR Team is also displayed on the website.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{appendix/img/outreach/outreach-tubwebsite-front.PNG}
\caption{The Home Page of TUBULAR's Website.}
\label{fig:outreach-tubwebsite}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/outreach/outreach-tubwebsite-inside.PNG}
\end{align*}
\caption{The Daily Microblogging Displayed on the Website.}
\label{fig:outreach-microblog}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/outreach/outreach-tubwebsite-timeline.PNG}
\end{align*}
\caption{The Timeline for This Project Available on the Website.}
\label{fig:outreach-timeline}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\linewidth]{appendix/img/outreach/outreach-tubwebsite-team.PNG}
\end{align*}
\caption{Information about the TUBULAR Team Members Available on the Website.}
\label{fig:outreach-team}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/outreach/outreach-tubwebsite-spons.PNG}
\end{align*}
\caption{The Sponsors of This Project Listed on the Website.}
\label{fig:outreach-spons}
\end{figure}
\begin{landscape}
\subsection{Outreach Timeline}
\begin{figure}[H]
\centering
\includegraphics[width=1.5\textwidth]{appendix/img/outreach/outreach-timeline.jpg}
\caption{Outreach Timeline for the Whole BEXUS Project.}
\label{fig:outreach-timeline-bexus}
\end{figure}
\end{landscape}
\newpage
\subsection{Social Media Outreach on Facebook}
Another outreach avenue is Facebook. On Facebook the TUBULAR Team posts photos, short text updates, and links to its blog posts.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\linewidth]{appendix/img/outreach/outreach-facebook.jpg}
\end{align*}
\caption{Photos from Social Media Outreach on Facebook.}
\label{fig:outreach-facebook}
\end{figure}
\newpage
\subsection{Social Media Outreach on Instagram}
On Instagram the TUBULAR Team posts regularly with updates on the project progress and what the TUBULAR Team has been up to.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{appendix/img/outreach/outreach-instagram.PNG}
\end{align*}
\caption{Some of the Social Media Outreach on Instagram.}
\label{fig:outreach-instagram}
\end{figure}
\newpage
\subsection{Outreach with Open Source Code Hosted on a REXUS/BEXUS GitHub Repository}
The TUBULAR Team has opened a GitHub Repository to share all the code used in the TUBULAR project. It was created with an open invite to all other REXUS/BEXUS teams to view, use and contribute to.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.6\linewidth]{appendix/img/outreach/outreach-github.png}
\end{align*}
\caption{The Open Source Code Hosted on a REXUS/BEXUS GitHub Repository.}
\label{fig:outreach-github}
\end{figure}
\subsection{Outreach with Team Patch}
The team also had patches made of the TUBULAR logo, and 150 patches have been ordered. Around 70 of these have already been bought by team members for themselves and to give to friends and family. It is intended that the remaining 80 will be sold for a small profit at the university.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.3\textwidth]{appendix/img/outreach/outreach-patch.png}
\end{align*}
\caption{A Photo of the Patch in Production Sent by the Company Making it.}
\label{fig:outreach-patch}
\end{figure}
\subsection{Visit by the Canadian Ambassador}
During Canadian Ambassador Heather Grant's visit to the Swedish Institute of Space Physics, the team had the honour of giving a brief presentation of the TUBULAR project. It also included a short explanation of one of the electrical tests in the vacuum chamber. This is now displayed on the university's website with a link to the TUBULAR website.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\textwidth]{appendix/img/outreach/ambassardor.png}
\end{align*}
\caption{Picture Taken by Ella Carlsson, as Shown on the LTU Website.}
\label{fig:outreach-ambassador}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\textwidth]{appendix/img/outreach/amabadasoiashdna.png}
\end{align*}
\caption{The Text Accompanying the Image on the LTU Webpage.}
\label{fig:outreach-ambassador-text}
\end{figure}
\subsection{Attendance at Lift Off 2018}
The team will attend the Lift Off 2018 event and present the TUBULAR project to students, companies, and organizations. The guests will have a chance to take a peek at the whole experiment and find out more about the REXUS/BEXUS programme.
% Additional Technical Information
\newpage
\section{Additional Technical Information}\label{sec:appM}
\subsection{Materials Properties}
\begin{longtable}{|m{0.15\textwidth}|c|m{0.12\textwidth}|m{0.12\textwidth}|m{0.12\textwidth}|m{0.12\textwidth}|m{0.12\textwidth}|}
\hline
\textbf{Material} & \textbf{Density} & \textbf{Tensile strength} & \textbf{$\mathbf{0.2 \% }$ Proof stress} & \textbf{Ductile yield $\mathbf{A_5}$} & \textbf{Modulus of elasticity} & \textbf{Brinell hardness} \\ \hline
EN AW - AlMgSi 6060 & $2.7\ g/cm^3$ & $245\ MPa$ & $195\ MPa$ & $10\%$ & $70\ GPa$ & $75\ HB$ \\ \hline
\caption{Mechanical Properties of the Bosch Rexroth Strut Profiles.}
\label{table:profile_material}
\end{longtable}
\bigskip
\begin{longtable}{|m{0.12\textwidth} |c|m{0.12\textwidth}|m{0.12\textwidth}| m{0.13\textwidth} | m{0.12\textwidth}|}
\hline
\textbf{Material} & \textbf{Density} & \textbf{Tensile strength} & \textbf{Yield Strength} & \textbf{Modulus of elasticity} & \textbf{Brinell hardness} \\ \hline
Aluminum 5754 & $2.67\ g/cm^3$ & $190\ MPa$ & $80\ MPa$ & $70\ GPa$ & $77\ HB$ \\ \hline
\caption{Mechanical Properties of the Aluminum Panels.}
\label{table:wall_aluminum}
\end{longtable}
\bigskip
\begin{longtable}{|m{0.12\textwidth} |c|m{0.12\textwidth}|m{0.16\textwidth}|}
\hline
\textbf{Material} & \textbf{Density} & \textbf{Tensile strength} & \textbf{Maximum Temperature} \\ \hline
Styrofoam 250 SL-AN & $28\ kg/m^3$ & $90\ kPa$ & $75\ ^\circ C$ \\ \hline
\caption{Mechanical Properties of the Styrofoam Insulation/Protection.}
\label{table:wall_styrofoam}
\end{longtable}
\newpage
\subsection{Coiled Tube and Sampling Bag Example} \label{sec:appA}
\subsubsection{CAC Coiled Tube} \label{A}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.6\linewidth]{appendix/img/cac-coil.png}
\end{align*}
\caption{CAC Coiled Tube.}
\label{fig:A1}
\end{figure}
\subsubsection{Air Sampling Bag} \label{B}
\begin{figure}[H]
\begin{align*}
\includegraphics[height=0.4\linewidth]{appendix/img/Bag-we-use.jpg}
\end{align*}
\caption{Air Sampling Bag.}
\label{fig:A2}
\end{figure}
\subsection{Dimensions of the sampling bag}
\label{dimensions-bags}
Table \ref{table:bags-dimensions} shows how the dimensions of the bags change according to the sampled volume. This data has been obtained by testing and has been taken into account in order to determine the maximum number of bags that can be filled inside the box.
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Volume} & \textbf{Length (horizontal)}& \textbf{Height (vertical)} & \textbf{Width }\\ \hline
Empty & 26.4 cm & 28 cm & 0.5 cm \\ \hline
0.5 L & 26.4 cm & 27.5 cm & 1.5 cm \\ \hline
1 L & 26 cm & 27.5 cm & 2 cm \\ \hline
1.5 L & 25.5 cm & 26.5 cm & 4.5 cm \\ \hline
2 L & 25 cm & 25 cm & 5.5 cm \\ \hline
2.5 L & 24.5 cm & 23 cm & 7.5 cm \\ \hline
3 L & 24 cm & 22 cm & 10.5 cm \\ \hline
\end{tabular}}}
\caption{Dimensions of the Bags When Filled with Different Air Sample Volumes.}
\label{table:bags-dimensions}
\end{table}
\subsection{List of components in The Brain}
\label{list-of-components-brain}
\textbf{\underline{Level 1 - Pump}}
\\
List of components of Level 1:
\begin{enumerate}[label=\Alph*.]
\item 1 Magnesium filter
\item 1 Pump
\item 1 Temperature sensor
\item 2 Heaters
\item 3 Tubes
\item 8 interfaces
\end{enumerate}
\textbf{\underline{Level 2 - Valve Center}}
\\
List of components of Level 2:
\begin{enumerate}[label=\Alph*.]
\item 1 Airflow sensor
\item 1 Static pressure sensor
\item 1 Temperature sensor
\item 2 Heaters
\item 1 Manifold
\item 6 Sampling valves
\item 1 Flushing valve
\item 11 Tubes
\item 14 interfaces
\end{enumerate}
\textbf{\underline{Level 3 - Electronics}}
\\
List of components of Level 3:
\begin{enumerate}[label=\Alph*.]
\item 1 PCB
\item 5 D-Sub female connectors
\item 1 E-link socket
\item 1 Power socket
\end{enumerate}
All the electrical components connected to the PCB in Level 3 are summarized in Tables \ref{tab:list_of_components_CAC} and \ref{tab:list_of_components_AAC}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{\textbf{CAC}} \\ \hline
\multicolumn{1}{|c|}{Area} & \multicolumn{1}{c|}{Electrical component} & \multicolumn{1}{c|}{\#} \\ \hline
\rowcolor[HTML]{FFCC67}
\cellcolor[HTML]{FFCC67} & Solenoid valve & 1 \\ \cline{2-3}
\rowcolor[HTML]{FFCC67}
\multirow{-2}{*}{\cellcolor[HTML]{FFCC67}CAC} & Temperature sensor & 3 \\ \hline
\end{tabular}
\caption{Connections to CAC Box.}
\label{tab:list_of_components_CAC}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{\textbf{AAC}} \\ \hline
\multicolumn{1}{|c|}{Area} & Electrical component & \# \\ \hline
\rowcolor[HTML]{FFCCC9}
\cellcolor[HTML]{FFCCC9} & Pump & 1 \\ \cline{2-3}
\rowcolor[HTML]{FFCCC9}
\cellcolor[HTML]{FFCCC9} & Heater & 2 \\ \cline{2-3}
\rowcolor[HTML]{FFCCC9}
\multirow{-4}{*}{\cellcolor[HTML]{FFCCC9}Level 1} & Temperature sensor & 1 \\ \hline
\rowcolor[HTML]{9AFF99}
\cellcolor[HTML]{9AFF99} & Static Pressure sensor & 1 \\
\cline{2-3}
\rowcolor[HTML]{9AFF99}
\cellcolor[HTML]{9AFF99} & Airflow sensor & 1 \\ \cline{2-3}
\rowcolor[HTML]{9AFF99}
\cellcolor[HTML]{9AFF99} & Solenoid valves & 7 \\ \cline{2-3}
\rowcolor[HTML]{9AFF99}
\cellcolor[HTML]{9AFF99} & Heater & 2 \\ \cline{2-3}
\rowcolor[HTML]{9AFF99}
\multirow{-5}{*}{\cellcolor[HTML]{9AFF99}Level 2} & Temperature sensor & 1 \\ \hline
\rowcolor[HTML]{96FFFB}
Sampling bags center & Temperature sensor & 3 \\ \hline
\rowcolor[HTML]{E9D66B}
Outside & Pressure sensor & 3 \\ \hline
\end{tabular}
\caption{Connections to AAC System.}
\label{tab:list_of_components_AAC}
\end{table}
\newpage
\subsection{Pneumatic System Interfaces}
\label{sec:appP}
All the fittings in the AAC and CAC subsystem were sponsored and manufactured by Swagelok.
\subsubsection{Straight Fittings}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-200-6.jpg}
\caption{SS-200-6}
\end{subfigure}
~
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-1-2.jpg}
\caption{SS-400-1-2}
\end{subfigure}
\caption{Straight Tube and Male Fittings.}
\label{Appx:Straight_fittings}
\end{figure}
\subsubsection{90 Degree Fittings}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-9.jpg}
\caption{SS-400-9}
\end{subfigure}
~
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-2-2.jpg}
\caption{SS-400-2-2}
\end{subfigure}
~
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-8-4.jpg}
\caption{SS-400-8-4}
\end{subfigure}
\caption{Various Kinds of 90 Degree Fittings}
\label{Appx:Elbow_fittings}
\end{figure}
\subsubsection{Tee Fittings}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-3.jpg}
\caption{SS-400-3}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-3-4TTM.jpg}
\caption{SS-400-3-4TTM}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-3-8TMT.jpg}
\caption{SS-400-3-8TMT}
\end{subfigure}
\caption{Various Kinds of Tee Fittings}
\label{Appx:Tee_fittings}
\end{figure}
\subsubsection{Quick Connectors}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-QC4-B-4PF.jpg}
\caption{SS-QC4-B-4PF}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-QC4-B-200.jpg}
\caption{SS-QC4-B-200}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-QC4-B-400.jpg}
\caption{SS-QC4-D-400}
\end{subfigure}
\caption{Quick Connect Body and Quick Connect Stem With Valve}
\label{Appx:QC_fittings}
\end{figure}
\subsubsection{Reducer and Adapters Fittings}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-6-2.jpg}
\caption{SS-400-6-2}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-4-TA-7-4RG.jpg}
\caption{SS-4-TA-7-4RG}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-300-R-4.jpg}
\caption{SS-300-R-4}
\end{subfigure}
\caption{Various Kinds of Reducer and Adapters Fittings}
\label{Appx:Reducer_Adapters_fittings}
\end{figure}
\subsubsection{Port Connections and Ferrule Set}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-401-PC.jpg}
\caption{SS-401-PC}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-C.jpg}
\caption{SS-400-C}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-400-SET.jpg}
\caption{SS-400-SET}
\end{subfigure}
~
\begin{subfigure}[b]{0.21\textwidth}
\centering
\includegraphics[width=\textwidth]{appendix/img/interfaces/SS-6M5-4M.jpg}
\caption{SS-6M5-4M}
\end{subfigure}
\caption{Various Kinds of Port Connections and Ferrule Set}
\label{Appx:Other_fittings}
\end{figure}
\newpage
\subsection{Manufacturing Drawings}
\label{sec:mech_drawings}
The following drafts are to be used to manufacture the mechanical components
of the experiment.
% \includepdf[scale=0.8,pages={1},pagecommand=\section{Appendix G - Equipment Loan Agreement}\label{sec:appG}]{appendix/pdf/equipement-loan-agreement.pdf}
% \includepdf[scale=0.8,pages={2,3}]{appendix/pdf/equipement-loan-agreement.pdf}
\includepdf[scale=1,pages={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36}]{appendix/pdf/manufacturing-drafts.pdf}
\newpage
%\appendix
\begin{landscape}
\subsection{Software Sequence Diagram} \label{sec:appB}
\subsubsection{Air Sampling Control Object Sequence diagrams}
\begin{figure}[H]
\centering
%\includegraphics[height=0.75\textwidth]{appendix/img/ASC-seq-dia-v1-2-a.png}
\includegraphics[height=0.75\textwidth]{appendix/img/softwareDiagrams/ASC-seq-dia-v1-3-ascent.jpg}
\caption{ASC Object in Normal Mode - Ascent.}
\label{ASCa}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/softwareDiagrams/ASC-seq-dia-v1-3-descent.jpg}
\caption{ASC Object in Normal Mode - Descent.}
\label{ASCb}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/ASC-seq-dia-v1-2-c.png}
\caption{ASC Object in Standby Mode.}
\label{ASCc}
\end{figure}
\subsubsection{Heating Object Sequence Diagrams}
\begin{figure}[H]
\centering
\includegraphics[height=0.8\textwidth]{appendix/img/heater-seq-dia-a.png}
\caption{Heating Object in Standby Mode.}
\label{heatera}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/heater-seq-dia-b.png}
\caption{Heating Object in Normal Mode - Ascent.}
\label{heaterb}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/heater-seq-dia-c.png}
\caption{Heating Object in Normal Mode - Descent.}
\label{heaterc}
\end{figure}
\subsubsection{Sensor Object Sequence Diagrams}
\begin{figure}[H]
\centering
%\includegraphics[height=0.8\textwidth]{appendix/img/sensor-seq-dia-a.png}
\includegraphics[height=0.8\textwidth]{appendix/img/softwareDiagrams/Sequance-Diagram-standby-Mode.jpg}
\caption{Sensor Object in Standby Mode.}
\label{sensora}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/softwareDiagrams/Sequance-Diagram ascent-Mode.jpg}
\caption{Sensor Object in Normal - Ascent Mode.}
\label{sensorb}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=0.9\textwidth]{appendix/img/softwareDiagrams/Sequance-Diagram-descent-Mode.jpg}
\caption{Sensor Object in Normal - Descent Mode.}
\label{sensorc}
\end{figure}
\end{landscape}
\newpage
\subsection{Software Interface Diagram} \label{sec:appC}
\subsubsection{Sensor Object Interface Diagram} %\label{}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/softwareDiagrams/interface_diagram-1-2.jpg}
\end{align*}
\caption{Sensor Object Interface Diagram.}
\label{fig:C1}
\end{figure}
\subsubsection{Air Sampling Control Object Interface Diagram} %\label{}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/softwareDiagrams/interface-diagram-AC-1-2.jpg}
\end{align*}
\caption{Air Sampling Control Object Interface Diagram.}
\label{fig:C2}
\end{figure}
\subsubsection{Heating Object Interface Diagram} %\label{}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/softwareDiagrams/interface-diagram-Heating-1-3.jpg}
\end{align*}
\caption{Heating Object Interface Diagram.}
\label{fig:C3}
\end{figure}
\newpage
\subsection{PCB Schematics}
\label{sec:pcbSchematics}
Red traces are routed on the top layer and blue traces on the bottom layer.
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{appendix/img/MainPCBTop.jpg}
\caption{Main PCB Top layer Layout in Eagle.}
\label{fig:PCBinEagle}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{appendix/img/MainPCBBot.jpg}
\caption{Main PCB Bottom Layer Layout in Eagle.}
\label{fig:PCBinEagleBottom}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{appendix/img/PresPCBTop.jpg}
\caption{Barometric Pressure Sensor PCB Top Layer Layout in Eagle.}
\label{fig:PresPCBinEagleTop}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{appendix/img/PresPCBBot.jpg}
\caption{Barometric Pressure Sensor PCB Bottom Layer Layout in Eagle.}
\label{fig:PresPCBinEagleBottom}
\end{figure}
\newpage
\includepdf[scale=0.75,pages={3},pagecommand=\subsection{Tube}]{appendix/pdf/Rtx-Catalog015-6_Tubing.pdf}
\includepdf[scale=0.75,pages={4},pagecommand=\subsection{AAC Manifold Valve}]{appendix/pdf/VDW_A_EU.pdf}
\includepdf[scale=0.75,pages={7},pagecommand=\subsection{AAC Flushing Valve and CAC Valve}]{appendix/pdf/VDW_B_EU.pdf}
\includepdf[scale=0.75,pages={8,9}]{appendix/pdf/VDW_B_EU.pdf}
\includepdf[scale=0.75,pages={4},pagecommand=\subsection{Pump}]{appendix/pdf/DataSheet_NMP830_850_E005_web.pdf}
\includepdf[scale=0.75,pages={1},pagecommand=\subsection{Airflow Sensor}]{appendix/pdf/honeywell-sensing-airflow-awm50000-series-catalog-pages.pdf}
\includepdf[scale=0.75,pages={2,3,4}]{appendix/pdf/honeywell-sensing-airflow-awm50000-series-catalog-pages.pdf}
\includepdf[scale=0.75,pages={1},pagecommand=\subsection{Static Pressure Sensor}]{appendix/pdf/Gems_3500_eng_tds.pdf}
\includepdf[scale=0.75,pages={2,3}]{appendix/pdf/Gems_3500_eng_tds.pdf}
\newpage
\section{Checklists}\label{sec:Checklist}
%\begin{landscape}
The Pre-Launch Checklist will be handled by two team members. One of them will be responsible for reading each item out loud and marking it when it is done. The other one will be responsible for performing the stated actions. At the same time, the one reading will check that the actions are properly conducted.
For three key actions (M5, M9, and M13), a third team member will be responsible for asking and, when possible, checking, that they have been properly conducted.
\subsection{Pre-Launch Checklist}\label{sec:appL}
\begin{longtable} {|m{0.1\textwidth}|m{0.8\textwidth}|m{0.1\textwidth}|}
\hline
\textbf{ID} & \textbf{ITEM} & \textbf{CHECK} \\
\hline
\multicolumn{2}{|l|}{ \textbf{SCIENCE} } & \\
\hline
& \textbf{CAC} & \\
\hline
S1 & Remove the CAC wall with the D-SUB connector, if it is not removed already. & \\ \hline
S2 & Connect the Picarro to the quick connector stem at No 10. & \\ \hline
S3 & Attach the fill gas bottle's quick connector stem to quick connector body No 1. & \\ \hline
S4 & Let the fill gas run through the AirCore at a flow rate of 40 ml/min. & \\ \hline
S5 & Leave it flushing overnight. & \\ \hline
S6 & Detach the quick connector stem at No 1. & \\ \hline
S7 & Detach the quick connector stem at No 10. & \\ \hline
S8 & Disconnect the Picarro analyser. & \\ \hline
S9 & Connect the dryer tube No 14 to No 13. & \\ \hline
S10 & Connect parts 11 to 21. & \\ \hline
S11 & Check that all connections are tight. & \\ \hline
S12 & Close the CAC's solenoid valve No 17. & \\ \hline
S13 & Connect quick connector stem No 10 to No 9. & \\ \hline
S14 & Connect No 10 with No 11. & \\ \hline
S15 & Check that all connections are tight. & \\ \hline
S16 & Put the CAC wall with the D-SUB connector back. & \\ \hline
& \textbf{AAC/MANIFOLD} & \\ \hline
S17 & Unscrew the plug from the inlet (1) and outlet tube (29). & \\ \hline
S18 & Screw in the male-threaded quick connector to the inlet tube (1). & \\ \hline
S19 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S20 & Open the flushing valve (27). & \\ \hline
S21 & Turn the central valve so that it is open to dry gas. & \\ \hline
S22 & Let the dry gas run through the AAC's manifold for 10 minutes. & \\ \hline
S23 & Close the flushing valve (27). & \\ \hline
S24 & Turn the central valve so that it is closed to dry gas. & \\ \hline
S25 & Disconnect the vacuum pump and the dry gas bottle with the central valve from the AAC's inlet tube (1). & \\ \hline
S26 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
& \textbf{AAC/TUBES/BAGS} & \\ \hline
S27 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's outlet tube (29). & \\ \hline
S28 & Make sure the AAC's inlet tube (1) is shielded. & \\ \hline
S29 & Open the 1st bag's manual valve. & \\ \hline
S30 & Open the flushing valve (27). & \\ \hline
S31 & Open the 1st bag's solenoid valve in the manifold (23). & \\ \hline
S32 & Open the central valve so that it is open to dry gas. & \\ \hline
S33 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S34 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S35 & Empty the bag with a controlled vacuum only 1--2 mbar below ambient pressure. & \\ \hline
S36 & Turn the central valve open to dry gas. & \\ \hline
S37 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S38 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S39 & Empty the bag with a controlled vacuum only 1--2 mbar below ambient pressure. & \\ \hline
S40 & Repeat steps S36 to S39 one more time. & \\ \hline
S41 & Close the 1st bag's solenoid valve in the manifold (23). & \\ \hline
S42 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's outlet tube (29). & \\ \hline
S43 & Unscrew the plug from the AAC inlet tube (1). & \\ \hline
S44 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S45 & Turn the central valve on so that it is open to the dry gas. & \\ \hline
S46 & Let the dry gas run through the AAC's manifold for 2 minutes. & \\ \hline
S47 & Close the flushing valve (27). & \\ \hline
S48 & Turn the central valve off so that it is closed to the dry gas. & \\ \hline
S49 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). & \\ \hline
S50 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
S51 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's outlet tube (29). & \\ \hline
S52 & Make sure the AAC's inlet tube (1) is shielded. & \\ \hline
S53 & Open the 2nd bag's manual valve. & \\ \hline
S54 & Open the flushing valve (27). & \\ \hline
S55 & Open the 2nd bag's solenoid valve in the manifold (23). & \\ \hline
S56 & Open the central valve so that it is open to the dry gas. & \\ \hline
S57 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S58 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S59 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S60 & Turn the central valve open to the dry gas. & \\ \hline
S61 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S62 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S63 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S64 & Repeat steps S60 to S63 one more time. & \\ \hline
S65 & Close the 2nd bag's solenoid valve in the manifold (23). & \\ \hline
S66 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's outlet tube (29). & \\ \hline
S67 & Unscrew the plug from the AAC inlet tube (1). & \\ \hline
S68 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S69 & Turn the central valve on so that it is open to the dry gas. & \\ \hline
S70 & Let the dry gas run through the AAC's manifold for 2 minutes. & \\ \hline
S71 & Close the flushing valve (27). & \\ \hline
S72 & Turn the central valve off so that it is closed to the dry gas. & \\ \hline
S73 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). & \\ \hline
S74 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
S75 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's outlet tube (29). & \\ \hline
S76 & Make sure the AAC's inlet tube (1) is shielded. & \\ \hline
S77 & Open the 3rd bag's manual valve. & \\ \hline
S78 & Open the flushing valve (27). & \\ \hline
S79 & Open the 3rd bag's solenoid valve in the manifold (23). & \\ \hline
S80 & Open the central valve so that it is open to the dry gas. & \\ \hline
S81 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S82 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S83 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S84 & Turn the central valve open to the dry gas. & \\ \hline
S85 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S86 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S87 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S88 & Repeat steps S84 to S87 one more time. & \\ \hline
S89 & Close the 3rd bag's solenoid valve in the manifold (23). & \\ \hline
S90 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's outlet tube (29). & \\ \hline
S91 & Unscrew the plug from the AAC inlet tube (1). & \\ \hline
S92 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S93 & Turn the central valve on so that it is open to the dry gas. & \\ \hline
S94 & Let the dry gas run through the AAC's manifold for 2 minutes. & \\ \hline
S95 & Close the flushing valve (27). & \\ \hline
S96 & Turn the central valve off so that it is closed to the dry gas. & \\ \hline
S97 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). & \\ \hline
S98 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
S99 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's outlet tube (29). & \\ \hline
S100 & Make sure the AAC's inlet tube (1) is shielded. & \\ \hline
S101 & Open the 4th bag's manual valve. & \\ \hline
S102 & Open the flushing valve (27). & \\ \hline
S103 & Open the 4th bag's solenoid valve in the manifold (23). & \\ \hline
S104 & Open the central valve so that it is open to the dry gas. & \\ \hline
S105 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S106 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S107 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S108 & Turn the central valve open to the dry gas. & \\ \hline
S109 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S110 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S111 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S112 & Repeat steps S108 to S111 one more time. & \\ \hline
S113 & Close the 4th bag's solenoid valve in the manifold (23). & \\ \hline
S114 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's outlet tube (29). & \\ \hline
S115 & Unscrew the plug from the AAC inlet tube (1). & \\ \hline
S116 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S117 & Turn the central valve on so that it is open to the dry gas. & \\ \hline
S118 & Let the dry gas run through the AAC's manifold for 2 minutes. & \\ \hline
S119 & Close the flushing valve (27). & \\ \hline
S120 & Turn the central valve off so that it is closed to the dry gas. & \\ \hline
S121 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). & \\ \hline
S122 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
\DIFaddend S123 & Connect the vacuum pump \DIFdelbegin \DIFdel{, }\DIFdelend \DIFaddbegin \DIFadd{and }\DIFaddend the dry gas bottle \DIFdelbegin \DIFdel{with }\DIFdelend \DIFaddbegin \DIFadd{through }\DIFaddend a central valve \DIFdelbegin \DIFdel{at the T-union (33)of the 5th bag}\DIFdelend \DIFaddbegin \DIFadd{to the AAC's outlet tube (29)}\DIFaddend . & \\ \hline
S124 & \DIFdelbegin \DIFdel{Connect a flow rate sensor close to the central valve. (valve that controls vacuum or filling bags) }\DIFdelend \DIFaddbegin \DIFadd{Make sure the AAC's inlet tube (1) is shielded}\DIFaddend . & \\ \hline
S125 & Open 5th bag's manual valve. & \\ \hline
S126 & \DIFdelbegin \DIFdel{Turn the central valve }\DIFdelend \DIFaddbegin \DIFadd{Open flushing valve (27). }& \\ \hline
\DIFadd{S127 }& \DIFadd{Open 5th bag's solenoid valve in the manifold (23) }& \\ \hline
\DIFadd{S128 }& \DIFadd{Open central valve so that is }\DIFaddend open to dry gas. & \\ \hline
\DIFdelbegin \DIFdel{S127 }\DIFdelend \DIFaddbegin \DIFadd{S129 }\DIFaddend & Start filling the bag with 3L of dry gas with a flow rate of 2L/min for 1.5 minutes. & \\ \hline
\DIFdelbegin \DIFdel{S128 }\DIFdelend \DIFaddbegin \DIFadd{S130 }\DIFaddend & After 1.5 mins, when the bag is full, turn the central valve open to the vacuum , allowing the bag to empty. & \\ \hline
\DIFdelbegin \DIFdel{S129 }\DIFdelend \DIFaddbegin \DIFadd{S131 }\DIFaddend & Empty the bag with controlled vacuum only 1-2 \DIFdelbegin \DIFdel{hPa }\DIFdelend \DIFaddbegin \DIFadd{mbar }\DIFaddend below ambient pressure. & \\ \hline
\DIFdelbegin \DIFdel{S130 }\DIFdelend \DIFaddbegin \DIFadd{S132 }\DIFaddend & Turn the central valve open to dry gas. & \\ \hline
\DIFdelbegin \DIFdel{S131 }\DIFdelend \DIFaddbegin \DIFadd{S133 }\DIFaddend & Start filling the bag with 3L of dry gas with a flow rate of 2L/min for 1.5 minutes. & \\ \hline
\DIFdelbegin \DIFdel{S132 }\DIFdelend \DIFaddbegin \DIFadd{S134 }\DIFaddend & After 1.5 mins, when the bag is full, turn the central valve open to the vacuum , allowing the bag to empty. & \\ \hline
\DIFdelbegin \DIFdel{S133 }\DIFdelend \DIFaddbegin \DIFadd{S135 }\DIFaddend & Empty the bag with controlled vacuum only 1-2 \DIFdelbegin \DIFdel{hPa }\DIFdelend \DIFaddbegin \DIFadd{mbar }\DIFaddend below ambient pressure. & \\ \hline
\DIFdelbegin \DIFdel{S134 }\DIFdelend \DIFaddbegin \DIFadd{S136 }\DIFaddend & Repeat \DIFaddbegin \DIFadd{steps S132 to S135 }\DIFaddend one more time. \DIFdelbegin \DIFdel{Total 3 times}\DIFdelend \DIFaddbegin & \\ \hline
\DIFadd{S137 }& \DIFadd{Close 5th bag's solenoid valve in the manifold (23)}\DIFaddend . & \\ \hline
\DIFdelbegin \DIFdel{S135 }\DIFdelend \DIFaddbegin \DIFadd{S138 }\DIFaddend & Disconnect the vacuum pump \DIFdelbegin \DIFdel{, }\DIFdelend \DIFaddbegin \DIFadd{and }\DIFaddend the dry gas bottle \DIFdelbegin \DIFdel{system from the T-union (33)of the 5th bag}\DIFdelend \DIFaddbegin \DIFadd{through a central valve from the AAC's outlet tube (29)}\DIFaddend . & \\ \hline
\DIFdelbegin \DIFdel{S136 }\DIFdelend \DIFaddbegin \DIFadd{S139 }\DIFaddend & \DIFaddbegin \DIFadd{Unscrew the plug from the AAC inlet tube (1). }& \\ \hline
\DIFadd{S140 }& \DIFaddend Connect the vacuum pump \DIFdelbegin \DIFdel{, }\DIFdelend \DIFaddbegin \DIFadd{and }\DIFaddend the dry gas bottle \DIFdelbegin \DIFdel{with }\DIFdelend \DIFaddbegin \DIFadd{through }\DIFaddend a central valve \DIFdelbegin \DIFdel{at the T-union (33)of the 6th bag}\DIFdelend \DIFaddbegin \DIFadd{to the AAC's inlet tube (1)}\DIFaddend . & \\ \hline
\DIFdelbegin \DIFdel{S137 }\DIFdelend \DIFaddbegin \DIFadd{S141 }\DIFaddend & \DIFdelbegin \DIFdel{Connect a flow rate sensor close to the central valve}\DIFdelend \DIFaddbegin \DIFadd{Turn central valve on so that is open to dry gas. }& \\ \hline
\DIFadd{S142 }& \DIFadd{Let the dry gas run through the AAC's manifold for 2 minutes}\DIFaddend . \DIFdelbegin \DIFdel{(valve that controls vacuum or filling bags}\DIFdelend \DIFaddbegin & \\ \hline
\DIFadd{S143 }& \DIFadd{Close flushing valve (27}\DIFaddend ) \DIFaddbegin & \\ \hline
\DIFadd{S144 }& \DIFadd{Turn central valve off so that is close to dry gas. }& \\ \hline
\DIFadd{S145 }& \DIFadd{Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). }& \\ \hline
\DIFadd{S146 }& \DIFadd{Screw in the plug to the AAC inlet tube (1). }& \\ \hline
S147 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's outlet tube (29). & \\ \hline
S148 & Make sure the AAC's inlet tube (1) is shielded. & \\ \hline
S149 & Open the 6th bag's manual valve. & \\ \hline
S150 & Open the flushing valve (27). & \\ \hline
S151 & Open the 6th bag's solenoid valve in the manifold (23). & \\ \hline
S152 & Open the central valve so that it is open to the dry gas. & \\ \hline
S153 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S154 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S155 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S156 & Turn the central valve open to the dry gas. & \\ \hline
S157 & Start filling the bag with 3 L of dry gas at a flow rate of 2 L/min for 1.5 minutes. & \\ \hline
S158 & After 1.5 minutes, when the bag is full, turn the central valve open to the vacuum, allowing the bag to empty. & \\ \hline
S159 & Empty the bag with a controlled vacuum, only 1-2 mbar below ambient pressure. & \\ \hline
S160 & Repeat steps S156 to S159 one more time. & \\ \hline
S161 & Close the 6th bag's solenoid valve in the manifold (23). & \\ \hline
S162 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's outlet tube (29). & \\ \hline
S163 & Unscrew the plug from the AAC inlet tube (1). & \\ \hline
S164 & Connect the vacuum pump and the dry gas bottle through a central valve to the AAC's inlet tube (1). & \\ \hline
S165 & Turn the central valve on so that it is open to the dry gas. & \\ \hline
S166 & Let the dry gas run through the AAC's manifold for 2 minutes. & \\ \hline
S167 & Close the flushing valve (27). & \\ \hline
S168 & Turn the central valve off so that it is closed to the dry gas. & \\ \hline
S169 & Disconnect the vacuum pump and the dry gas bottle through a central valve from the AAC's inlet tube (1). & \\ \hline
S170 & Screw in the plug to the AAC inlet tube (1). & \\ \hline
\DIFaddend \multicolumn{2}{|l|}{ \textbf{ELECTRICAL} } & \\ \hline
E1 & Check that all D-subs (3 9-pin: Bags, Out, CAC; and 2 15-pin: Level 1 and 2) are connected and screwed in on the PCB (hand tight, DO NOT TIGHTEN TOO HARD). & \\ \hline
E2 & Check that the plastic 28.8V power is connected to the level 1 and 2 connectors (red wire, plastic connector; male on the wires going to each level, female underneath the PCB). & \\ \hline
E3 & Check that the plastic 28.8V power cable from the PCB is secured with zip tie to one of the standoffs. & \\ \hline
E4 & Check that power is plugged in on the PCB. & \\ \hline
E5 & Check that the Ethernet is connected from the PCB to the wall (you should hear a click). & \\ \hline
E6 & Check that the outside pressure sensors are connected to the outside upper (furthest from the frame) 9-pin D-sub wall connector (hand tight, DO NOT TIGHTEN TOO HARD). & \\ \hline
E7 & Check that CAC is connected on the outside lower (closest to frame) 9-pin D-sub wall connector (hand tight, DO NOT TIGHTEN TOO HARD). & \\ \hline
E8 & Check that power is connected on the outside wall. & \\ \hline
E9 & Check that the Ethernet is connected on the outside wall (should hear click). & \\ \hline
E10 & Check that the main PCB board is secure (locking nuts where possible and no nut for the rest). & \\ \hline
E11 & Check that pressure sensors are secure on the outside (Bolted down with locking nuts). & \\ \hline
E12 & Check output voltage from DCDC's and make sure they are used equally (after diode). & \\ \hline
E13 & Verify sensors give data to ground station. & \\ \hline
E14 & Verify that all valves open and close as expected (listen and check PCB lights).& \\ \hline
E15 & Verify that the heaters get warm when they are turned on (check temperature data, feel, and check the lights). & \\ \hline
\multicolumn{2}{|l|}{ \textbf{SOFTWARE} } & \\ \hline
SW1 & The ground station laptop PC will need to be put in place and operational. & \\ \hline
SW2 & The correct version of the onboard software has been uploaded to the OBC. & \\ \hline
SW3 & The communication through E-link with the experiment shall be tested. & \\ \hline
SW4 & Verify that the data from sensors are realistic. & \\ \hline
SW5 & The air sampling itinerary is checked. & \\ \hline
SW6 & SD card contents are checked. & \\ \hline
\multicolumn{2}{|l|}{ \textbf{MECHANICAL} } & \\
\hline
M1 & Check that the frame structure is properly fixed. & \\
\hline
M2 & Check that the handles of both boxes are properly fixed. & \\
\hline
& \textbf{AAC BOX} & \\
\hline
M3 & Check that The Brain is properly attached to the structure of the AAC Box. & \\
\hline
M4 & Check that all the pneumatic connections are set (interfaces, valves, bags): use the manufactured tool for this matter. & \\
\hline
M5 & Check that the bags' valves are open. & \\
\hline
M6 & Check that the bags are properly fixed with the circular bar. & \\
\hline
M7 & Check that the electronic interfaces panel is properly fixed to the top wall. & \\
\hline
M8 & Close all the open walls and check that they are all properly fixed and closed. & \\
\hline
M9 & Unscrew the plugs from the inlet and the outlet tube. & \\
\hline
& \textbf{CAC BOX} & \\
\hline
M10 & Check that the AirCore is properly placed. & \\
\hline
M11 & Check that all the pneumatic connections are set (interfaces, valves) & \\
\hline
M12 & Close all the open walls and check that they are all properly fixed and closed. & \\
\hline
M13 & Unscrew the plug from the inlet/outlet tube. & \\
\hline
& \textbf{GONDOLA} & \\
\hline
M14 & Attach both boxes one to the other. & \\
\hline
M15 & Introduce both boxes inside the gondola. & \\
\hline
M16 & Check that the experiment box is fixed to the gondola rails (10 anchor points). & \\
\hline
M17 & Check that the electronic connectors are properly fixed to both electronic panels (D-sub, power, E-link) & \\
\hline
\end{longtable}
\subsection{Cleaning Checklist}
\begin{longtable}{|m{0.75\textwidth}|m{0.2\textwidth}|} \hline
\textbf{Why Cleaning is Important} & \\ \hline
Grease on pipe and fittings will outgas and contaminate samples & \\ \hline
Dust increases the risk of condensation which destroys samples & \\ \hline
Organic material can outgas and contaminate samples & \\ \hline
\end{longtable}
\begin{longtable}{|m{0.75\textwidth}|m{0.2\textwidth}|} \hline
\textbf{DO NOT } & \\ \hline
Blow into tubes or fittings & \\ \hline
Handle the pneumatic system without gloves & \\ \hline
Leave clean items unsealed on the bench & \\ \hline
\end{longtable}
\begin{longtable}{|m{0.75\textwidth}|m{0.2\textwidth}|} \hline
\textbf{Before you begin} & \\ \hline
Workspace clear of debris & \\ \hline
Signage up that this area is clean so no touching & \\ \hline
Wearing gloves & \\ \hline
Workspace cleaned with IPA & \\ \hline
Tools cleaned with IPA & \\ \hline
Tupperware storage cleaned with IPA & \\ \hline
\end{longtable}
\begin{longtable}{|m{0.75\textwidth}|m{0.2\textwidth}|} \hline
\textbf{Working Procedure } & \\ \hline
\textbf{Cutting} & \\ \hline
Use pipe cutter & \\ \hline
Ensure debris does not fall onto workspace & \\ \hline
Cut one piece at a time & \\ \hline
\textbf{Reaming} & \\ \hline
Use the reaming tool & \\ \hline
The reamer must be lower than the pipe & \\ \hline
Ensure debris does not fall onto workspace & \\ \hline
Minimise debris falling further into pipe & \\ \hline
Use oil free compressed air to clear pipe of debris & \\ \hline
DO NOT BLOW INTO PIPE & \\ \hline
\textbf{Bending} & \\ \hline
Use the bending tool & \\ \hline
Clamp one end of bending tool to bench if possible & \\ \hline
Bend slowly & \\ \hline
Bend slightly (1 or 2 degrees) further than the target & \\ \hline
Check bend with protractor BEFORE removing it & \\ \hline
If bend not correct rebend & \\ \hline
If bend correct remove pipe & \\ \hline
\textbf{Cleaning after} & \\ \hline
Place Kapton tape (or equivalent) over both pipe ends & \\ \hline
Place pipe into clean tupperware box & \\ \hline
Place tools into clean tupperware box & \\ \hline
\textbf{Fittings} & \\ \hline
Use IPA to clean vigorously before attachment & \\ \hline
AVOID touching with bare hands & \\ \hline
Always follow correct Swagelok procedure when attaching & \\ \hline
\caption{Table Containing the Cleaning Checklist for use During Manufacture.}
\label{tab:appcleancheck}
\end{longtable}
\subsection{Recovery Team Checklist}\label{ssec:RecoveryCheck}
FAST RECOVERY OF CAC
\begin{itemize}
\item Check that no damage exists to the outer structure and that no white paste is seen in the inlet tubes; this confirms there is no leak and the chemicals are SAFE.
\item Screw on the three metal plugs provided to the inlet and outlet tubes.
\item Unplug the gondola power cord from the AAC box. Circled with RED paint. See Figure \ref{fig:Interfaces_Detail_I}.
\item Unplug the E-Link connection from the AAC box. Circled with RED paint. See Figure \ref{fig:Interfaces_Detail_I}.
\item Unplug the D-Sub connector from the CAC Box. Circled with RED paint. See Figure \ref{fig:Interfaces_Detail_I}.
\item Unscrew the 6 screws vertically aligned in the CAC frame on the outside face of the experiment. Painted in RED. Allen key \#3. See Figure \ref{fig:Interfaces_Detail_II}.
\item Unscrew the 6 screws vertically aligned in the CAC frame on the inside face of the experiment (opposite to the outside). Painted in RED. Allen key \#3.
\item Unscrew the 2 gondola attachment points from the CAC, L-shape anchor, 4 screws in total. Allen key \#3. See Figure \ref{fig:Interfaces_Detail_III}.
\item Loosen the gondola's safety wire (on the CAC side); use a thick Allen key (e.g. Allen key \#5) and a clamp.
\item Remove the CAC Box from the gondola from the lateral side. Handles are located at the top of the box. First lift it up, then drag it out. Take care that the outlet tube does not hit the gondola structure.
\item Tighten the gondola's safety wire (on the CAC side); use a thick Allen key (e.g. Allen key \#5) and a clamp.
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{appendix/img/Recovery_1.jpg}
\caption{Electrical Interfaces detail.}
\label{fig:Interfaces_Detail_I}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{appendix/img/Recovery_2.jpg}
\caption{Fast Recovery Interfaces detail, boxes attachment.}
\label{fig:Interfaces_Detail_II}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{appendix/img/Figure_49_Gondola_c.png}
\caption{Fast Recovery Interfaces detail.}
\label{fig:Interfaces_Detail_III}
\end{figure}
REGULAR RECOVERY OF AAC
\begin{itemize}
\item Unscrew the 8 gondola attachment points from the AAC.
\item Remove the AAC Box from the gondola. Handles are located at the top of the box.
\end{itemize}
If the recovery is not nominal, the following instructions should be followed. It should be noted that the on-board chemical, magnesium perchlorate, has the appearance of white powdery stones when dry and of a white paste when wet.
\begin{itemize}
\item If the outer structure is damaged or white paste is seen in the inlet tubes, put on the provided gloves before proceeding. Assume the possibility that the chemicals are UNSAFE.
\item If white paste is seen, wipe it with the provided cloth and seal the end of the tube with the provided plugs. Put any contaminated items into a bag, which is then sealed.
\item In the event magnesium perchlorate comes into contact with skin, wash immediately with water (following the MSDS procedure).
\item In the event magnesium perchlorate comes into contact with clothes, remove the clothes as soon as possible and wash them before wearing again.
\item Even if no contact was made with the magnesium perchlorate, it is recommended to wash hands afterwards as a preventative measure.
\item In the event that magnesium perchlorate (white paste or white stones in appearance) is seen on the ground or inside the gondola, it should be recovered with gloves, wiped up, and placed into a sealed bag.
\end{itemize}
Provided material
\begin{itemize}
\item Gloves
\item Piece of cloth
\item Plastic bag
\item Three plugs
\item Allen key set (at least \#3 and \#5)
\item Clamp
\end{itemize}
\newpage
\newpage
%\raggedbottom
%\end{landscape}
\begin{landscape}
\section{Team Availability} \label{sec:appD}
\subsection{Team availability from February 2018 to July 2018}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/team-availability/teamavaliabilitytojuly.png}
\end{align*}
\caption{Team Availability From February 2018 to July 2018.}
\label{fig:team-availability-feb18-jul18}
\end{figure}
\end{landscape}
\pagebreak
\begin{landscape}
\subsection{Team availability from August 2018 to January 2019}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/team-availability/teamavaliabilitytojan.png}
\end{align*}
\caption{Team Availability From August 2018 to January 2019.}
\label{fig:team-availability-aug18-jan19}
\end{figure}
\end{landscape}
\pagebreak
\begin{landscape}
\subsection{Graph Showing Team availability Over Summer}
Green squares with question marks indicate uncertainty over whether someone will be available in Kiruna at that time.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/team-availability/summeravaliability.png}
\end{align*}
\caption{Graph Showing Team Availability Over the Summer Period.}
\label{fig:team-availability-aug18-jan19}
\end{figure}
\end{landscape}
\begin{landscape}
\section{Gantt Chart} \label{sec:appF}
The current critical path starts with ordering and receiving parts; until this is done, building cannot take place. The key components are the pump, valves, tubing, fittings, and Arduino. Once the orders have been received, building can take place and then testing can begin. All remaining tests require some degree of building to be completed. Certain tests, such as Test 17 in Table \ref{tab:samples-condensation-test}, require the entire pneumatic system to be completed, while others, such as Test 2 in Table \ref{tab:data-coll-test}, require just the electronics and software.
\subsection{Gantt Chart (1/4)}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1.5\textwidth,height=0.8\textheight]{appendix/img/gantt-chart/gantt-chart-updated-part1.png}
\end{align*}
\caption{Gantt Chart (1/4).}
\label{fig:gantt-chart-1}
\end{figure}
\subsection{Gantt Chart (2/4)}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1.5\textwidth]{appendix/img/gantt-chart/gantt-chart-updated-part2.png}
\end{align*}
\caption{Gantt Chart (2/4).}
\label{fig:gantt-chart-2}
\end{figure}
\subsection{Gantt Chart (3/4)}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1.5\textwidth]{appendix/img/gantt-chart/gantt-chart-updated-part3.png}
\end{align*}
\caption{Gantt Chart (3/4).}
\label{fig:gantt-chart-3}
\end{figure}
\subsection{Gantt Chart (4/4)}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1.5\textwidth]{appendix/img/gantt-chart/gantt-chart-updated-part4.png}
\end{align*}
\caption{Gantt Chart (4/4).}
\label{fig:gantt-chart-4}
\end{figure}
By comparing the team availability in Appendix \ref{sec:appD} to the Gantt chart, it can be seen that there is lower team availability across the summer. In this time frame there are two periods with particularly low team availability: the early summer and early August. The work has been planned so that the critical work will be completed in the periods with higher availability. In the event that the work takes longer than expected, the question marks can become green.
\end{landscape}
\includepdf[scale=0.8,pages={1},pagecommand=\section{Equipment Loan Agreement}\label{sec:appG}]{appendix/pdf/equipement-loan-agreement.pdf}
\includepdf[scale=0.8,pages={2,3}]{appendix/pdf/equipement-loan-agreement.pdf}
\section{Air Sampling Model for BEXUS Flight}\label{sec:appH}
\subsection{Introduction}
% what is this? simulation software for ascent and Descent Phases
\subsubsection{Objectives}
The purpose of this appendix is to theoretically simulate the experiment: its preparation, the sampling methodology, and the expected results.
\subsubsection{Justification}
% why?
% The air sample volume is limited, how it should be distributed over the altitude?
% How many bags do we need?
% When we will open and close each bag?
% How big the bags should be?
% many parameter changing over the altitude and time: gondola velocity, air pressure and density (required sampling volume), pump efficiency (sampling time)
% parameter to control: bags resolution (Tubular number??)
This theoretical model will give an estimation of: the time needed to fill the bags in order to achieve the best resolution; the required volume of the samples at the different altitudes, to make sure that there is enough sample left for analysis; the sampling altitudes; and the number of bags.
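As a rough orientation, these quantities are linked by a simple relation; the symbols below are introduced only for this back-of-the-envelope estimate and are not used elsewhere in this document:
\begin{equation*}
\Delta z \approx v \, t_{\mathrm{fill}}, \qquad t_{\mathrm{fill}} \approx \frac{V_{\mathrm{bag}}}{Q(p)},
\end{equation*}
where $\Delta z$ is the altitude range covered by one bag sample, $v$ the vertical speed of the gondola, $t_{\mathrm{fill}}$ the time needed to fill one bag, $V_{\mathrm{bag}}$ the bag volume, and $Q(p)$ the pump flow rate at ambient pressure $p$. For example, with a 3 L bag, the measured flow rate of about 1.22 L/min at 12 km (see Table \ref{tab:pump-flowrate-efficiency}) and an ascent speed of 5 m/s, a single sample would be smeared over roughly 750 m of altitude.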
\subsubsection{Methodology}
% how?
% context study, critical scenarios identification, equations, matlab simulation, verification with empirical measures.
For this purpose, a mathematical model was created using MATLAB. In order to make sure that this model is reliable, it is going to be tested for the atmospheric conditions in the Arctic and then compared with the 1976 US Standard Atmosphere model that is used for this region. In addition, the model will be compared with past BEXUS flight data; the goal is for the model to match these past data as closely as possible.
After the tests, once the mathematical model has been shown to be accurate, it will be adjusted to the requirements of the TUBULAR experiment. In this way, the TUBULAR Team will get a general picture of the experiment's layout. Hence, the results of the experiment will be broadly as expected and, in the case of complications, the mathematical model can be used as a reference for understanding what went wrong.
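As an illustration of the kind of comparison described above, the short sketch below evaluates the 1976 US Standard Atmosphere temperature and pressure up to 32 km. It is an independent reference implementation for orientation only (it is not the TUBULAR MATLAB model itself); the layer constants are the standard published values.
\begin{verbatim}
import math

# US Standard Atmosphere 1976 layers up to 32 km:
# base geopotential altitudes [m] and temperature lapse rates [K/m]
LAYER_BASES = [0.0, 11000.0, 20000.0, 32000.0]
LAPSE_RATES = [-0.0065, 0.0, 0.0010]

G0 = 9.80665      # gravitational acceleration [m/s^2]
R_AIR = 287.053   # specific gas constant of dry air [J/(kg K)]
T0, P0 = 288.15, 101325.0   # sea-level temperature [K] and pressure [Pa]

def ussa1976(h_m):
    """Return (temperature [K], pressure [Pa]) at geopotential altitude h_m (0-32 km)."""
    t, p = T0, P0
    for i, lapse in enumerate(LAPSE_RATES):
        dh = min(h_m, LAYER_BASES[i + 1]) - LAYER_BASES[i]
        if dh <= 0:
            break
        if lapse == 0.0:                  # isothermal layer
            p *= math.exp(-G0 * dh / (R_AIR * t))
        else:                             # constant-lapse-rate layer
            t_new = t + lapse * dh
            p *= (t_new / t) ** (-G0 / (R_AIR * lapse))
            t = t_new
    return t, p

for km in (0, 12, 17, 20, 24, 30):
    t, p = ussa1976(km * 1000.0)
    print(f"{km:2d} km: T = {t:6.1f} K, p = {p / 100.0:6.2f} hPa")
\end{verbatim}
The pressures this returns (roughly 194 hPa at 12 km, 88 hPa at 17 km, 55 hPa at 20 km, and 30 hPa at 24 km) agree closely with the values listed later in Table \ref{tab:pump-flowrate-efficiency}, which gives some confidence in using these layer constants as the reference atmosphere for the model.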
\subsection{Scientific and Empirical Background}
\subsubsection{Study of Previous BEXUS Flights}
This section has been elaborated based on the flight data files located in the previous BEXUS flight folders on the REXUS/BEXUS teamsite. These data were recorded by the Esrange Balloon Service System (EBASS).
\smallskip
This unit, operated by Esrange, is responsible for the piloting of the balloon. It provides the communication link between the gondola and the ground station. The EBASS airborne unit receives the data from the on-board sensors and then sends them to the EBASS ground unit. It is also responsible for payload control, providing functions such as altitude control, by valve and ballast release, or flight termination. In addition, EBASS keeps track of the flight trajectory with an on-board GPS system.
\smallskip
Tables \ref{table:pre-flight} and \ref{table:post-flight} below gather some general information from before and after previous BEXUS flights. The pre-flight and post-flight data are largely in agreement when estimating, for example, the ascent/descent time, the cut-off altitude, and the float time. Knowing this information, and that the estimations are close enough to the real data, will help the TUBULAR Team to define the experiment's parameters with higher accuracy.
\smallskip
It is worth mentioning that the ascent speed in Table \ref{table:post-flight} is lower than the predicted $5\sim6\ m/s$ mentioned in the BEXUS manual. That is because it is the average velocity value over all the data points.
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|c|c|c|}
\cline{2-7}
\multicolumn{1}{l|}{} & \textbf{BEXUS 20} & \textbf{BEXUS 21} & \textbf{BEXUS 22} & \textbf{BEXUS 23} & \textbf{BEXUS 24} & \textbf{BEXUS 25} \\ \hline
\multicolumn{1}{|c|}{Main Balloon} & Zodiac 12SF & Zodiac 12SF & Zodiac 35SF & Zodiac 35SF & Zodiac 12SF & Zodiac 12SF \\ \hline
\multicolumn{1}{|c|}{Balloon mass {[}kg{]}} & 101.4 & 101.4 & - & - & 101.4 & 101.4 \\ \hline
\multicolumn{1}{|c|}{Parachute {[$m^2$]}} & 80 & 80 & 80 & 80 & 80 & 80 \\ \hline
\multicolumn{1}{|c|}{Vehicle mass - Launch {[}kg{]}} & 256.8 & 287.8 & - & - & 300.6 & 321.15 \\ \hline
\multicolumn{1}{|c|}{Vehicle mass - Descent {[}kg{]}} & 155.4 & 186.4 & 189.58 & 181.5 & 199.2 & 219.75 \\ \hline
\multicolumn{1}{|c|}{Float altitude estimation {[}km{]}} & 28.2 & 27.5 & - & - & 27 & 26.6 \\ \hline
\multicolumn{1}{|c|}{Float pressure estimation {[}hPa{]}} & 15.38 & 17.11 & - & - & 18.5 & 19.6 \\ \hline
\multicolumn{1}{|c|}{Float temperature estimation {[}$\degree{C}${]}} & - 48 & - 48 & - & - & - 49.5 & - 49.9 \\ \hline
\multicolumn{1}{|c|}{Estimated ascent time} & 1h 33min & 1h 31min & - & - & 1h 29min & 1h 27min \\ \hline
\end{tabular}}}
\caption{Pre-flight Information Available in Previous BEXUS Campaigns.}
\label{table:pre-flight}
\end{table}
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|c|c|c|}
\cline{2-7}
& \textbf{BEXUS 20} & \textbf{BEXUS 21} & \textbf{BEXUS 22} & \textbf{BEXUS 23} & \textbf{BEXUS 24} & \textbf{BEXUS 25} \\ \hline
\multicolumn{1}{|c|}{Ascent time} & 1h 37min & 1h 37min & 1h 51min & 1h 51min & 1h 55min & 3h 45min \\ \hline
\multicolumn{1}{|c|}{Average ascent speed [m/s]} & 4.78 & 4.59 & 4.52 & 4.79 & 3.79 & 1.86 \\ \hline
\multicolumn{1}{|c|}{Floating altitude [km]} & 28 & 27 & 32 & 32 & 26.5 & 25.8 \\ \hline
\multicolumn{1}{|c|}{Floating time} & 2h 10min & 1h 46min & 2h 34min & 2h 42min & 2h 9min & 2h 36min \\ \hline
\multicolumn{1}{|c|}{Cut-off altitude [km]} & 27.7 & 20.5 & 28 & 32 & 25.7 & 25.2 \\ \hline
\multicolumn{1}{|c|}{Ending altitude [m]} & 648 & 723 & 3380 & 1630 & 1050 & - \\ \hline
\multicolumn{1}{|c|}{Descent time} & 36 min & 31 min & 29 min & 31 min & 30 min & - \\ \hline
\end{tabular}}}
\caption{Post-flight Information Regarding the Flight Profile for Previous BEXUS Campaigns.} \label{table:post-flight}
\end{table}
In order to find out how many bags it is possible to sample during the Ascent and Descent Phases, it is important to know the duration of each phase, i.e. Ascent, Floating, and Descent. For that reason, Figure \ref{fig:trajectories} provides some insight into how previous BEXUS flights performed and what can be expected from BEXUS 26.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/flighttrajectory.png}}
\end{align*}
\caption{Altitude Over Flight Time for BEXUS Flights 20,21,22,23,24 and 25.}\label{fig:trajectories}
\end{figure}
%\subsubsubsection{Gondola Dynamics}
\textbf{Gondola Dynamics}
The velocity of the gondola in each phase provides information about its dynamics. As an example, the data from BEXUS flight 22 were chosen for analysis in order to get an idea of the velocity values and fluctuations throughout the flight. The resulting diagrams, with some marked points showing the time it takes the gondola to reach a certain altitude, or the velocity of the gondola at a specific altitude, are shown below.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/altitudevelocity.png}}
\end{align*}
\caption{Altitude Profile [Up] and Vertical Velocity Profile [Down] Over the Flight Time During BEXUS 22 Flight.}\label{fig:altitudevelocity}
\end{figure}
Figure \ref{fig:velocity} below illustrates the velocity changes throughout the different phases. It works as a combination of both graphs from the previous Figure \ref{fig:altitudevelocity}; however, it provides a better representation of the velocity values in each phase, especially during the Descent Phase, which is the most decisive for the air sampling process.
\smallskip
For each altitude there are two velocity values, one for the Ascent and one for the Descent Phase. Constant, positive velocities indicate the Ascent Phase; during the Ascent Phase the velocity is about 6 m/s and almost constant, in agreement with the ascent speed value in the BEXUS manual. A velocity of zero indicates the Float Phase. The velocity then becomes negative, which indicates the Descent Phase. Once again, the velocity value close to the ground is about 8 m/s, as mentioned in the BEXUS manual \cite{BexusManual}.
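To make the phase identification concrete, the following minimal sketch shows how a vertical velocity profile of this kind can be derived from an altitude-versus-time log and split into Ascent, Float, and Descent. The synthetic flight profile and the 0.5 m/s threshold used here are illustrative assumptions only, not the actual EBASS data format.
\begin{verbatim}
import numpy as np

def vertical_velocity(t_s, alt_m):
    """Central-difference vertical velocity [m/s] from time [s] and altitude [m] samples."""
    return np.gradient(np.asarray(alt_m, float), np.asarray(t_s, float))

def classify_phase(v, threshold=0.5):
    """Label each sample as ascent, float, or descent from its vertical velocity."""
    return np.where(v > threshold, "ascent",
                    np.where(v < -threshold, "descent", "float"))

# Synthetic example: 5 m/s ascent to 27 km, two hours of float, then ~8 m/s descent.
t = np.arange(0.0, 6 * 3600.0, 60.0)              # one sample per minute
alt = np.minimum(5.0 * t, 27000.0)                # ascent, capped at the float altitude
descent_start = 27000.0 / 5.0 + 2 * 3600.0        # float ends two hours after reaching 27 km
alt = np.where(t > descent_start,
               np.maximum(27000.0 - 8.0 * (t - descent_start), 0.0), alt)

v = vertical_velocity(t, alt)
labels, counts = np.unique(classify_phase(v), return_counts=True)
print(dict(zip(labels, counts)))
\end{verbatim}
Applied to a real EBASS log, the same few lines should reproduce the qualitative picture of Figure \ref{fig:velocity}: positive, roughly constant velocity during ascent, values near zero during float, and negative values during descent.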
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/velocity.png}}
\end{align*}
\caption{Vertical Velocity of the Gondola Over the Altitude During BEXUS 22 Flight.}\label{fig:velocity}
\end{figure}
\bigskip
\underline{Atmospheric Conditions}
\smallskip
In order to see how the atmospheric conditions change during a BEXUS flight, the data from BEXUS flight 22 were chosen for analysis. Figure \ref{fig:atmosphericconditions} below shows the kind of information that is available for different parameters, such as the temperature, the pressure, and the air density, as a function of altitude.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.3\textwidth]{appendix/img/BX22_atmospheric_variables_hPa.jpg}}
\end{align*}
\caption{Variations in Temperature, Pressure and Air Density During the Ascent and Descent Phase for BEXUS Flight 22.}\label{fig:atmosphericconditions}
\end{figure}
\subsubsection{Trace Gases Distribution}\label{tracegases}
Atmospheric greenhouse gases are mostly concentrated in the upper troposphere and lower stratosphere. The Arctic region is of significant importance since that is where the maximum concentration of greenhouse gases is found, due to the meridional circulation (driven by temperature differences) that pushes the gases from equatorial to higher latitudes. Figures \ref{fig:carbondistribution} and \ref{fig:methanedistribution} show the concentration over latitude of two of the main greenhouse gases, $CO_2$ and $CH_4$ respectively.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=0.8\textwidth]{appendix/img/carbondistribution.png}}
\end{align*}
\caption{Global Distribution of Atmospheric Carbon Dioxide\cite{latitude}.}\label{fig:carbondistribution}
\end{figure}
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=0.8\textwidth]{appendix/img/methanedistribution.png}}
\end{align*}
\caption{Global Distribution of Atmospheric Methane\cite{latitude}.}\label{fig:methanedistribution}
\end{figure}
The same applies to the vertical distribution of atmospheric greenhouse gases. The favoured altitudes for higher concentrations are the upper troposphere and the lower stratosphere, due to gravity waves and the vertical wind, which carry the trace gases to higher altitudes. Moreover, $CO_2$ has a long lifetime in the troposphere and stratosphere, where it has essentially no sources or sinks, since it is basically chemically inert in the free troposphere.
\smallskip
Figure \ref{fig:verticalco2} shows the global distribution of carbon dioxide in the upper troposphere-stratosphere, at 50-60\degree N for the time period 2000-2010.
\smallskip
Figure \ref{fig:seasonal} shows the global distribution of the seasonal cycle of the monthly mean $CO_2$ (in ppmv) in the upper troposphere and the lower stratosphere for the even months of 2010 and the altitude range from 5 to 45 km.
\begin{figure}[H]
\begin{align*}
\makebox[\textwidth]{%
\includegraphics[width=0.4\textwidth]{appendix/img/verticaldistributionco2.png}}
\end{align*}
\caption{Global Distribution of $CO_2$ in the Upper Troposphere-Stratosphere\cite{CO2distribution}.}\label{fig:verticalco2}
\end{figure}
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/monthlymeanc02.png}}
\end{align*}
\caption{Global Distribution of the Seasonal Cycle of the Monthly Mean $CO_2$ (in ppm) in the Upper Troposphere and the Lower Stratosphere for the Even Months of 2010\cite{CO2distribution}.}\label{fig:seasonal}
\end{figure}
Figures \ref{fig:verticalco2} and \ref{fig:seasonal} indicate that the highest $CO_2$ concentrations are found between 5 and 25 km, with peaks around 10 to 15 km (Figure \ref{fig:verticalco2}) and around 20 km for October (Figure \ref{fig:seasonal}).
\smallskip
Figures \ref{fig:profile-olivier-membrive} and \ref{fig:profile-lisa} focus more on the region near the Arctic Circle. These figures represent vertical distribution profiles of $CO$, $CO_2$ and $CH_4$ extracted from past research papers \cite{LISA}\cite{Membrive}. The range of altitudes that will be compared is the one between 10 and 25 km. Since the vertical axis of Figure \ref{fig:profile-olivier-membrive} is in pressure, the equivalent pressures for these altitudes range from approximately 200 hPa to 20 hPa.
\begin{itemize}
\item $CH_4$ distribution: There is good agreement between the two studies that the concentration around 10 km altitude is about 1800 ppb, and that it then starts decreasing gradually with altitude. This decrease seems to be faster above 17 km ($\sim$70 hPa), which would make this the region of major interest.
\item $CO_2$ distribution: The concentration around 10 km is approximately 390-400 ppm in both studies. The biggest variation in concentration is found between 10 and 17 km. The concentration of $CO_2$ seems to increase and then decrease again, so this would be the most interesting range to sample.
\item $CO$ distribution: Only one study with CO profiles has been presented here, so it cannot be compared with other studies. Analysing this single CO profile, it seems that the largest variation lies in the range 10-15 km, which should be the area of interest.
\end{itemize}
Based on the vertical distribution profiles obtained from past studies, it seems that our experiment should focus on sampling between 10 and 15 km for $CO$ and $CO_2$, but above 17 km for $CH_4$.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.1\textwidth]{appendix/img/profile-olivier-membrive.png}}
\end{align*}
\caption{Vertical Profiles in Black for $CO_2$ and $CH_4$. The Green Lines are High Resolution Forecasts \cite{Membrive}.}\label{fig:profile-olivier-membrive}
\end{figure}
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1\textwidth]{appendix/img/profile-lisa.png}}
\end{align*}
\caption{Vertical Profiles Comparison of AirCore and LISA Measurements of $CO_2$, $CH_4$ and CO Mole Fractions\cite{LISA}.}\label{fig:profile-lisa}
\end{figure}
\subsection{Sampling Flowrate}
% \subsubsection{Pressure-Drop Flowrate}
%Air viscosity at ambient temperature (25ºC) and atmospheric pressure (1 atm = 101325 Pa) is about $10^-5$. This value decreases with the temperature and also with the pressure, which means that it is negligible in Artic conditions provided by Esrange location and specially in high altitudes.
\subsubsection{Pump Efficiency}
For the air sampling process, the micro diaphragm gas pump \emph{850 1.2 KNDC B} from the KNF company will be used.
The TUBULAR Team has tested this pump under vacuum conditions at the IRF facilities in Kiruna. Table \ref{tab:pump-flowrate-efficiency} shows the obtained results. The tests proved that the pump remains operative down to $20\ hPa$.
\begin{table}[H]
\noindent\makebox[\columnwidth]{%
\scalebox{0.8}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Altitude} & \textbf{Pressure} & \textbf{Datasheet Flowrate} & \textbf{Datasheet Efficiency} & \textbf{Empirical Flowrate} & \textbf{Empirical Efficiency} \\ \hline
0\ km & 1013 hPa & 8 L/min & 100 \% & 4.35 L/min & 54.4 \% \\ \hline
0.5\ km & 925 hPa & 7 L/min & 87.5 \% & 4.26 L/min & 53.25 \% \\ \hline
1.5\ km & 850 hPa & 6 L/min & 75 \% & 4.16 L/min & 52 \% \\ \hline
2.3\ km & 760 hPa & 5 L/min & 62.5 \% & 3.88 L/min & 48.5 \% \\ \hline
3.1\ km & 680 hPa & 4 L/min & 50 \% & 3.61 L/min & 45 \% \\ \hline
4.6\ km & 560 hPa & 3 L/min & 37.5 \% & 3.11 L/min & 38.9 \% \\ \hline
6.4\ km & 450 hPa & 2 L/min & 25 \% & 2.61 L/min & 32.6 \% \\ \hline
8.3\ km & 320 hPa & 1 L/min & 12.5 \% & 2.12 L/min & 26.5 \% \\ \hline
10.7\ km & 230 hPa & 0 L/min & 0 \% & 1.50 L/min & 18.8 \% \\ \hline
12\ km & 194 hPa & 0 L/min & 0 \% & 1.22 L/min & 15.3 \% \\ \hline
17\ km & 88 hPa & 0 L/min & 0 \% & 0.47 L/min & 5.9 \% \\ \hline
20\ km & 55.29 hPa & 0 L/min & 0 \% & 0.27 L/min & 3.4 \% \\ \hline
24\ km & 30 hPa & 0 L/min & 0 \% & 0.13 L/min & 1.6 \% \\ \hline
30\ km & 11.97 hPa & 0 L/min & 0 \% & 0.07 L/min & 0.9 \% \\ \hline
\end{tabular}}}
\caption{Pump Flowrate/Efficiency According to the Datasheet and Tests.}
\label{tab:pump-flowrate-efficiency}
\end{table}
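As a quick cross-check of Table \ref{tab:pump-flowrate-efficiency}, the empirical efficiency is simply the measured flow rate divided by the nominal $8\ L/min$ datasheet flow rate at sea level, and the measured points can be interpolated to estimate the flow rate at an arbitrary sampling pressure. The short Python sketch below illustrates this; the interpolation over the logarithm of pressure is an assumption made here for illustration, not the procedure used by the TUBULAR Team.
\begin{verbatim}
# Sketch: pump flow rate vs. pressure from the empirical test points.
# The log-pressure interpolation is an assumption for illustration only.
import numpy as np

pressure_hpa = np.array([1013, 925, 850, 760, 680, 560, 450, 320,
                         230, 194, 88, 55.29, 30, 11.97])
flow_lpm     = np.array([4.35, 4.26, 4.16, 3.88, 3.61, 3.11, 2.61, 2.12,
                         1.50, 1.22, 0.47, 0.27, 0.13, 0.07])

# Empirical efficiency relative to the 8 L/min datasheet flow at sea level
efficiency_percent = 100.0 * flow_lpm / 8.0   # e.g. 4.35 / 8 -> 54.4 %

def flow_at_pressure(p_hpa):
    """Interpolated flow rate (L/min) at a given ambient pressure (hPa)."""
    order = np.argsort(pressure_hpa)          # np.interp needs ascending x
    return np.interp(np.log10(p_hpa),
                     np.log10(pressure_hpa[order]), flow_lpm[order])

print(round(float(flow_at_pressure(70.0)), 2))  # rough flow rate near 18 km
\end{verbatim}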
% \subsection{Sampling Strategy Tests}
% \subsubsection{Past Research Sampling Strategy Test}
% Some of the most important parameters to be determined in this experiment are vertical resolution, sample size, sampling time and sampling flow rate amongst others. The difficulty in determining them lies on the fact that they are all interrelated. For example, the vertical resolution depends on the vertical speed and the effective sampling time. The amount of air samples that can be collected in each sampling bag is a function of the sampling time and the sampling flow rate. This is the reason why testing the pump's performance will be helpful to make a decision.
% \smallskip
% The test that will be realized is based on previous research \cite{LISA}. The tested elements will be the pump, one sampling bag, the outlet valve and the electronics necessary to record data (pressure and temperature sensors, datalogger and batteries). The simplified version of the experiment is placed in a vessel where the pressure can be regulated by a vacuum pump in order to simulate the desired atmospheric pressures.
% \smallskip
% The procedure for the test will be as follows: reach the desired pressure in the chamber and then start sampling air for 153 seconds. Repeat this process for three different pressures: 31.5 hPa, 60.8 hPa and 117.7 hPa. The data for pressure and temperature is logged at 3 Hz. The result of this measurements is represented in Figure \ref{fig:sampling-time-volume}.
% \begin{figure}[H]
% \begin{align*}
% \noindent\makebox[\textwidth]{%
% \includegraphics[width=0.7\textwidth]{appendix/img/sampling-time-volume.png}}
% \end{align*}
% \caption{Sampling Time (s) - Sampled Volume (L at STP) \cite{LISA}.}\label{fig:sampling-time-volume}
% \end{figure}
% As it can be seen in Figure \ref{fig:sampling-time-volume}, there is a linear increase of the volume that corresponds to the first twenty seconds when the bag is expanding to its full size. Until then the pressure readings are constant but after that point is reached, the pressure inside the sampling bag starts to increase due to air compression. All the data points are calculated using the data logged from the sensors and the ideal gas law.
% \smallskip
% The next step is to use a non-linear least squares method to obtain an empirical model of the parameter named a(t) which is relating the volume at STP with the chamber pressure by the equation $V_{STP}=a(t)\cdot p_a$. The model is only valid for $t>19.7$ seconds which means that the sampling bag has reached total expansion. The fitted values for a(t) are represented in Figure \ref{fig:least-squares-a-fitting}. Once a(t) is obtained, Figure \ref{fig:vessel-p-volume} can be represented just to see the relationship between the vessel pressure and the sampled volume. Three arbitrary sampling times are chosen for this representation and an horizontal line represents the maximum pressure that the sampling bag can withstand. This implies another procedure during the test: fill the bag until the sealing breaks and calculate the differential pressure that was achieved between the inside and the outside.
% \begin{figure}[H]
% \begin{align*}
% \noindent\makebox[\textwidth]{%
% \includegraphics[width=0.7\textwidth]{appendix/img/least-squares-a-fitting.png}}
% \end{align*}
% \caption{Sampling Time (s) - a (L/hPa) \cite{LISA}.}\label{fig:least-squares-a-fitting}
% \end{figure}
% \begin{figure}[H]
% \begin{align*}
% \noindent\makebox[\textwidth]{%
% \includegraphics[width=0.7\textwidth]{appendix/img/vessel-p-volume.png}}
% \end{align*}
% \caption{Vessel Pressure (hPa) - Sampled Volume (L at STP)\cite{LISA}.}\label{fig:vessel-p-volume}
% \end{figure}
% The objective of the above explained test and the calculations that follow will be to obtain an empirical model that gives the sampled air volume as a function of time at any pressure level. This will be the tool to calculate vertical resolutions and expected sample size and it should be a graphic looking like the one in Figure \ref{fig:sampled-volume-atm-p}.
% \begin{figure}[H]
% \begin{align*}
% \noindent\makebox[\textwidth]{%
% \includegraphics[width=0.7\textwidth]{appendix/img/sampled-volume-atm-p.png}}
% \end{align*}
% \caption{Sampled Volume (L at STP) - Atmospheric Pressure (hPa) \cite{LISA}.}\label{fig:sampled-volume-atm-p}
% \end{figure}
% \subsubsection{Test Results}
% The above described test is Test 18 in Table \ref{tab:pump-low-pressure-test} the results of which can be found in Section \ref{subsection:pumplowpressuretest}
% \subsection{Heat transfer}
\subsection{Discussion of the Results}
\subsubsection{Computational Methods vs. Flight Measurements}
% Kalman filter (empiric data vs. mathematical model) Error estimation
\bigskip
\underline{Atmospheric Model}
%The International Standard Atmosphere (ISA) and the US Standard Atmosphere 1976 are pretty much in agreement with some differences in the temperature distribution at higher altitudes.
%In the International Standard Atmosphere model, air is assumed to be dry, clean and to have constant composition. The function limitation of this is that does not take into account humidity effects or differences in pressure due to wind.
\smallskip
In this section, the data from past BEXUS flights is compared with the 1976 US Standard Atmosphere for validation purposes. Figure \ref{fig:pressure22} compares the changes in pressure over altitude for the BEXUS flights with the atmospheric model. It can be seen that the flight data sets are in good agreement with the atmospheric model.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.3\textwidth]{appendix/img/allBX_z_p_hPa.jpg}}
\end{align*}
\caption{Comparative of Pressure Variation Over the Altitude During Different BEXUS Flights with the US Standard Atmosphere (1976).}\label{fig:pressure22}
\end{figure}
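For reference, the pressure curve of the US Standard Atmosphere 1976 used in this comparison can be reproduced from its layer-wise lapse rates. The following Python sketch covers the altitudes of interest (0 to 32 km); it is a minimal illustration of the model, not the implementation used to produce the figures.
\begin{verbatim}
# Sketch: US Standard Atmosphere 1976 pressure (Pa), valid 0-32 km.
import math

G, R = 9.80665, 287.053   # gravity (m/s^2), specific gas constant of air

def usa1976_pressure(h):
    """Pressure (Pa) at geopotential altitude h (m)."""
    if h <= 11000.0:                 # troposphere, lapse rate -6.5 K/km
        T = 288.15 - 0.0065 * h
        return 101325.0 * (T / 288.15) ** (G / (0.0065 * R))
    if h <= 20000.0:                 # lower stratosphere, isothermal 216.65 K
        p11 = usa1976_pressure(11000.0)
        return p11 * math.exp(-G * (h - 11000.0) / (R * 216.65))
    p20 = usa1976_pressure(20000.0)  # 20-32 km, lapse rate +1.0 K/km
    T = 216.65 + 0.001 * (h - 20000.0)
    return p20 * (T / 216.65) ** (-G / (0.001 * R))

print(round(usa1976_pressure(25000.0) / 100.0, 1))   # ~25 hPa at 25 km
\end{verbatim}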
Figure \ref{fig:temperature22} below shows the changes in temperature over altitude for all the BEXUS flights together with the atmospheric model. It can be seen that there is quite a large deviation in temperature above $20\ km$ of altitude between the BEXUS flights and the US Standard Atmosphere 1976 model. This deviation is systematic, since it appears in all flights, but it is not surprising either, because most atmospheric models fail to precisely predict the temperatures at higher altitudes.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/allBX_z_T2.jpg}}
\end{align*}
\caption{Comparative of Temperature Variation Over the Altitude During Different BEXUS Flights with the US Standard Atmosphere (1976).}\label{fig:temperature22}
\end{figure}
\bigskip
\underline{Descent Curve}
\smallskip
Again, in this section, the trajectories of past BEXUS flights were compared with the mathematical model for validation purposes, as shown in Figure \ref{fig:bexustrajectories}. Overall, BEXUS flights 20, 23, and 24 are in good agreement with the mathematical model. Some deviations exist between the mathematical model and BEXUS flights 21 and 22, mostly in the last $5\ km$ of the flight.
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.1\textwidth]{appendix/img/bexus20mathmodel.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.1\textwidth]{appendix/img/bexus21mathmodel.png}
\end{subfigure}}
\noindent\makebox[\textwidth]{%
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.1\textwidth]{appendix/img/bexus22mathmodel.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.1\textwidth]{appendix/img/bexus23mathmodel.png}
\end{subfigure}}
\centering
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.1\textwidth]{appendix/img/bexus24mathmodel.png}
\end{subfigure}
\caption{Comparative of the Altitude Over Time During the BEXUS Flights 20, 21, 22, 23, 24 with the Mathematical Model.}\label{fig:bexustrajectories}
\end{figure}
\bigskip
\underline{Velocity Profile}
\smallskip
Here, the mathematical model was compared with the velocity profiles during the flights. It can be seen that the mathematical model in general follows the velocity profile, with some minor deviations during the Descent Phase, which means that the estimation is quite reliable.
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=8cm]{appendix/img/velocity20mathmodel.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=8cm]{appendix/img/velocity21mathmodel.png}
\end{subfigure}}
\noindent\makebox[\textwidth]{%
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=8cm]{appendix/img/velocity22mathmodel.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=8cm]{appendix/img/velocity23mathmodel.png}
\end{subfigure}}
\centering
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=8cm]{appendix/img/velocity24mathmodel.png}
\end{subfigure}
\caption{Comparative of the Velocity Over Altitude During the BEXUS Flights 20, 21, 22, 23, 24 with the Mathematical Model.}
\end{figure}
\subsubsection{Mass Effects in the Descent Curve}
Figure \ref{fig:masseffects} shows how the descent time after the cut-off phase changes with different gondola mass values. The heavier the payload, the sooner it will land. For example, if the gondola weighs $250\ kg$, it will land approximately $25$ minutes after cut-off, while it would take approximately $40$ minutes to land if it weighs $100\ kg$.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1\textwidth]{appendix/img/masseffectsdescentcurve.png}}
\end{align*}
\caption{Mass Effects.}\label{fig:masseffects}
\end{figure}
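The behaviour in Figure \ref{fig:masseffects} can be reproduced qualitatively by assuming the gondola falls at its terminal velocity under parachute drag, which scales with the square root of the mass. The sketch below integrates the descent time over altitude; the drag area and the exponential density profile are illustrative assumptions and not the parameters of the mathematical model, although the drag area was chosen so that the sea-level terminal speed is close to the $8\ m/s$ quoted later in this appendix.
\begin{verbatim}
# Sketch: descent time after cut-off vs. gondola mass, assuming terminal
# velocity under parachute drag all the way down. CD*A and the exponential
# density profile are illustrative assumptions only.
import math

G = 9.80665
RHO0, H_SCALE = 1.225, 8500.0   # sea-level density (kg/m^3), scale height (m)
CDA = 60.0                      # hypothetical parachute drag area (m^2)

def descent_time(mass, h_cutoff=25000.0, dh=10.0):
    """Integrate dt = dh / v_terminal(h) from cut-off down to the ground."""
    t, h = 0.0, h_cutoff
    while h > 0.0:
        rho = RHO0 * math.exp(-h / H_SCALE)
        v = math.sqrt(2.0 * mass * G / (rho * CDA))   # terminal velocity
        t += dh / v
        h -= dh
    return t / 60.0                                    # minutes

for m in (100, 150, 200, 250):
    print(m, "kg ->", round(descent_time(m), 1), "min")
\end{verbatim}
With these assumptions the heavier payloads land noticeably earlier, in line with the trend shown in the figure.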
\subsubsection{Discrete Sampling Volumes}
Figure \ref{fig:samplingvolume} supports the TUBULAR Team's decision to use a pump if sampling at high altitudes is intended, even though it introduces a single point failure risk. At $21\ km$ of altitude, the minimum amount of air that would need to be sampled, in order to ensure that there is enough left for analysis on the ground, would be $2.4\ L$. Considering the low pressure at this altitude, and the time it would take to fill the bag, it would be impossible to fulfil the experiment's objectives without using a pump. Moreover, without a pump, sampling at altitudes higher than $22\ km$, and also during the Ascent Phase, would be impossible.
\begin{figure}[H]
\begin{align*}
\noindent\makebox[\textwidth]{%
\includegraphics[width=1.2\textwidth]{appendix/img/new-data-points-13May.png}}
\end{align*}
\caption{Minimum Sampling Volume at Each Altitude to Obtain Enough Air to Perform a Proper Analysis (180 mL at Sea Level).}\label{fig:samplingvolume}
\end{figure}
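The minimum sampling volumes behind Figure \ref{fig:samplingvolume} follow from the ideal gas law: the $180\ mL$ needed for analysis at sea-level conditions correspond to a much larger ambient volume at low pressure. A minimal Python sketch of this conversion is given below; it uses a rough standard-atmosphere pressure and temperature, so its values will not match the figure exactly.
\begin{verbatim}
# Sketch: ambient volume to sample at altitude so that 180 mL remain at
# sea-level conditions (ideal gas law). Pressure/temperature below come from
# a rough isothermal-layer approximation, so values differ from the figure.
import math

P0, T0 = 101325.0, 288.15     # sea-level reference pressure (Pa), temperature (K)
V_NEEDED = 0.180              # litres required for analysis at ground

def p_T_at(h):
    """Rough pressure (Pa) and temperature (K) between 11 and 25 km."""
    T = 216.65                # isothermal-layer approximation
    p = 22632.0 * math.exp(-9.80665 * (h - 11000.0) / (287.053 * T))
    return p, T

for h_km in (12, 15, 18, 21, 24):
    p, T = p_T_at(h_km * 1000.0)
    v_ambient = V_NEEDED * (P0 / p) * (T / T0)   # litres of ambient air
    print(h_km, "km ->", round(v_ambient, 2), "L")
\end{verbatim}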
\subsubsection{Limitations of the Bag Sampling Method}
\bigskip
\underline{Roof Altitude Effect}
\smallskip
%Since the pump's flow rate at high altitudes was not known yet,
For a hypothetical study case, an ideal and continuous flow rate of $1\ L/min$ was used. The diagrams below show that even if the sampling starts at $26\ km$, $30\ km$, or $40\ km$, the number of filled bags would still be the same. This happens due to the low pressure conditions at such altitudes, which do not allow a bag to be filled quickly, and especially due to the low air density, which requires a much larger volume of air to be sampled. Of course, the number of bags that can be filled depends on the pump's efficiency at high altitudes. Hence, a gondola cut-off altitude above about $26\ km$ would not affect the experiment's outcome.
\begin{figure}[H]
\noindent\makebox[\textwidth]{%
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.2\textwidth]{appendix/img/samplevolume26km.png}
\caption{Starting of Sampling at 26 $km$.}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.2\textwidth]{appendix/img/samplevolume30km.png}
\caption{Starting of Sampling at 30 $km$.}
\end{subfigure}}
\centering
\begin{subfigure}{0.45\textwidth}
\centering\includegraphics[width=1.2\textwidth]{appendix/img/samplevolume40km.png}
\caption{Starting of Sampling at 40 $km$.}
\end{subfigure}
\caption{Bag's Sampling System Limitations.}\label{fig:limits}
\end{figure}
\bigskip
\underline{One Single Pump}
\smallskip
Since the experiment uses a single pump, it is not possible to sample more than one bag at the same time. For the above hypothetical case, the maximum number of filled bags was five, assuming continuous sampling. However, this is not the case in practice. Before sampling a bag, the system has to be flushed. Then the sampling of a bag begins. After filling one bag, the system has to be flushed again before starting to sample a second bag. In that case, filling five bags, as in the hypothetical scenario, would be practically impossible.
\newpage
\subsection{Conclusions}
%\subsection{Reliability of the Simulation}
% strong points:
% weak points: Trace Gases Distribution
\subsubsection{Sampling Strategy}
After testing the pump in a low pressure environment, an overall idea of the pump's performance at high altitudes is now available and an approximation of the sampling strategy is possible.
The total weight of the BEXUS 26 gondola is approximately 266.55 kg, and the balloon is expected to reach 25.8 km altitude, following almost the same trajectory as the BEXUS 24 flight, as shown in Figure \ref{fig:bexustrajectories}.
This serves the objectives of the TUBULAR experiment since, as indicated in Section \ref{tracegases}, the altitudes with the largest differences in trace gas concentrations are between 10 and 25 km. These are the altitudes where the sampling will be done. Sampling six bags in total is enough to fulfil the objectives of the experiment and is also feasible. Two bags will be sampled during the Ascent Phase and four during the Descent Phase. The ascent speed of the gondola, as shown in Figure \ref{fig:altitudevelocity}, is estimated to be 5 m/s. A velocity of this order makes sampling of two bags possible while still achieving a good resolution. It is important to mention here that the pressure inside the two bags sampled during the Ascent Phase shall not exceed 140 hPa, since their volume will increase with decreasing ambient pressure and they could burst. On the other hand, during the Descent Phase the four remaining bags shall be filled with the full 3 L. This is because their volume will decrease with increasing pressure and it has to be ensured that there will be enough sample left for analysis.
The sampling of the first bag will start at 18 km of altitude. The minimum sampling time is estimated to be 29 s with an achieved resolution of 143 m. The second bag will be sampled at 21 km of altitude, and it will take 43 s to fill the minimum desired volume of air, with a resolution of 214 m. Before each sampling, one minute of flushing of the AAC system is taken into account, during which the gondola will cover a distance of 300 m.
During the Descent Phase, and considering a descent speed of 8 m/s, the sampling of the third bag will start at 17.5 km. The minimum sampling time is estimated to be 27 s with a resolution of 216.5 m. The fourth bag will be sampled at 16 km for at least 20 s with a resolution of 156 m. The fifth bag will be sampled at 14 km for at least 14.3 s with a resolution of 115 m. The sampling of the last bag will start at 12 km with a minimum sampling time of 9 s and a resolution of 71 m. Again, one minute of flushing is taken into account between the sampling of each bag.
The flow rates of the pump at each sampling altitude were taken from Table \ref{tab:normal-flow-rates}.
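The quoted sampling times and resolutions follow directly from the required ambient volume, the pump flow rate at that altitude, and the vertical speed of the gondola, as sketched below. The flow rate used in the example is only a placeholder; the actual values come from Table \ref{tab:normal-flow-rates}.
\begin{verbatim}
# Sketch: sampling time and vertical resolution for one bag.
# The flow rate below is a placeholder, not a value from the flow-rate table.
def sampling_time_and_resolution(v_ambient_l, flow_lpm, vertical_speed):
    """v_ambient_l: ambient volume to collect (L), flow_lpm: pump flow (L/min),
    vertical_speed: gondola speed (m/s). Returns (time in s, resolution in m)."""
    t = 60.0 * v_ambient_l / flow_lpm
    return t, t * vertical_speed

# Example: an ascent bag sampled at 5 m/s with placeholder inputs
t, res = sampling_time_and_resolution(v_ambient_l=2.4, flow_lpm=5.0,
                                      vertical_speed=5.0)
print(round(t, 1), "s,", round(res, 1), "m")
\end{verbatim}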
%Figure \ref{fig:samplingvolume} shows the minimum amount of air that needs to be sampled at each altitude in order to have enough left for analysis on ground. At 17 km the volume needed is approximately 2.5L. So, it would take 150 seconds and 900m to sample the first bag. Starting the sampling of the second bag at 18km, it would take 156sec and 936m to fill it with the minimum volume of air, approximately 2.6L. For the third bag, starting the sampling from 20 km it would take 168 second and 1.008 km to sample 2.8 L of air. The last bag would need 300 second and 1.8km to sample almost 5 L of air, starting from 22 km. In total, with one minute of flushing before each sampling of a bag, is 1014 seconds or 28 minutes.
%---------------------------------------------
\subsubsection{Discussion of the Results}
Overall, the mathematical model is in good agreement with the data from the past BEXUS flights, as well as with the atmospheric model used for the Arctic region.
The preparation of this document helped the TUBULAR Team to cross-check some theoretical values that are important for the layout and planning of the experiment. Tables \ref{table:pre-flight} and \ref{table:post-flight} show that the data estimated before each flight are quite close to the real data obtained from the flights, which helped the TUBULAR Team define the experiment's parameters with higher accuracy.
In order to make a sampling plan, it is important to know the duration of each phase. Figure \ref{fig:trajectories} shows the trajectories of the different BEXUS flights, giving the TUBULAR Team a general idea of what the trajectory of the flight can look like and how the duration of each phase changes with the maximum altitude that the gondola reaches.
The velocity profile, Figure \ref{fig:velocity}, is of high importance since the velocity during the Ascent and Descent Phases will determine the resolution of the samples. In general, the velocity values are in agreement with the BEXUS manual, with an ascent speed of 5 m/s and a descent speed that fluctuates after cut-off before stabilizing at 8 m/s in the last kilometres of the flight. Another important point is the TUBULAR Team's decision to sample during the Ascent Phase as well, and not only during the Descent Phase. As seen in Figure \ref{fig:velocity}, the gondola is turbulent after cut-off, with velocities up to 83 m/s, and needs roughly 6 km before its velocity stabilizes, as Figure \ref{fig:altitudevelocity} indicates. Hence, the altitudes at which the gondola will be turbulent will be covered by sampling during the Ascent Phase. This will not affect the comparison with the CAC, which will sample during the Descent Phase only, since the horizontal displacement of the gondola is much smaller than the vertical one.
Atmospheric conditions play a crucial role for the TUBULAR experiment. The TUBULAR Team should know the pressure at each altitude, since the pressure is the parameter that will trigger the sampling of the bags. What is more, the pressure will determine the performance of the pump, and it is crucial to know under what pressures the pump needs to be tested depending on the sampling altitude. The temperature is of high importance too, and is the trickiest parameter to predict, especially at high altitudes. The TUBULAR Team should be able to keep the temperature of the pump within its working temperature range in order to ensure that the pump will start working. To do so, the air temperature must be known at each altitude, which will help the TUBULAR Team come up with a good thermal plan.
The sampling altitude range will not be chosen randomly. The idea is to find the altitude range where the trace gases show the largest differences in concentration. In Section \ref{tracegases}, some theoretical trace gas concentration values were presented, as well as results from past research papers. According to them, the most interesting area to sample is between 10 and 25 km of altitude. The TUBULAR Team plans to sample between 17 and 22 km during the Ascent Phase and from 17 down to 10 km during the Descent Phase.
Additionally, the sampling software revealed some limitations of the sampling system, and also which parameters should be taken into account for the experiment's layout and which should not.
The weight of the gondola will affect the maximum altitude that the balloon will reach and the time needed for the gondola to land, but it does not contribute to the decision of how many bags will be used.
The decision of the TUBULAR Team to use a pump was questioned at the beginning as a single point failure risk. However, this decision is justified by the need to sample during the Ascent Phase, which would otherwise not be possible. Figure \ref{fig:samplingvolume} supports the use of a pump, because without one, sampling at 22 km of altitude would be impossible considering the low pressure and the time it would take to fill a bag.
Note that even with the pump, some limitations still exist. The sampling of the bags cannot be continuous, since the system has to be flushed before sampling a bag. Furthermore, the flow rate of the pump will be lower at high altitudes than it is on the ground, due to pressure differences. Figure \ref{fig:limits} points out that even with an ideal flow rate of 1 L/min and continuous sampling, it is not possible to sample more than five bags, because it takes a long time to sample a bag under high altitude atmospheric conditions. It also makes clear why the maximum altitude that the gondola will reach does not affect the experiment's outcome. As the gondola ascends, the pressure gets lower and it takes more time to sample a bag. So, sampling more bags would not be possible even if the balloon reached a higher altitude. The same applies for the Descent Phase and the cut-off altitude.
In conclusion, whilst the initial idea was to sample a total of sixteen bags in order to have more samples to compare with the continuous vertical profile obtained by the CAC, this document shows that this is not feasible. By taking into account all the different parameters, it became clear which of them are important and which are not. Parameters like the gondola's velocity, the pressure at different altitudes, and the pump's flow rate will determine the outcome of the TUBULAR experiment, the number of bags that will be used, as well as the sampling altitudes. Parameters like the gondola's weight or the maximum altitude that the balloon will reach do not affect the experiment's outcome and play a secondary role.
% Note that after the pump tests, the sampling altitudes may change.
\newpage
\section{Experiment Thermal Analysis} \label{sec:appI}
\subsection{Component Temperature Ranges}
Table \ref{tab:thermal-table} below covers the thermal ranges of all components included in the experiment's flight stage, as listed in Section \ref{sec:experiment-components}:
\begin{longtable}{|m{1cm}|m{3.5cm}|m{1.3cm}|m{1.3cm}|m{1.4cm}|m{1.3cm}|m{1.3cm}|m{1.3cm}|}
\hline
\multirow{2}{*}{\textbf{ID}} & \multirow{2}{*}{\textbf{Components}} & \multicolumn{2}{l|}{\textbf{Operating ($\degree{C}$)}} & \multicolumn{2}{l|}{\textbf{Survivable ($\degree{C}$)}} & \multicolumn{2}{l|}{\textbf{Expected ($\degree{C}$)}} \\ \cline{3-8} & & Min. & Max. & Min. & Max. & Min. & Max. \\ \hline
E1 & Arduino Due & -40 & 85 & -60 & 150 & -15.7 & 54.0 \\ \hline
E2 & Ethernet Shield & -40 & 85 & -65 & 150 & -15.7 & 54.0 \\ \hline
E3 & Miniature diaphragm air pump & 5 & 40 & -10 & 40 & 10 & 34.9 \\ \hline
E4 & Pressure Sensor & -40 & 85 & -40 & 125 & -15.7 & 54.0 \\ \hline
E5 & Sampling Valve (inlet and outlet 1/8'' female) & -20 & 68 & -20\footnote{If survivable temperatures were not given, operating temperatures were used as survivable limits.\label{fn:erik}} & 68\textsuperscript{\ref{fn:erik}} & -15 & 20 \\ \hline
E6 & Airflow sensor AWM43300V & -20 & 70 & -20\textsuperscript{\ref{fn:erik}} & 70\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
E7 & Heater ($12.7\times 50.8 mm$) & -200 & 200 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -20 & 36 \\ \hline
%E8 & Voltage Regulator & -40 & 125 & -40\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -30.62 & 34.93 \\ \hline
E9 & Temperature Sensor & -55 & 125 & -65 & 150 & -19.7 & 43 \\ \hline
E10 & DCDC 24 V & -40 & 85 & -55 & 125 & -15.7 & 54.0 \\ \hline
E12 & Micro SD & -25 & 85 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E13 & Logic CAT5E & -55 & 60 & -55\textsuperscript{\ref{fn:erik}} & 60\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
% E14 & Resistors (33, 150 and 100 ohm) & -55 & 155 & (-55)\textsuperscript{\ref{fn:erik}} & (155)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
% E15 & Capacitors $(0.1 \mu$ F and $10 \mu$ F) & -30 & 85 & -55 & 125 & -55\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E16 & MOSFET for current control & -55 & 175 & -55 & 175 & -15.7 & 54.0 \\ \hline
E17 & Diodes for DCDC converters & -65 & 175 & -65\textsuperscript{\ref{fn:erik}} & 175\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E18 & 3.3V LED & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E19 & 15-pin D-SUB Female connector with pins & -55 & 120 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E20 & 9-pin D-SUB Female connector with pins & -55 & 120 & -200\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E21 & 9-pin D-SUB Female connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E22 & 9-pin D-SUB Male connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E23 & 15-pin D-SUB Male connector with soldering cups & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E24 & 9-pin D-SUB backing & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120 & -15.7 & 54.0 \\ \hline
E25 & 15-pin D-SUB backing & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120 & -15.7 & 54.0 \\ \hline
% E26 & Wall mounting bolts & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
%E27 & D-SUB cable CAC to AAC & -40 & 85 & -55 & 125 & -40 & 40 \\ \hline
E28 & 3.3 Zener diode & -65 & 175 & -65\textsuperscript{\ref{fn:erik}} & 175\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E29 & Male connector on PCB & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85 & -15.7 & 54.0 \\ \hline
E30 & Female connector from wall & -40 & 85 & -40\textsuperscript{\ref{fn:erik}} & 85 & -50.7 & 15 \\ \hline
E31 & Grounding contact & -55 & 125 & -55\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -50.7 & 15 \\ \hline
E32 & Logic CAT5 E-link for inside box &-55 & 60 & -55\textsuperscript{\ref{fn:erik}} & 60\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
E33 & Signal Wires & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
E34 & Flushing valve (inlet and outlet 1/8'' female) & -20 & 68 & -20\textsuperscript{\ref{fn:erik}} & 68 & -7.4 & 25.8 \\ \hline
E35 & Valves manifold (outlet 1/8'' female) & -10 & 50 & -10\textsuperscript{\ref{fn:erik}} & 50\textsuperscript{\ref{fn:erik}} & 3 & 18 \\ \hline
E36 & Power wire black & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
% E44 & Heat shrinking tube 2.5 x 1mm & -55 & 125 & (-55)\textsuperscript{\ref{fn:erik}} & (125)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
%E45 & 25-pin D-SUB female connector with pins & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
%E46 & 25-pin D-SUB male connector with soldering cups & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
%E47 & 25-pin D-SUB backing & -10 & 90 & -10\textsuperscript{\ref{fn:erik}} & 90\textsuperscript{\ref{fn:erik}} & -8.77 & 24.01 \\ \hline
E48 & Power wire red & -60 & 200 & -60\textsuperscript{\ref{fn:erik}} & 200\textsuperscript{\ref{fn:erik}} & -34 & 15 \\ \hline
% E49 & Potentiometer 1k ohm & -55 & 125 & (-55)\textsuperscript{\ref{fn:erik}} & (120)\textsuperscript{\ref{fn:erik}} & TBD\textsuperscript{\ref{fn:ivan}} & TBD\textsuperscript{\ref{fn:ivan}} \\ \hline
E50 & 6-pin male & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
E51 & 8-pin male single row header& -40 & 105 & -40\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
E52 & 10-pin male single row header & -55 & 105 & -55\textsuperscript{\ref{fn:erik}} & 105\textsuperscript{\ref{fn:erik}} & -8.8 & 24.0 \\ \hline
E53 & 36-pin male double row header & -40 & 105 & -40 & 125 & -8.8 & 24.0 \\ \hline
E54 & 12 V DC/DC converter & -40 & 85 & -55 & 125 & -15.7 & 54.0 \\ \hline
E55 & 50 k$\Omega$ Potentiometer & -55 & 125 & -55\textsuperscript{\ref{fn:erik}} & 125\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E56 & Static pressure sensor & -40 & 120 & -40\textsuperscript{\ref{fn:erik}} & 120\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
E57 & Connector for static pressure sensor & -25 & 80 & -25\textsuperscript{\ref{fn:erik}} & 80\textsuperscript{\ref{fn:erik}} & -8.8 & 34.9 \\ \hline
E58 & PCB & -50 & 110 & -50\textsuperscript{\ref{fn:erik}} & 110\textsuperscript{\ref{fn:erik}} & -15.7 & 54.0 \\ \hline
E59 & Pressure Sensor PCB & -50 & 110 & -50\textsuperscript{\ref{fn:erik}} & 110\textsuperscript{\ref{fn:erik}} & -50 & 39 \\ \hline
\caption{Table of Component Temperature Ranges.}
\label{tab:thermal-table}
\end{longtable}
\raggedbottom
\raggedbottom
\subsection{Thermal Equations}
\subsubsection{Variables and Tables}
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Variable} & \textbf{Description} & \textbf{Unit} & \textbf{Value} \\ \hline
$\alpha_{Al}$ & Absorption of aluminum & - & 0.3 \\ \hline
S & Solar constant & $\frac{W}{m^2}$ & 1362 \\ \hline
$A_{Sun}$ & Area affected by the sun & $m^2$ & 0.28 \\ \hline
Albedo & Albedo coefficient & - & 0.15 \\ \hline
$A_{Albedo}$ & Area affected by the albedo & $m^2$ & 0.65 \\ \hline
$\varepsilon_{Earth}$ & Emissivity of Earth & - & 0.95 \\ \hline
$A_{IR}$ & Area affected by the IR flux & $m^2$ & 0.65 \\ \hline
$IR_{25km}$ & Earth IR flux at 25 km & $\frac{W}{m^2}$ & 220 \\ \hline
P & Dissipated power from electronics & W & varies \\ \hline
h & Convection heat transfer constant & $\frac{W}{m^2 \cdot K}$ & 18 \\ \hline
K & Scaling factor for convection & - & varies \\ \hline
$A_{Convection}$ & Area affected by the convection & $m^2$ & 1.3 \\ \hline
$\sigma$ & Stefan-Boltzmann constant & $\frac{W}{m^2 \cdot K^4}$ & $5.67051 \cdot 10^{-8}$ \\ \hline
$A_{Radiation}$ & Radiating area & $m^2$ & 1.3\\ \hline
$\varepsilon_{Al}$ & Emissivity of aluminum & - & 0.09 \\ \hline
$T_{Out}$ & Temperature wall outside & $K$ & varies \\ \hline
$T_{Inside}$ & average uniform temperature inside & $K$ & varies \\ \hline
$T_{Ambient}$ & Ambient temperature outside & $K$ & varies \\ \hline
$T_{Ground}$ & Temperature of the ground & $K$ & 273 \\ \hline
$k_{Al}$ & Thermal conductivity of aluminum & $\frac{W}{m\cdot K}$ & 205 \\ \hline
$k_{PS}$ & Thermal conductivity of polystyrene foam & $\frac{W}{m\cdot K}$ & 0.03 \\ \hline
$L_{Al}$ & Thickness of aluminum sheeting & $m$ & 0.0005 \\ \hline
$L_{PS}$ & Thickness of polystyrene foam & $m$ & varies \\ \hline
$P_{Ground}$ & Pressure at ground & $Pa$ & $101.33 \cdot 10^3$ \\ \hline
$P_{25km}$ & Pressure at $25 km$ & $Pa$ & $2.8 \cdot 10^3$ \\ \hline
\end{tabular}
\caption{Variables Used in Thermal Calculation.}
\label{tab:thermal-variables}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ll}
\hline
\multicolumn{1}{|l|}{\textbf{Wall part}} & \multicolumn{1}{l|}{\textbf{Thickness (m)}} \\ \hline
\multicolumn{1}{|l|}{Aluminum sheet} & \multicolumn{1}{l|}{0.0005} \\ \hline
& \\ \cline{1-1}
\multicolumn{1}{|l|}{AAC (Styrofoam)} & \\ \hline
\multicolumn{1}{|l|}{Vertical} & \multicolumn{1}{l|}{0.02} \\ \hline
\multicolumn{1}{|l|}{Horizontal} & \multicolumn{1}{l|}{0.02} \\ \hline
\multicolumn{1}{|l|}{Top/Bottom} & \multicolumn{1}{l|}{0.03} \\ \hline
& \\ \cline{1-1}
\multicolumn{1}{|l|}{CAC (Styrofoam)} & \\ \hline
\multicolumn{1}{|l|}{Horizontal towards AAC} & \multicolumn{1}{l|}{0.02} \\ \hline
\multicolumn{1}{|l|}{All other walls} & \multicolumn{1}{l|}{0.05} \\ \hline
\end{tabular}
\caption{The Different Wall Thicknesses Used for AAC and CAC.}
\label{tab:Wall-thickness-AAC-CAC}
\end{table}
%\begin{table}[H]
%\centering
%\caption{Dissipated power at the different stages.}
%\label{tab:dissipated-power-thermal}
%\begin{tabular}{|l|l|l|}
%\hline
%\multirow{2}{*}{Critical stage} & \multicolumn{2}{l|}{Dissipated power (W)} \\ \cline{2-3}
% & Worst case & Average case \\ \hline
%Launch pad & 7.589 & 5.083 \\ \hline
%Early ascent & 11.499 & 8.993 \\ \hline
%Sampling ascent & 13.499 & 13.397 \\ \hline
%Float & 11.499 & 8.993 \\ \hline
%Sampling descent & 13.499 & 13.397 \\ \hline
%Shutdown descent & 2.167 & 2167 \\ \hline
%Landed & 0 & 0 \\ \hline
%\end{tabular}
%\end{table}
%The difference in dissipated power is depending if heaters need to be on or not.
\subsection{Thermal Calculations in MATLAB}
For the MATLAB calculations, a few assumptions were made. They were as follows:
\begin{itemize}
\item Taking the average of the MATLAB calculations with and without sunlight.
\item Calculating the average temperature on the outside wall of the experiment.
\item Assuming the inner temperature at the bags section is uniform.
\item Ignoring the pipes letting cold air into the experiment.
\item Assuming no interference between the two experiment boxes.
\item Assuming all conduction is uniform from the inside.
\item Assuming steady flow through the walls from conduction.
\item Assuming radiation and convection on 6 walls instead of 5.
\end{itemize}
\subsubsection{Solar flux and Albedo}
The albedo is the solar flux reflected from the Earth, so it was put into the same equation as the solar flux. It was assumed that the sun hit two sides of the experiment at a $45\degree$ angle at all times while over 10 km altitude. In the middle of October, at the time of launch, the sun was expected to hit the experiment with a maximum inclination of $15\degree$ from the horizon.
\begin{equation*}
Q_{Sun+Albedo} = \alpha_{Al}\cdot S \cdot cos(15) \cdot (A_{Sun} \cdot cos(45) + Albedo \cdot A_{Albedo})
\end{equation*}
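Evaluating this expression with the constants of Table \ref{tab:thermal-variables} gives the combined solar and albedo load; the short sketch below is only a numerical check of the equation above.
\begin{verbatim}
# Sketch: evaluating Q_Sun+Albedo with the constants from the variables table
import math

alpha_al, S = 0.3, 1362.0            # absorptivity of aluminium, solar constant
A_sun, albedo, A_albedo = 0.28, 0.15, 0.65

Q_sun_albedo = alpha_al * S * math.cos(math.radians(15.0)) \
               * (A_sun * math.cos(math.radians(45.0)) + albedo * A_albedo)
print(round(Q_sun_albedo, 1), "W")   # roughly 117 W
\end{verbatim}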
\subsubsection{Conduction}
For calculating the outer wall temperature, the assumption of steady flow through the walls was used.
\begin{equation*}
Q_{Conduction} = [\text{Steady flow through wall}] = \text{Dissipated power} = P
\end{equation*}
\subsubsection{Earth IR flux}
The Earth IR flux is the flux that comes from the Earth radiating as a black body. It was calculated from the determined IR flux at ground level and then scaled to the altitude the experiment would reach. The following equations were found in \cite{BalloonAscent}:
\begin{gather*}
IR_{Ground} = \varepsilon_{earth} \cdot \sigma \cdot T_{ground}^4 \\
\tau_{atmIR} = 1.716 - 0.5\cdot \Bigg[e^{-0.65\frac{P_{25km}}{P_{ground}}} + e^{-0.95\frac{P_{25km}}{P_{ground}}}\Bigg] \\
IR_{25km} = \tau_{atmIR} \cdot IR_{Ground}
\end{gather*}
After the IR flux was calculated for the float altitude, it was put into the following equation.
\begin{equation*}
Q_{IR} = \varepsilon_{earth} \cdot A_{IR} \cdot IR_{25km}
\end{equation*}
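The sketch below evaluates these equations with the constants of Table \ref{tab:thermal-variables}; it reproduces the $IR_{25km}$ value of approximately $220\ \frac{W}{m^2}$ listed in that table and the corresponding heat input $Q_{IR}$.
\begin{verbatim}
# Sketch: Earth IR flux scaled to 25 km, following the equations above
import math

SIGMA = 5.67051e-8                   # Stefan-Boltzmann constant
eps_earth, T_ground = 0.95, 273.0
P_25km, P_ground = 2.8e3, 101.33e3   # Pa
A_IR = 0.65                          # m^2

IR_ground = eps_earth * SIGMA * T_ground**4
tau = 1.716 - 0.5 * (math.exp(-0.65 * P_25km / P_ground)
                     + math.exp(-0.95 * P_25km / P_ground))
IR_25km = tau * IR_ground
Q_IR = eps_earth * A_IR * IR_25km

print(round(IR_25km), "W/m^2")       # ~220 W/m^2
print(round(Q_IR, 1), "W")
\end{verbatim}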
\subsubsection{Radiation}
It was assumed that the experiment would experience radiation on all 6 sides. In reality it would experience radiation on only 5 sides, because the CAC box will be in contact with one of the AAC box's sides. It was decided to keep 6 sides in the calculations in order to compensate for having no holes to let cold air in to the pump.
\begin{equation*}
Q_{Radiation} = \sigma \cdot \varepsilon_{Al} \cdot A_{Radiation} \cdot (T_{Out}^4 - T_{Ambient}^4 )
\end{equation*}
\subsubsection{Convection}
At an altitude of 25 km the air density is far lower than at sea level. This therefore gave a scaling factor $K$ that had to be taken into account when calculating the convection; $K$ can be seen in Table \ref{tab:heat-loss} for different altitudes.
\begin{equation*}
Q_{Convection} = h \cdot K \cdot A_{Convection} \cdot (T_{Out} - T_{Ambient})
\end{equation*}
The equation for approximating the heat transfer coefficient for air was outlined as:
\begin{equation*}
h = 10.45 - v + 10\cdot\sqrt{v}
\end{equation*}
Where $v$ is the velocity of the fluid medium.
As the balloon was expected to rise at approximately $5\ m/s$ for the duration of the Ascent Phase, the starting value for the convective heat transfer coefficient $h$ was expected to be $27.811$, assuming negligible wind currents perpendicular to the direction of ascent.
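As a quick check of this value, the approximation above can be evaluated directly:
\begin{verbatim}
# Sketch: convective heat transfer coefficient vs. vertical speed
import math

def h_air(v):
    """Approximate convective coefficient for air, W/(m^2 K)."""
    return 10.45 - v + 10.0 * math.sqrt(v)

print(round(h_air(5.0), 3))   # 27.811 for the 5 m/s ascent
\end{verbatim}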
The equations used to obtain the value of $K$ are listed below:
\begin{equation*}
F(T_{sea}, T_{alt}) = \big(\frac{k_{alt}}{k_{sea}}\big)^{1-n}\times \Big[\Big(\frac{\beta_{alt}}{\beta_{sea}}\Big)\times \Big(\frac{\mu_{sea}}{\mu_{alt}}\Big)\times \Big(\frac{c_{p-alt}}{c_{p-sea}}\Big)\times \Big(\frac{\rho(T_{alt})}{\rho(T_{sea})}\Big)^{2}\Big]^{n}
\end{equation*}
Where:
\begin{itemize}
\item $n$ is an exponent value dependent on the turbulence of the fluid medium ($\frac{1}{4}$ for laminar flow and $\frac{1}{3}$ for turbulent flow)
\item $k$ is the thermal conductivity of the air
\item $\beta$ is the thermal expansion coefficient for air
\item $\mu$ is the dynamic viscosity of the air
\item $c_{p}$ is the specific heat capacity of the air at constant pressure
\item $\rho(T)$ is the density of the air as a function of only temperature difference (i.e. for constant pressure)
\item "sea" denotes the current variable is represented by its value found at sea level
\item "alt" denotes the current variable is represented by its value found at a specified altitude
\end{itemize}
The values for $F$ from this equation were then applied to its respective position in the following equation to determine the ratio between the convective heat transfer coefficient $h$ at sea level (assumed to have negligible differences for Esrange ground level) and the same coefficient at a specified altitude:
\begin{equation*}
K = \Big(\frac{\rho(P_{alt})}{\rho(P_{sea})}\Big)^{2n}\times \Big(\frac{\Delta T_{air}}{\Delta T_{sea}}\Big)^{n}\times F(T_{sea}, T_{alt})
\end{equation*}
Where:
\begin{itemize}
\item $\rho(T)$ is the density of the air as a function of only temperature difference (i.e. for constant pressure)
\item $\Delta T$ is the difference between the temperature of the ambient air and the surface in question
\end{itemize}
Table \ref{tab:heat-loss} combines the previously listed convection and radiation formulae, as integrated into the MATLAB scripts, to determine the convective and radiative heat loss for the worst case (highest power dissipation) during each stage of the experiment. Additional information on the thermodynamics of the atmosphere was obtained from the \textit{Engineering Toolbox} \cite{EngTool}.
\begin{longtable}{|m{2.5cm}|m{1.6cm}|m{1cm}|m{1.2cm}|m{1.2cm}|m{1cm}|m{1.5cm}|m{1.5cm}|}
\hline
\textbf{Altitude} & \textbf{Case} & \textbf{$T_{amb}$ (K)} & \textbf{$K$} & \textbf{$h_{alt}$ ($\frac{W}{m^2 \cdot K}$)} & \textbf{$T_{out}$ ($\degree C$)} & \textbf{$Q_{conv}$ (W)} & \textbf{$Q_{rad}$ (W)} \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Hangar\\ (Preparations)\end{tabular}} & Cold & 283 & 1 & 10.45 & 20.3 & 139.409 & 6.516 \\
& Expected & 288 & 1 & 10.45 & 25.2 & 139.081 & 6.844 \\
& Warm & 293 & 1 & 10.45 & 30.2 & 138.743 & 7.182 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Ground\\ (Stationary)\end{tabular}} & Cold & 263 & 1 & 18 & -0.8 & 215.705 & 4.690 \\
& Expected & 273 & 1 & 18 & 9.2 & 215.171 & 5.222 \\
& Warm & 283 & 1 & 18 & 19.2 & 214.600 & 5.790 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Ground\\ (Launched)\end{tabular}} & Cold & 263 & 1 & 28.945 & -4.2 & 217.528 & 2.884 \\
& Expected & 273 & 1 & 28.945 & 5.8 & 217.195 & 3.217 \\
& Warm & 283 & 1 & 28.945 & 15.8 & 216.837 & 3.573 \\ \hline
\multirow{3}{*}{5 km} & Cold & 228 & 0.7868 & 22.774 & -37.6 & 217.979 & 2.430 \\
& Expected & 263 & 0.8468 & 24.511 & -3.2 & 216.990 & 3.417 \\
& Warm & 273 & 0.8507 & 24.624 & 6.3 & 216.615 & 3.792 \\ \hline
\multirow{3}{*}{10 km} & Cold & 193 & 0.4882 & 14.131 & -68.1 & 217.916 & 2.480 \\
& Expected & 223 & 0.5286 & 15.300 & -39.1 & 216.940 & 3.453 \\
& Warm & 238 & 0.5421 & 15.691 & -24.4 & 216.336 & 4.055 \\ \hline
\multirow{3}{*}{15 km} & Cold & 193 & 0.3300 & 9.552 & -61.9 & 224.325 & 3.961 \\
& Expected & 233 & 0.3680 & 10.652 & -23.9 & 222.309 & 5.972 \\
& Warm & 253 & 0.3825 & 11.071 & -4.6 & 221.050 & 7.226 \\ \hline
\multirow{3}{*}{20 km} & Cold & 213 & 0.2401 & 6.950 & -35.6 & 220.777 & 7.430 \\
& Expected & 243 & 0.2563 & 7.419 & -7.4 & 218.297 & 9.899 \\
& Warm & 268 & 0.2687 & 7.778 & 16.4 & 215.906 & 12.282 \\ \hline
\multirow{3}{*}{25 km} & Cold & 223 & 0.1683 & 4.871 & -16.0 & 215.482 & 12.549 \\
& Expected & 253 & 0.1792 & 5.187 & 11.4 & 211.791 & 16.226 \\
& Warm & 273 & 0.1847 & 5.346 & 30.1 & 208.893 & 19.112 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Float\\ Phase\end{tabular}} & Cold & 223 & 0.1683 & 3.029 & -1.7 & 190.087 & 19.521 \\
& Expected & 253 & 0.1792 & 3.226 & 24.1 & 185.077 & 24.530 \\
& Warm & 273 & 0.1847 & 3.325 & 41.9 & 181.196 & 28.402 \\ \hline
\multirow{3}{*}{25 km} & Cold & 223 & 0.1683 & 5.173 & -20.3 & 199.514 & 10.633 \\
& Expected & 253 & 0.1792 & 5.508 & 7.4 & 196.295 & 13.838 \\
& Warm & 273 & 0.1847 & 5.677 & 26.3 & 193.765 & 16.356 \\ \hline
\multirow{3}{*}{20 km} & Cold & 213 & 0.2401 & 7.379 & -36.9 & 221.276 & 6.948 \\
& Expected & 243 & 0.2563 & 7.877 & -8.6 & 218.934 & 9.280 \\
& Warm & 268 & 0.2687 & 8.258 & 15.2 & 216.672 & 11.534 \\ \hline
\multirow{3}{*}{15 km} & Cold & 193 & 0.3300 & 10.142 & -63.6 & 216.808 & 3.561 \\
& Expected & 233 & 0.3680 & 11.310 & -25.4 & 214.974 & 5.389 \\
& Warm & 253 & 0.3825 & 11.756 & -6.0 & 213.829 & 6.530 \\ \hline
\multirow{3}{*}{10 km} & Cold & 193 & 0.4882 & 15.004 & -68.8 & 218.074 & 2.326 \\
& Expected & 223 & 0.5286 & 16.246 & -39.7 & 217.155 & 3.242 \\
& Warm & 238 & 0.5421 & 16.661 & -25.0 & 216.586 & 3.809 \\ \hline
\multirow{3}{*}{5 km} & Cold & 228 & 0.7868 & 24.182 & -38.1 & 218.127 & 2.284 \\
& Expected & 263 & 0.8468 & 26.026 & -3.6 & 217.195 & 3.214 \\
& Warm & 273 & 0.8507 & 26.145 & 6.4 & 216.841 & 3.567 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Ground\\ (Landed)\end{tabular}} & Cold & 263 & 1 & 30.734 & -4.6 & 217.700 & 2.713 \\
& Expected & 273 & 1 & 30.734 & 5.4 & 217.386 & 3.027 \\
& Warm & 283 & 1 & 30.734 & 15.4 & 217.049 & 3.363 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Ground\\ (Stationary)\end{tabular}} & Cold & 263 & 1 & 18 & -4.8 & 207.753 & 2.586 \\
& Expected & 273 & 1 & 18 & 5.2 & 207.453 & 2.885 \\
& Warm & 283 & 1 & 18 & 15.2 & 207.132 & 3.205 \\ \hline
\caption{Table of Predicted Heat Loss.}
\label{tab:heat-loss}
\end{longtable}
\raggedbottom
\subsubsection{Thermal Equations}
If there is no incident sunlight on the experiment, the heat balance is:
\begin{gather*}
Q_{IR} + Q_{Conduction} = Q_{Radiation} + Q_{Convection} \\
\updownarrow \\
\varepsilon_{earth} \cdot A_{IR} \cdot IR_{25km} + P \\ = \sigma \cdot \varepsilon_{Al} \cdot A_{Radiation} \cdot (T_{Out}^4 - T_{Ambient}^4 ) + h \cdot K \cdot A_{Convection} \cdot (T_{Out} - T_{Ambient})
\end{gather*}
If there is incident sunlight on the experiment, the same balance applies with the additional term $Q_{Sun+Albedo}$:
\begin{gather*}
Q_{IR} + Q_{Conduction} + Q_{Sun+Albedo} = Q_{Radiation} + Q_{Convection} \\
\updownarrow \\
\varepsilon_{earth} \cdot A_{IR} \cdot IR_{25km} + P + \alpha_{Al}\cdot S \cdot cos(15) \cdot (A_{Sun} \cdot cos(45) + Albedo \cdot A_{Albedo}) \\ = \sigma \cdot \varepsilon_{Al} \cdot A_{Radiation} \cdot (T_{Out}^4 - T_{Ambient}^4 ) + h \cdot K \cdot A_{Convection} \cdot (T_{Out} - T_{Ambient})
\end{gather*}
From these equations $T_{Out}$, the average temperature of the aluminum sheets facing the outside air, could be calculated.
After $T_{Out}$ was found, the inner temperature could be calculated from the heat transfer through the wall.
\begin{gather*}
P = \frac{A \cdot (T_{Inside} - T_{Outside})}{\frac{L_{Al}}{k_{Al}} + \frac{L_{PS}}{k_{PS}}} \\
\updownarrow \\
T_{Inside} = \frac{P}{A} \cdot \Big(\frac{L_{Al}}{k_{Al}} + \frac{L_{PS}}{k_{PS}}\Big) + T_{Outside}
\end{gather*}
$T_{Inside}$ was then assumed to be the uniform air temperature in the experiment.
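Since the balance is nonlinear in $T_{Out}$ (through the $T_{Out}^4$ radiation term), it has to be solved numerically. The sketch below does so with a simple bisection and then applies the wall conduction step; the numeric inputs are placeholders taken from Table \ref{tab:thermal-variables} or assumed for illustration, and the printed values are only a demonstration of the method, not the flight predictions produced by the MATLAB scripts.
\begin{verbatim}
# Sketch: solving the heat balance for T_out by bisection, then computing
# T_inside through the composite wall. Inputs marked as placeholders are
# illustrative only.
SIGMA = 5.67051e-8
EPS_AL, A_RAD, A_CONV = 0.09, 1.3, 1.3
K_AL, K_PS, L_AL, L_PS = 205.0, 0.03, 0.0005, 0.02

def t_out(q_in, h, k_scale, t_amb):
    """Find T_out such that q_in = Q_radiation + Q_convection."""
    def residual(t):
        q_rad = SIGMA * EPS_AL * A_RAD * (t**4 - t_amb**4)
        q_conv = h * k_scale * A_CONV * (t - t_amb)
        return q_in - (q_rad + q_conv)
    lo, hi = t_amb - 100.0, t_amb + 200.0   # residual > 0 at lo, < 0 at hi
    for _ in range(60):                     # plain bisection
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def t_inside(p_dissipated, t_outside, area=A_RAD):
    """Inner air temperature from steady conduction through the wall."""
    return t_outside + (p_dissipated / area) * (L_AL / K_AL + L_PS / K_PS)

T_OUT = t_out(q_in=54.0, h=5.187, k_scale=0.1792, t_amb=253.0)  # placeholders
print(round(T_OUT - 273.15, 1), "degC on the outer wall")
print(round(t_inside(13.5, T_OUT) - 273.15, 1), "degC inside")
\end{verbatim}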
\subsubsection{Trial run with BEXUS 25 air temperature data for altitudes}
The air temperature data varying over altitude from previous BEXUS flights could be found on the REXUS/BEXUS website. To run a simulated test flight of the MATLAB calculations, with the intention of seeing how the temperature profile would appear for a real flight, the model was evaluated and plotted with data from the BEXUS 25 flight.
Because the data set originally contained approximately 42000 data points, it had to be scaled down. Only every $25^{th}$ data point was used to reduce the processing time, and this resulted in little loss of detail. In Figure \ref{fig:thermal-testflight-AAC}, the TUBULAR test flight curve is the uniform temperature on the inside, with insulation as specified in Table \ref{tab:Wall-thickness-AAC-CAC}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/Thermal/AAC-test-flight.jpg}
\end{align*}
\caption{Simulated Test Flight of TUBULAR AAC Box with Data From BEXUS 25.}
\label{fig:thermal-testflight-AAC}
\end{figure}
Once this data was obtained, it was checked in ANSYS in order to determine where heaters had to be added to control the most critical parts of the model.
\subsubsection{Trial flight for the CAC} \label{sssec:CAC-trial-flight}
The CAC box did not require as much thermal design as the AAC box. The only part to be considered was the valve, which had a lower operating temperature limit of $-10\degree{C}$. This would not be a problem because the valve would open just prior to launch and have current running through it throughout the whole flight, heating itself up. If the thermal analysis was proven wrong by a test, showing that it was not sufficient to rely only on self-heating, a heater could be applied at a later date. The passive thermal design for the CAC box would consist of aluminum sheets and Styrofoam as specified in Table \ref{tab:Wall-thickness-AAC-CAC}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/Thermal/CAC-test-flight.jpg}
\end{align*}
\caption{Simulated Test Flight of TUBULAR CAC Box with Data From BEXUS 25.}
\label{fig:thermal-testflight-CAC}
\end{figure}
\subsubsection{MATLAB Conclusion}
By running the MATLAB script, the hottest and coldest cases for $0.02\ m$ of Styrofoam on the walls and $0.03\ m$ on the top and bottom could be found for ascent and descent sampling. The thermal conductivity of the Styrofoam is $k=0.03\ \frac{W}{m \cdot K}$. Table \ref{tab:temperature-sampling-ascent-descent} shows the hottest and coldest inside temperatures at the times when samples should be taken. The hottest and coldest cases are taken from Figure \ref{fig:thermal-testflight-AAC}.
\begin{table}[H]
\centering
\begin{tabular}{l|l|l|l|l|}
\cline{2-5}
\multirow{2}{*}{} & \multicolumn{2}{l|}{\textbf{Ascent}} & \multicolumn{2}{l|}{\textbf{Descent}} \\ \cline{2-5}
& Coldest & Hottest & Coldest & Hottest \\ \hline
\multicolumn{1}{|l|}{AAC} & -11.39 & 16.41 & -30.28 & -4.393 \\ \hline
\multicolumn{1}{|l|}{Outer air} & -38.22 & -15.9 & -44.41 & -38.18 \\ \hline
\end{tabular}
\caption{The Sampling Temperature Ranges ($\degree C$) for Ascent and Descent for the AAC Box.}
\label{tab:temperature-sampling-ascent-descent}
\end{table}
\subsection{Thermal Simulations in ANSYS}
The CAD model used can be seen in Figure \ref{fig:Ansys-CAD-model}. The side exterior walls were $0.02 m$ thick, the interior walls between the Brain and the bags were $0.03 m$ thick, and the top and bottom walls also consisted of $0.03 m$ thick Styrofoam. The outer parts of the pipes were set to stainless steel with a constant temperature (the same as the ambient outside). The tubes closest to the pump and the one leading from the pump to the manifold were set to include air, so that their temperature could vary during the simulation depending on the temperature outside of the experiment and on the pump heating up from its heater.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.5\linewidth]{appendix/img/Thermal/CAD-ansys.JPG}
\end{align*}
\caption{The CAD Model Used for ANSYS Simulations.}
\label{fig:Ansys-CAD-model}
\end{figure}
In ANSYS, FEA simulations were done using both Steady-State Thermal and Transient Thermal analysis. Because of the limitations of the ANSYS student license, a simplified model was used, which can be seen in Figure \ref{fig:Ansys-Brain-model}. It was focused on the corner region of the experiment housing the Brain, had three walls towards the sampling bags, and assumed the air inside to be uniformly heated. The uniform inside air temperature could be taken from the test flight data in Figure \ref{fig:thermal-testflight-AAC}. These simulations were done to find what temperatures the pump and manifolds would reach, as they were the most critical components in the experiment.

A transient thermal analysis was also performed by simulating a test flight with data from BEXUS 25, using the results from MATLAB. It was performed to verify whether the chosen wall thickness was sufficient and whether adding heaters was required. Through the addition, correct placement, adequate assigned power, and activation time of the heaters, it was possible to keep the pump and the manifold within their required operating temperature ranges.
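For intuition about how heater power, activation time, and insulation interact, the transient behaviour of a single component with a thermostat-controlled heater can be approximated with a lumped-capacitance model. The sketch below is illustrative only and is not the model used in ANSYS or the MATLAB script of Appendix \ref{sec:appJ}; all numerical values are placeholders.
\begin{lstlisting}[language=Matlab]
% Lumped-capacitance sketch of one heated component (all values hypothetical).
m  = 0.3;      % component mass [kg]
cp = 900;      % specific heat capacity [J/(kg*K)]
UA = 0.15;     % overall heat loss coefficient to the surroundings [W/K]
Q  = 5;        % heater power when on [W]
Ton = 15; Toff = 20;                 % thermostat thresholds [degC]

dt = 1; t = 0:dt:3*3600;             % 3 h flight segment, 1 s steps
Tamb = -40 + 10*sin(2*pi*t/7200);    % hypothetical ambient profile [degC]
T = zeros(size(t)); T(1) = 20; heaterOn = false;

for k = 1:numel(t)-1
    if T(k) < Ton,  heaterOn = true;  end
    if T(k) > Toff, heaterOn = false; end
    Qin    = Q*heaterOn;
    T(k+1) = T(k) + (Qin - UA*(T(k) - Tamb(k)))/(m*cp)*dt;
end
plot(t/60, T); xlabel('Time [min]'); ylabel('Temperature [degC]');
\end{lstlisting}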
\subsection{ANSYS Result}
\subsubsection{Including Air With Same Density as Sea Level in the Brain}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.5\linewidth]{appendix/img/Thermal/Air-inside-AAC-sampling-ascent.JPG}
\end{align*}
\caption{Cross Section of the Air in the Brain at the Time to Sample During Ascent.}
\label{fig:Ansys-Brain-model}
\end{figure}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.49\linewidth]{appendix/img/Thermal/Pump-sampling-ascent.JPG}}
\hfill
\subfloat{\includegraphics[width=0.48\linewidth]{appendix/img/Thermal/Pump-sampling-descent.JPG}}
\caption{The Pump at the Time to Sample During Ascent (left) and Descent (right).}
\label{fig:Pump-Valve-ascent-sample}
\end{figure}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.48\linewidth]{appendix/img/Thermal/Valve-manifold-sampling-ascent.JPG}}
\hfill
\subfloat{\includegraphics[width=0.50\linewidth]{appendix/img/Thermal/Valve-manifold-sampling-descent.JPG}}
\caption{The Manifold at the Time to Sample During Ascent (left) and Descent (right).}
\label{fig:Pump-Valve-ascent-sample-descent}
\end{figure}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.43\linewidth]{appendix/img/Thermal/Critical-lowest-pump.JPG}}
\hfill
\subfloat{\includegraphics[width=0.45\linewidth]{appendix/img/Thermal/Critical-lowest-valve.JPG}}
\caption{Pump and Manifold at the Coldest Part of Ascent.}
\label{fig:Pump-Valve-ascent-critical-lowest}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.5\linewidth]{appendix/img/Thermal/flushing-valve-ascent.JPG}
\end{align*}
\caption{The Flushing Valve Shortly Before Sampling Starts.}
\label{fig:flushing-valve}
\end{figure}
\subsubsection{No Air in the Brain}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.49\linewidth]{appendix/img/Thermal/Pump-sampling-ascent-no-air.JPG}}
\hfill
\subfloat{\includegraphics[width=0.49\linewidth]{appendix/img/Thermal/Pump-sampling-descent-no-air.JPG}}
\caption{The Pump at the Time to Sample During Ascent (left) and Descent (right).}
\label{fig:pump-no-air-in-brain}
\end{figure}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.44\linewidth]{appendix/img/Thermal/manifold-sampling-ascent-no-air.JPG}}
\hfill
\subfloat{\includegraphics[width=0.442\linewidth]{appendix/img/Thermal/manifold-sampling-descent-no-air.JPG}}
\caption{The Manifold at the Time to Sample During Ascent (left) and Descent (right).}
\label{fig:structure-no-air}
\end{figure}
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.44\linewidth]{appendix/img/Thermal/no-air-sampling-ascent.JPG}}
\hfill
\subfloat{\includegraphics[width=0.442\linewidth]{appendix/img/Thermal/no-air-sampling-descent.JPG}}
\caption{The Structure of the Brain at the Time to Sample During Ascent (left) and Descent (right).}
\label{fig:structure-no-air2}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.5\linewidth]{appendix/img/Thermal/flushing-valve-no-air-ascent.JPG}
\end{align*}
\caption{The Flushing Valve Shortly Before Sampling Starts (No Air in the Brain).}
\label{fig:flushing-valve-no-air}
\end{figure}
\subsection{Result}
The main objective of first performing the MATLAB calculations and then the ANSYS simulations was to find a suitable thickness for the Styrofoam wall between the Brain and the inside of the AAC box, and to iterate on that result. The next objective was to iterate the design by adding heaters, in order to find how many were required and approximately how long they needed to run. By running a transient thermal analysis of the test flight, it was possible to simulate heaters switching on and off and thereby determine how powerful they needed to be.

The results from the ANSYS simulations assumed a worst case scenario. It was therefore expected that the results were not fully accurate and that the components would be slightly warmer in reality. The worst case was with air inside the experiment at sea-level density. In reality, at the time of sampling (at 17 km), the air density would be less than 15\% of the air density at sea level \cite{EngToolair}. This meant that there would be less heat loss from the components to the air inside the Brain than predicted. Figures \ref{fig:Pump-Valve-ascent-sample} and \ref{fig:Pump-Valve-ascent-sample-descent} show that the temperature of the pump was above $5\degree{C}$ and the manifold was above $-10\degree{C}$. It was only during a portion of the Ascent Phase, just prior to the start of sampling, that the heater would need to be on in order for the pump to stay above $5\degree{C}$, and it would only need to be on during this Phase. By having a heater on the flushing valve and on the manifold, it was possible to bring all the valves to their operating temperatures. The flushing tube that led out to the open air outside would cool down the flushing valve, so a heater there to compensate for the heat loss would be required. The state of the flushing valve right before sampling starts can be seen in Figure \ref{fig:flushing-valve}. The manifold would still need a heater because it would be affected by the cold outer air; this heater would also help heat up the surrounding components.

The insulation for the AAC was as specified in Table \ref{tab:Wall-thickness-AAC-CAC}. For the three inner walls between the Brain and the bags there was a $0.03 m$ thick wall of Styrofoam. Two $5\,W$ heaters were used for the pump (one on the top and one on the bottom side), together with one $5\,W$ heater for the flushing valve and one for the manifold. The thermal simulations predicted that these components would stay within their operating limits with a satisfactory margin. The heater controller would be set such that if the pump fell below $15\degree{C}$, its heaters would turn on. As for the flushing valve and the manifold, their heaters would be set to turn on if either fell below $-5 \degree{C}$. A minimal sketch of this threshold logic is given below.
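As an illustration of the intended threshold behaviour (this is not the actual on-board software), the logic can be sketched in MATLAB as below; the variable names and the small hysteresis band are assumptions.
\begin{lstlisting}[language=Matlab]
% Illustrative heater threshold logic (not the flight software).
% T_pump, T_valve, T_manifold are assumed to hold the latest sensor
% readings in degC; a small hysteresis band avoids rapid switching.
pumpHeatersOn = false; valveManifoldHeatersOn = false;   % initial state
hyst = 2;                                                % hypothetical hysteresis [degC]

if T_pump < 15
    pumpHeatersOn = true;                 % both 5 W pump heaters on
elseif T_pump > 15 + hyst
    pumpHeatersOn = false;
end

if T_valve < -5 || T_manifold < -5
    valveManifoldHeatersOn = true;        % flushing valve + manifold heaters on
elseif T_valve > -5 + hyst && T_manifold > -5 + hyst
    valveManifoldHeatersOn = false;
end
\end{lstlisting}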
%The insulation elements will be attached to the rails and to each other through the use of glue as an adhesive. Due to its strength, glue can provide sufficiently strong bonds over small areas of contact between materials. This means only a handful of additional heat bridges to be factored and none of them particularly conductive. For remaining portions near the edges of each wall of insulation, the materials may be connected together with tape owing to its adhesive strength being similar to that of the glue. The tape would prevent convection (at the cost of slightly increased conduction at the points/edges of contact) at lower altitudes from occurring due to air flowing in between the otherwise exposed gaps between the Styrofoam, the aluminum sheets, and the rails. In the event that tape would not be a suitable candidate, the Styrofoam may be milled to match the groove profile of the supporting rails so that it may be fitted in between them directly - leaving the whole structure secure while only using glue for connecting the Styrofoam to the aluminum sheeting.
%The Styrofoam will be attached to the structure with thread. By attaching a thread to the 90 degree bracket reinforcements (seen in Figure \ref{fig:corner_bracket}) and then tie them together at the middle. Together with the aluminum sheet screwed into the structure it will make a slot to keep the Styrofoam in place so it can not move around. By having the thread tied in the middle it is possible tighten it up.
%Thread will be used to attache the styrofoam to the structure. It will loop around the 90 degree bracket reinforcements (seen in Figure \ref{fig:corner_bracket}) and its ends will be tied together in between them. Together with the aluminum sheet screwed into the structure, the thread will make a slot to keep the Styrofoam in place, keeping the styrofoam block held in place. By having the thread knot(s) in the middle it will be possible for the thread to be tightened to any degree necessary to prevent insulation instability.
\newpage
\section{Thermal Analysis MATLAB Code} \label{sec:appJ}
\subsection{Convection MATLAB Code}
\lstinputlisting[language=Matlab]{appendix/code/thermal/convecTUBULAR_2.m}
%\inputminted{matlab}{appendix/code/convecTUBULAR_1.m}
The resulting $h$-ratio is then applied to the value of $K$ in the main script of the following subsection.
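As an illustration only of how such a ratio could scale the effective heat transfer coefficient (the variable names are placeholders and are not taken from the actual script), consider:
\begin{lstlisting}[language=Matlab]
% Illustration: scaling a baseline heat transfer coefficient by the h-ratio.
h_flight = 4;          % hypothetical convection coefficient at altitude [W/(m^2*K)]
h_ground = 10;         % hypothetical convection coefficient at ground level [W/(m^2*K)]
h_ratio  = h_flight / h_ground;

K0 = 1.2;              % hypothetical baseline coefficient [W/(m^2*K)]
K  = K0 * h_ratio;     % value then used in the main thermal script
\end{lstlisting}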
\subsection{Main Thermal MATLAB Code}
\lstinputlisting[language=Matlab]{appendix/code/thermal/TUBULAR_v_1_5.m}
%\inputminted{matlab}{appendix/code/TUBULAR_v_1_4.m}
\newpage
\section{Budget Allocation and LaTeX Component Table Generator Google Script Code} \label{sec:appK}
\subsection{Budget Allocation Code}
\lstinputlisting[language=Java]{appendix/code/budget/BudgetAllocationCalculator.gs}
\newpage
\subsection{Latex Component Table Generator}
\lstinputlisting[language=Java]{appendix/code/budget/LatexComponentTableGenerator.gs}
\newpage
\section{Center of Gravity Computation}
The Center of Gravity of the experiment has been calculated considering all the components' masses listed in Section \ref{components}.
\subsection{Code}
\lstinputlisting[language=Matlab]{appendix/code/centerofgravity/TUBULAR_COG.m}
\begin{landscape}
\section{Budget Spreadsheets}
\label{sec:appO}
\subsection{Structure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\linewidth]{appendix/img/budget/1-budget-spreadsheet-structure.png}
\end{align*}
\caption{Budget Table for Structure Components.}
\label{fig:budget-table-for-structure-components}
\end{figure}
\end{landscape}
\begin{landscape}
\subsection{Electronics Box}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.9\linewidth]{appendix/img/budget/2-budget-spreadsheet-electronics-box.png}
\end{align*}
\caption{Budget Table for Electronics Box Components.}
\label{fig:budget-table-for-electronics-box-components}
\end{figure}
\end{landscape}
\subsection{Cables and Sensors}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/budget/3-budget-spreadsheet-cables-and-sensors.png}
\end{align*}
\caption{Budget Table for Cables and Sensors Components.}
\label{fig:budget-table-for-cables-and-sensors-components}
\end{figure}
\begin{landscape}
\subsection{CAC}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/budget/4-budget-spreadsheet-cac.png}
\end{align*}
\caption{Budget Table for CAC Components.}
\label{fig:budget-table-for-cac-components}
\end{figure}
\end{landscape}
\begin{landscape}
\subsection{AAC}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.8\linewidth]{appendix/img/budget/5-budget-spreadsheet-aac.png}
\end{align*}
\caption{Budget Table for AAC Components.}
\label{fig:budget-table-for-aac-components}
\end{figure}
\end{landscape}
\begin{landscape}
\subsection{Tools, Travel, and Other}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/budget/6-budget-spreadsheet-tools-travel-other.png}
\end{align*}
\caption{Budget Table for Tools, Travel, and Other Components.}
\label{fig:budget-table-for-tools-travel-other-components}
\end{figure}
\end{landscape}
\section{Full List of Requirements}\label{sec:appFullListOfRequirements}
\subsection{Functional Requirements}
\begin{enumerate}
\item[F.1] \st{The experiment \textit{shall} collect air samples.}\footnote{Unnecessary requirement that has been removed.\label{fn:unnecessary-requirement}}
\item[F.2] The experiment \textit{shall} collect air samples by the CAC.
\item[F.3] The experiment \textit{shall} collect air samples by the AAC.
\item[F.4] \st{The experiment's AAC System \textit{shall} be able to collect air samples during the Ascent Phase.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.5] \st{The experiment's AAC System \textit{shall} be able to collect air samples during the Descent Phase.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.6] \st{The altitude from which a sampling bag will start sampling \textit{shall} be programmable.} \footnote{Moved to design requirements.\label{designRequirement}}
\item[F.7] \st{The altitude from which a sampling bag will stop sampling \textit{shall} be programmable.}\textsuperscript{\ref{designRequirement}}
\item[F.8] \st{The experiment \textit{shall} pump air into the AAC Sampling Bags.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.9] The experiment \textit{should} measure the air intake flow to the AAC.
\item[F.10] The experiment \textit{shall} measure the air pressure.
\item[F.11] The experiment \textit{shall} measure the temperature.
\item[F.12] \st{The experiment \textit{shall} collect data on the humidity.} \textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.13] \st{The experiment \textit{shall} measure the temperature inside the AAC Valve Box.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.14] \st{The experiment \textit{should} measure the humidity inside the AAC Valve Box.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.15] \st{The experiment \textit{shall} collect data on the time.}\footnote{Unverifiable requirement that has been removed.\label{fn:unverifiable-requirement}}
\item[F.16] \st{The experiment \textit{shall} accept telecommand instructions to program AAC sampling altitudes for each sampling bag.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.17] \st{The experiment \textit{shall} accept telecommand instructions to open designated valves.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.18] \st{The experiment \textit{shall} accept telecommand instructions to close designated valves.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.19] \st{The experiment \textit{may} accept telecommand instructions to change the sampling rate of the ambient pressure sensor.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.20] \st{The experiment \textit{may} accept telecommand instructions to change the sampling rate of the ambient temperature sensor.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.21] \st{The experiment \textit{may} accept telecommand instructions to change the sampling rate of the AAC Valve Box temperature sensor.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.22] \st{The experiment \textit{may} accept telecommand instructions to turn on the air pump.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.23] \st{The experiment \textit{may} accept telecommand instructions to turn off the air pump.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.24] \st{The experiment \textit{may} accept telecommand instructions to turn on the Valve Heater.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.25] \st{The experiment \textit{may} accept telecommand instructions to turn off the Valve Heater.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.26] \st{The experiment \textit{may} accept telecommand instructions to turn on the Electronics Box Heater.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[F.27] \st{The experiment \textit{may} accept telecommand instructions to turn off the Electronics Box Heater.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\end{enumerate}
\subsection{Performance Requirements}
\begin{enumerate}
\item[P.1] \st{The telecommand data rate \textit{shall} be 10 Kb/s.}\textsuperscript{\ref{designRequirement}}
\item[P.2] \st{The default sampling rate of the ambient pressure sensor during Standby mode \textit{shall} be 0.1 Hz.}\footnote{Replaced by P.23\label{replaceSampleRate}}
\item[P.3] \st{The default sampling rate of the ambient pressure sensor during Normal operation-ascent mode \textit{shall} be 0.2 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.4] \st{The default sampling rate of the ambient pressure sensor during Normal operation-descent mode \textit{shall} be 10 Hz.}\textsuperscript{\ref{replaceSampleRate}}
%\item The default sampling rate of the ambient pressure sensor \textit{shall} be TBD.
\item[P.5] \st{The default sampling rate of the AAC Valve Box temperature sensor \textit{shall} be 1 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.6] \st{The programmable sampling rate of the ambient pressure sensor \textit{shall} not be lesser than 0.1 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.7] \st{The programmable sampling rate of the ambient pressure sensor \textit{shall} not be greater than 100 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.8] \st{The programmable sampling rate of the Electronics Box temperature sensor \textit{shall} not be lesser than 1 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.9] \st{The programmable sampling rate of the Electronics Box temperature sensor \textit{shall} not be greater than 7 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.10] \st{The programmable sampling rate of the AAC Valve Box temperature sensor \textit{shall} not be lesser than 1 Hz.}\textsuperscript{\ref{replaceSampleRate}}
\item[P.11] \st{The programmable sampling rate of the AAC Valve Box temperature sensor \textit{shall} not be greater than 7 Hz.}\textsuperscript{\ref{replaceSampleRate}}
%\item The programmable sampling rate of the pressure sensor \textit{shall} not be lesser than .
\item[P.12] The accuracy of the ambient pressure measurements \textit{shall} be -1.5/+1.5 hPa for 25$\degree{C}$.
\item[P.13] The accuracy of temperature measurements \textit{shall} be +3.5/-3$\degree{C}$ (max) for condition of -55$\degree{C}$ to 150$\degree{C}$.
\item[P.14] \st{The accuracy of the ambient humidity measurements \textit{shall} be $\pm 3\%$ .} \cite{Humiditysensor}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[P.15] \st{The accuracy of the AAC Valve Box temperature measurements \textit{shall} be +3.5/-2$\degree{C}$(max).}\footnote{Combined with P13\label{fn:combi-p13}}
\item[P.16] \st{The air intake rate of the air pump \textit{shall} be minimum 3 L/min.}\textsuperscript{\ref{designRequirement}}
\item[P.17] \st{The temperature of the Electronics Box \textit{shall} be between 0$\degree{C}$ and 25$\degree{C}$.} \textsuperscript{\ref{designRequirement}}
\item[P.18] \st{The temperature of the Electronics Box \textit{shall} not exceed 25$\degree{C}$.}\footnote{Combined with P17 and moved to design requirements.\label{fn:combi-p17}}
\item[P.19] \st{The temperature of the AAC Valve Box \textit{shall} be between 0$\degree{C}$ and 25$\degree{C}$.}\textsuperscript{\ref{designRequirement}}
\item[P.20] \st{The temperature of the AAC Valve Box \textit{shall} not exceed 25$\degree{C}$.}\footnote{Combined with P19 and moved to design requirements.\label{fn:combi-p19}}
\item[P.21] \st{The air sampling systems \textit{shall} filter out all water molecules before filling the sampling containers.} \textsuperscript{\ref{designRequirement}}
\item[P.22] \st{The CAC air sampling \textit{shall} filter out all water molecules before filling the tube.}\footnote{Combined with P21 and moved to design requirements.\label{fn:combi-p21}}
\item[P.23] The sensors sampling rate \textit{shall} be 1 Hz.
\item[P.24] The temperature of the Pump \textit{shall} be between 5$\degree{C}$ and 40$\degree{C}$.
\item[P.25] The minimum volume of air in the bags for analysis \textit{shall} be 0.18 L at ground level.
\item[P.26] The equivalent flow rate of the pump \textit{shall} be between 8 and 3 L/min from ground level up to 24 km altitude.
\item[P.27] The accuracy range of the sampling time, or the resolution, \textit{shall} be less than 52.94 s, or 423.53 m.
\item[P.28] The sampling rate of the pressure sensor \textit{shall} be 1 Hz.
\item[P.29] The sampling rate of the airflow sensor \textit{shall} be 1 Hz.
\item[P.30] The accuracy of the pressure measurements inside the tubing and sampling bags \textit{shall} be -0.005/+0.005 bar for 25$\degree{C}$.
\end{enumerate}
\subsection{Design Requirements}
\begin{enumerate}
\item[D.1] The experiment \textit{shall} operate in the temperature profile of the BEXUS flight\cite{BexusManual}.
\item[D.2] The experiment \textit{shall} operate in the vibration profile of the BEXUS flight\cite{BexusManual}.
\item[D.3] The experiment \textit{shall} not have sharp edges that can harm the launch vehicle, other experiments, and people.%\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.4] The experiment's communication system \textit{shall} be compatible with the gondola's E-link system.
\item[D.5] The experiment's power supply \textit{shall} be compatible with the gondola's provided power.
\item[D.6] \st{The experiment \textit{shall} not disturb other experiments on the gondola.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.7] The total DC current draw \textit{should} be below 1.8 A.
\item[D.8] The total power consumption \textit{should} be below 374 Wh.
\item[D.9] \st{The experiment \textit{shall} be able to operate in low pressure conditions (10-15 hPa) up to 30 km altitude.}\footnote{Repeated in D18\label{fn:repeat-d18}}
\item[D.10] \st{The components of the experiment \textit{shall} operate within their temperature ranges.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.11] \st{The OBC \textit{shall} be able to autonomously control the heaters.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.12] \st{The ground station GC \textit{shall} be able to display some of the received data.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.13] \st{The experiment \textit{shall} be able to survive and operate between -30$\degree{C}$ and 60$\degree{C}$.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.14] \st{The external components that are directly exposed to the outside environment \textit{shall} be able to operate at -70$\degree{C}$.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.15] \st{The watchdog \textit{should} be able to reset the system.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[D.16] The experiment \textit{shall} be able to autonomously turn itself off just before landing.
\item[D.17] The experiment box \textit{shall} be placed with at least one face exposed to the outside.
\item[D.18] The experiment \textit{shall} operate in the pressure profile of the BEXUS flight\cite{BexusManual}.
\item[D.19] The experiment \textit{shall} operate in the vertical and horizontal accelerations profile of the BEXUS flight\cite{BexusManual}.
\item[D.20] \st{The experiment \textit{shall} operate in the
horizontal accelerations profile of the BEXUS flight.} \cite{BexusManual} \footnote{Combined with D19 \label{fn:combi-d19}}
\item[D.21] The experiment \textit{shall} be attached to the gondola's rails.
\item[D.22] The telecommand data rate \textit{shall} not be over 10 kb/s.
\item[D.23] The air intake rate of the air pump \textit{shall} be equivalent to a minimum of 3 L/min at 24 km altitude.
\item[D.24] The temperature of the Brain \textit{shall} be between -10$\degree{C}$ and 25$\degree{C}$.
\item[D.25] \st{The temperature of the Brain level 2 \textit{shall} be between 0$\degree{C}$ and 25$\degree{C}$.} \footnote{Combined with D24\label{fn:combi-d24}}
\item[D.26] The air sampling systems \textit{shall} filter out all water molecules before filling the sampling bags.
\item[D.27] The total weight of the experiment \textit{shall} be less than 28 kg.
\item[D.28] The AAC box \textit{shall} be able to fit at least $6$ air sampling bags.
\item[D.29] The CAC box \textit{shall} take less than 3 minutes to be removed from the gondola without removing the whole experiment.
\item[D.30] The AAC \textit{shall} be re-usable for future balloon flights.
\item[D.31] The altitude from which a sampling bag will start sampling \textit{shall} be programmable.
\item[D.32] The altitude from which a sampling bag will stop sampling \textit{shall} be programmable.
\end{enumerate}
\pagebreak
\subsection{Operational Requirements}
\begin{enumerate}
\item[O.1] \st{The TUBULAR Team \textit{shall} send telecommands from the ground station to the experiment before and during the flight.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.2] \st{The TUBULAR Team \textit{shall} receive telemetry from the experiment during the flight.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.3] \st{The experiment \textit{shall} change modes autonomously.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.4] \st{The heating mechanism \textit{shall} work autonomously.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.5] \st{The experiment \textit{shall} store data autonomously.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.6] \st{The Air sampling control system \textit{shall} work autonomously.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.7] \st{The valves in air sampling control system \textit{should} be controllable from the ground station.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.8] \st{The experiment \textit{should} be able to handle a timeout or drop in the network connection.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.9] \st{The heaters \textit{should} be controllable from the ground station.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.10] \st{The watchdog\footnote{Explained in subsection 4.8. Software Design} \textit{should} be able to reset the system.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.11] \st{The system \textit{should} be able to be reset with a command from the ground station.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.12] \st{The experiment \textit{should} enter different modes with a telecommand from the ground station.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[O.13] The experiment \textit{should} function automatically.
\item[O.14] The experiment's air sampling mechanisms \textit{shall} have a manual override.
\end{enumerate}
\subsection{Constraints}
\begin{enumerate}
\item[C.1] Constraints specified in the BEXUS User Manual.
\item[C.2] \st{The person-hours allocated to project implementation is limited by university related factors such as exams, assignments, and lectures.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[C.3] \st{Budget limited to TBD.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\item[C.4] \st{The dimensions show a minimum print area of 50 x 50 cm and 65 cm height experiment box.}\textsuperscript{\ref{fn:unnecessary-requirement}}
\end{enumerate}
\newpage
\section{Test Results} \label{sec:apptestres}
\subsection{Test 28: Pump Operations}
\label{sec:test28result}
The pump was connected via crocodile connections to a power supply set to 24 V. The power supply was then switched on and the current was read off. This set-up can be seen in Figure \ref{fig:pump-testing}.
It was found that when the power supply was switched on, the current went up to 600 mA for less than one second. It then settled at 250 mA. By covering the air intake, simulating intake from a lower pressure, the current dropped to 200 mA. By covering the air output, simulating pushing air into a higher pressure, the current rose to 400 mA.
Therefore, the power draw for each of these conditions was 14.4 W at turn-on, 6 W in normal use, 4.8 W when drawing from low pressure, and 9.6 W when pushing into high pressure.
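Since the supply voltage was fixed at 24 V, these power figures follow directly from $P = U \cdot I$; a one-line MATLAB check is shown below.
\begin{lstlisting}[language=Matlab]
U = 24;                           % supply voltage [V]
I = [0.600 0.250 0.200 0.400];    % turn-on, normal, low-pressure intake, blocked outlet [A]
P = U * I                         % gives 14.4  6.0  4.8  9.6 [W]
\end{lstlisting}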
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/pump-testing.png}
\end{align*}
\caption{Photo Showing the Set-up for the Pump Testing in the Laboratory.} \label{fig:pump-testing}
\end{figure}
\subsection{Test 18: Pump Low Pressure}\label{subsection:pumplowpressuretest}
The pump was tested at low pressure using a small vacuum chamber that is capable of going down to \SI{1}{\hecto\pascal}. For this test the chamber was only taken down to \SI{30}{\hecto\pascal} as this is the expected pressure at 24 km, the highest altitude that will be sampled. The experiment set-up can be seen in Figure \ref{fig:pump-low-pressure-set-up}. The pump was connected to the power supply via two cables. It was also screwed into the base plate to prevent it from moving due to its own vibration during the test. A vacuum pump was connected to the chamber wall with a pressure sensor attached to monitor the pressure inside the chamber.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/low-pressure-set-up.png}
\end{align*}
\caption {Photo Showing the Set-up of the Vacuum Chamber, Power Supply and Vacuum Pump.}\label{fig:pump-low-pressure-set-up}
\end{figure}
The glass top and cage were then placed on top of the sampling bag and pump and the air slowly removed. Figure \ref{fig:pump-low-pressure-progress} shows the test as it was in progress.
As the air was removed from the chamber a new problem became immediately obvious. Air that was inside the bag before the test expanded as the pressure decreased, until the bag reached around $75\%$ of its total volume. The air had been pushed out of the sampling bag before the test, but this had not been done thoroughly enough. Therefore care must be taken to ensure that there is no air, or only a very small amount, inside the bag before it enters a low pressure environment. For subsequent tests the pump was used in reverse to suck any remaining air out of the bags.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=6cm]{5-experiment-verification-and-testing/img/low-pressure-in-progress.png}
\end{align*}
\caption {Photo Showing the Pump and Sampling Bag in the Vacuum Chamber During the Test.} \label{fig:pump-low-pressure-progress}
\end{figure}
Repeating the test, and using the pump to suck out excess air from the bags, the chamber was taken to around \SI{30}{\hecto\pascal}. Once the chamber was at this pressure the pump was switched on and a stopwatch was started. Once the bag stopped inflating the stopwatch was stopped. During this test there was also a drop in pressure to \SI{28}{\hecto\pascal}, and during a repeat there was a drop to \SI{25}{\hecto\pascal}. This also occurred in later tests. This is not seen as a significant problem, as during the flight this is exactly what will happen when sampling during ascent. In addition, the flow rate increases with increasing outside pressure, so this shows the worst-case flow rate. It was found that the pump was able to successfully switch on and fill the bag at this pressure with a flow rate of approximately 3 L/min.
The test was repeated again at \SI{88}{\hecto\pascal}, representing 17 km altitude, and at \SI{220}{\hecto\pascal}, representing 11 km altitude. Here the flow rates were found to be 3.4 L/min and 4.9 L/min respectively. The results can also be seen in Table \ref{tab:pump-low-pressure-result} and Figure \ref{fig:pump-performance}; the calculation behind the flow rates is illustrated after the table.
As this test could only provide an approximation, due to the lack of equipment such as flow meters that would have made it more precise, it was later repeated. In the repeat of this test the flow rates were found to be of the same magnitude. The full results can be seen in Section \ref{sec:ExpecterResults} in Table \ref{tab:normal-flow-rates}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{\small Altitude (km)} & \textbf{\small Pressure Start (hPa)} & \textbf{\small Pressure End (hPa)} & \textbf{\small Time (s)} & \textbf{\small Flow Rate (L/min)} \\ \hline
24 & 30 & 23 & 60 & 3 \\ \hline
17 & 87 & 80 & 53 & 3.4 \\ \hline
11 & 220 & 190 & 37 & 4.9 \\ \hline
\end{tabular}
\caption{Table Showing the Time Taken Until the 3 L Bag Stopped Expanding at Various Different Pressures.}
\label{tab:pump-low-pressure-result}
\end{table}
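The flow rates in Table \ref{tab:pump-low-pressure-result} follow from dividing the nominal bag volume by the measured fill time; a short MATLAB check of this is given below.
\begin{lstlisting}[language=Matlab]
V_bag    = 3;                       % nominal sampling bag volume [L]
t_fill   = [60 53 37];              % measured fill times at 24, 17 and 11 km [s]
flowRate = V_bag ./ t_fill * 60     % approx. 3.0  3.4  4.9 [L/min]
\end{lstlisting}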
\raggedbottom
\begin{figure}[H]
\begin{align*}
\includegraphics[width=11cm]{5-experiment-verification-and-testing/img/pump-performance.jpg}
\end{align*}
\caption {Obtained Pump Performance at Low Pressure.} \label{fig:pump-performance}
\end{figure}
\subsubsection{Test 30: Sampling Bag Bursting}
\label{sec:test30result}
A sampling bag was placed in a small vacuum chamber connected to the pump with the same set up as in Test 18, see Figures \ref{fig:pump-low-pressure-set-up} and \ref{fig:pump-low-pressure-progress}. The pump was run for 3 minutes with a full bag to see how the bag reacted. No changes were observed in the bag and no leaks appeared whilst it was in the testing chamber. Upon returning it to atmospheric levels it also appeared to be able to withstand the over pressure. The bag was then left, with the valve closed, on a table where it was handled a little during this time. Approximately 30 minutes after the test the bag made an audible popping noise and air leaked out. The damage that occurred to the bag during the burst can be seen in Figure \ref{fig:bag-burst-front} for the front of the bag and Figure \ref{fig:bag-burst-back} for the back of the bag.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.7\linewidth]{5-experiment-verification-and-testing/img/bag-burst-front_rescaled.png}
\end{align*}
\caption {Photo Showing the Extent of Damage on the Front of the Bag Due to Bursting.} \label{fig:bag-burst-front}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics{5-experiment-verification-and-testing/img/bag-burst-back_rescaled.png}
\end{align*}
\caption {Photo Showing the Extent of Damage on the Back of the Bag Due to Bursting.} \label{fig:bag-burst-back}
\end{figure}
This kind of bag failure could occur if bags are overfilled, particularly during ascent.
Next, the system was set up in the same way with a new bag. This time the pump was run continuously until failure occurred. This took around 6 minutes. The bag failed along the lower seam close to the valve and also at the valve connection. At the valve connection the bag ripped just above the valve. This time the burst was more energetic, with the bottom of the bag moving outwards. Upon inspection, the bottom of the bag was completely open and the part of the bag connected to the valve had partially ripped open. In addition, at the top of the bag small failures similar to those seen in Figure \ref{fig:bag-burst-front} were seen again. It is therefore thought that the bag was starting to fail at both the top and the bottom, but the bottom failed first.
The damage can be seen in Figures \ref{fig:seam-break} and \ref{fig:valve-rip}. It should be noted that the white bag valve was pulled off after the test and before photos were taken.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.45\linewidth]{5-experiment-verification-and-testing/img/bag-seam-break.jpg}
\end{align*}
\caption {Photo Showing the Damage Sustained to the Bottom of the Bag After Bursting Due to Continuous Pumping.} \label{fig:seam-break}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics{5-experiment-verification-and-testing/img/bag-valve-break_rescaled.png}
\end{align*}
\caption {Photo Showing Where the Bag Ripped Around the Valve.} \label{fig:valve-rip}
\end{figure}
This kind of bag failure could occur if there is a software error that results in the pump not switching off or a valve not closing, or if there is a malfunction in one of the valves which means it fails to close.
From the damage seen on the bags and from witnessing the bursts it can be concluded that, as long as the bags are well secured to the valves at the bottom and through the metal ring at the top, bag bursting during flight would not cause damage to any other components on board. Even during the more energetic burst that occurred from continuous pumping, the bag remained fixed to the valve connection and experienced no fragmentation. The consequences of a single bag burst would be limited to loss of data and an audible disturbance.
\subsection{Test 29: Pump Current under Low Pressure}
\label{sec:test29result}
This test was set up in the same way as Test 18 above, see Figures \ref{fig:pump-low-pressure-set-up} and \ref{fig:pump-low-pressure-progress}. The addition to this test was a multimeter to read the current that the pump was drawing. The pump was tested once with the outlet attached to a bag and once with the outlet sealed. This provides the current when the pump is pumping into ambient pressure and into a higher pressure.
In general it was found for both cases that decreasing the pressure, or increasing the altitude, led to a decrease in pump current draw. It was noted that there was an increase in current draw between sea-level conditions and 11 km altitude conditions. However, as the lowest sampling point is intended to be at 11 km, this should not be a problem for the experiment. The full results can be seen in Table \ref{tab:pumpcurrentpressure}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Altitude (km)} & \textbf{Pressure (hPa)} & \textbf{Into Bag Current (mA)} & \textbf{Into Seal Current (mA)} \\ \hline
20 & 57 & 140 & 138 \\ \hline
18 & 68 & 150 & 141 \\ \hline
16 & 100 & 161 & 146 \\ \hline
12 & 190 & 185 & 175 \\ \hline
9 & 300 & - & 200 \\ \hline
6 & 500 & - & 242 \\ \hline
0 & 1013 & - & 218 \\ \hline
\end{tabular}
\caption{Table Showing How the Current Draw of the Pump Changed With Outside Air Pressure for Two Different Conditions. The First Pumping Into a Sampling Bag and the Second Pumping Into a Sealed Tube.}
\label{tab:pumpcurrentpressure}
\end{table}
A graphical representation of these results is shown in Figures \ref{fig:pumpcurpresbag} and \ref{fig:pumpcurpres}. From the table and figures it can be seen that the current draw is higher while filling a bag than in the sealed case. As the experiment will sample between 11 km and 24 km, it can be concluded that the highest current draw will occur during the 11 km altitude sample and can be expected to be around 200 mA.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/pump-cureent-pressure.png}
\end{align*}
\caption {Graph Showing the Expected Current Values When the Pump is Pumping Air Into a Bag Based Upon the Results Obtained.} \label{fig:pumpcurpresbag}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/pump-current-pressure.png}
\end{align*}
\caption {Graph Showing the Expected Current Values When the Pump is Pumping Air Into a Sealed Outlet Based Upon the Results Obtained and the Data Shown In Figure \ref{fig:pumpflowcur}.} \label{fig:pumpcurpres}
\end{figure}
By looking at the data from both Test 18 and Test 29 a relationship can be seen between the outside air pressure, the flow rate of the pump and the current draw of the pump.
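As an illustration of how such an expected current can be estimated from the measured points in Table \ref{tab:pumpcurrentpressure} (bag-filling case), a short MATLAB interpolation is sketched below; the query pressure is an assumption for the 11 km sampling point.
\begin{lstlisting}[language=Matlab]
% Estimating the expected pump current at the 11 km sampling point by
% extrapolating the measured bag-filling currents.
p_meas  = [57 68 100 190];      % chamber pressure [hPa]
I_meas  = [140 150 161 185];    % measured current while filling a bag [mA]
p_query = 220;                  % approximate ambient pressure at 11 km [hPa]

I_exp = interp1(p_meas, I_meas, p_query, 'linear', 'extrap')
% gives roughly 190-200 mA, consistent with the expected worst case
\end{lstlisting}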
\subsection{Test 17: Sampling bags' holding times and samples' condensation verification}
\label{sec:test17result}
The main objective of this test was to flush eight 1 L sampling bags with nitrogen, in the same way it will be done for the flight. After the flushing was done, they were filled with a dry gas and left outside for 6, 14, 24, and 48 hours. Two sampling bags were then analyzed after each time duration to see if the concentration of gases inside had changed.
A dry gas is a gas of high concentration of $CO$ and low $H_2O$ and its exact concentration can be known by comparison to the calibrating gas in the Picarro analyzer. Therefore, the concentration when sampling the bags is known and it can be compared with the concentration after analysis. If the sampling bags can hold the samples for 48 hours then when analyzing, the concentration of gases should not change. If condensation occurs that will be seen as an increase in water vapour concentration.
Note that the size of the sampling bags was not the same as the size that will be used during the experiment. The reasons were availability of 1 L sampling bags at FMI and a first assumption that the size would not affect the results. The sampling bags were exactly the same model/material.
This test was realized at FMI in Sodankyl\"{a}. Eight Multi-Layer Foil bags of 1 L volume were connected to SMC valves as shown in Figure \ref{fig:bag-valve-quick-connector} and all together connected in series with stainless steel tubes as can be seen in Figure \ref{fig:bags-test-set-up}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/bag-valve-quick-connector.jpg}
\end{align*}
\caption {1 L Sampling Bag With SMC Valve Attached to It. The Valve is at One of the Ends of the System so a Quick Connector is Connecting it to the Tube That Goes to the Nitrogen Bottle/Vacuum Pump.} \label{fig:bag-valve-quick-connector}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/bags-test-set-up.jpg}
\end{align*}
\caption {Sampling Bags System Connected in Series.} \label{fig:bags-test-set-up}
\end{figure}
Figure \ref{fig:bags-test-general-overview} shows a general overview of the experiment set-up before the sampling bags were attached to the SMC valves. The picture shows the eight SMC valves hanging on a bar and the red and black cables connecting them to the switches. A nitrogen bottle can also be seen standing at the right side of the table, with a vacuum pump under the table. Figure \ref{fig:nitrogen-vacuum-valve} shows the pressure sensor on the table, a flow meter, a needle valve that adjusts the flow rate, and a valve. This valve was used to control the filling and flushing of the sampling bags with nitrogen. The position shown in Figure \ref{fig:nitrogen-vacuum-valve} is for vacuuming: the pump is sucking the air from the sampling bags and the nitrogen tube is closed. The valve position for filling is the opposite, opening the nitrogen tube and closing the vacuum. There is also an intermediate position that closes both the nitrogen and the vacuum.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/bags-test-general-overview.jpg}
\end{align*}
\caption {General Overview of the test Set up Before the Sampling Bags Were Attached to the Valves} \label{fig:bags-test-general-overview}
\end{figure}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/nitrogen-vacuum-valve.jpg}
\end{align*}
\caption {Valve that Controls Filling/Vacuum in of the Sampling Bags. Pressure Sensor, Flow-metre and Needle Valve.} \label{fig:nitrogen-vacuum-valve}
\end{figure}
The procedure during the test was as follows:
\begin{itemize}
\item Set up all the connections between pump, nitrogen bottle, valves system in series.
\item Attach the sampling bags to the SMC valves.
\item Start flushing the tubes with nitrogen. For this all the sampling bags' valves are closed.
\item Adjust the flow rate of nitrogen at 500 ml/min.
\item Open sampling bags' manual valves (not to be confused with the SMC valves which are still all closed).
\item Turn on valve 1. Fill sampling bag number 1 for 2 minutes. Turn off valve 1. Repeat it for the seven sampling bags left.
\item Change the valve seen in Figure \ref{fig:nitrogen-vacuum-valve} to vacuum position and empty the bags.
\item Flush the tubes after all the sampling bags have been emptied. This is to remove as much air as possible that could be left inside the sampling bags.
\item Repeat the flushing two more times.
\item Change the nitrogen bottle for the dry gas bottle.
\item Flush the tubes with nitrogen.
\item Fill the eight sampling bags one by one.
\item Take the sampling bags outside as shown in Figure \ref{fig:bags-outside} to simulate the conditions at which they will be exposed after landing.
\end{itemize}
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.75\linewidth]{5-experiment-verification-and-testing/img/bags-outside_rescaled.png}
\end{align*}
\caption {Sampling Bags Left Outside Waiting to be Analyzed.} \label{fig:bags-outside}
\end{figure}
After each of the mentioned times, 6, 14, 24 and 48 hours, two sampling bags were taken inside the laboratory to be analyzed. The procedure to analyze was:
\begin{itemize}
\item Have the dry gas flowing through the Picarro analyzer for at least one hour before the analysis. This is to avoid having moisture inside the tubes and have stable measurements of concentrations.
\item Flush the tubes in between the two sampling bags with dry gas. For this, the dry gas has to be disconnected from the analyzer, during which moisture could get into the Picarro. To avoid this, calibrating gas is kept flowing through the analyzer while the tubes are being flushed.
\item Connect the system formed by two sampling bags with one end to the dry gas bottle and the other to the Picarro inlet.
\item Wait for one hour until the readings of dry gas concentrations are stable.
\item Open the valve of the first sampling bag.
\item Right after the first sampling bag is empty, close its valve and open the valve for the next one.
\item Keep the dry gas flowing for one more hour after analysis.
\end{itemize}
After analyzing the sampling bags the obtained results are presented in Figure \ref{fig:test17-results}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/test17-results.jpg}
\end{align*}
\caption {Obtained Variation in Concentration for (a) $CO_2$ in ppm, (b) $CO$ in ppb, (c) $CH_4$ in ppb and (d) $H_2O$ in ppb.} \label{fig:test17-results}
\end{figure}
It should be mentioned that the results were not at all what was expected. If the sampling bags held the gases for 48 hours, the analyzed concentration should have been the same as the dry gas used to fill them or the variation should have been smaller.
A possible explanation for these results could be that the emptying of the sampling bags was not done rigorously enough, and that some air/nitrogen was left inside which mixed with the dry gas and changed the concentrations. This effect is further increased by the smaller size of the sampling bags used (1 L instead of 3 L). It would also explain why the results do not follow any pattern; a simple mixing estimate illustrating the dilution effect is given below.
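As a rough illustration (all numbers are hypothetical, not measured values), a two-component mixing estimate shows how a small residual volume shifts the measured concentration more strongly in a smaller bag.
\begin{lstlisting}[language=Matlab]
% Hypothetical mixing estimate: residual gas left after flushing dilutes
% the dry-gas sample. All values are placeholders.
V_res = 0.05;        % residual gas left in the bag after flushing [L]
C_res = 410;         % CO2 concentration of the residual gas [ppm]
C_dry = 380;         % CO2 concentration of the dry gas [ppm]
V_bag = [1 3];       % 1 L bags used in this test vs. 3 L flight bags

C_mix = (V_res*C_res + (V_bag - V_res)*C_dry) ./ V_bag
% The shift away from C_dry is roughly three times larger for the 1 L bag.
\end{lstlisting}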
The general outcome of this test was that the team realized that the flushing of the sampling bags is a very delicate process. This test was also useful to decide that the flushing of the sampling bags should be done with dry gas instead of nitrogen in order to minimize the effects of the nitrogen diluting in the samples.
This test therefore had to be repeated, and was, using the set-up described in Section 4, with some differences. This time 3 L bags were flushed with dry gas and left outside for 15, 24, and 48 hours. After the flushing was done, two bags for each time were filled with 0.5 L and 1 L of dry gas and left outside. They were then analyzed to check whether the sample concentrations were the same as, or close enough to, the reference values of the filled dry gas.
The obtained results are shown in Figure \ref{fig:test17-resultsSEP}. The blue points represent the sampling bags with the 0.5 L sample, while the red points show the sampling bags with the 1 L sample. Sampling bag No. 1 and sampling bag No. 4 were analyzed after 15 hours. The pair of sampling bags No. 2 and No. 5 were analyzed after 24 hours, and the last pair, No. 4 and No. 6, after 48 hours.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{5-experiment-verification-and-testing/img/test17resultsSEP.jpg}
\end{align*}
\caption {Obtained Variation in Concentration for (a) $CO_2$ in ppm, (b) $CO$ in ppb, (c) $CH_4$ in ppb and (d) $H_2O$ in $\%$.} \label{fig:test17-resultsSEP}
\end{figure}
The results were very good in general, with the $CO_2$ concentration differences not higher than 2 ppm. The bags with the 0.5 L sample gave bigger $CO_2$ concentration differences and higher humidity for all the tested times. For the bags that were analyzed after 48 hours, the humidity was two times higher for the 0.5 L sample compared to the 1 L sample. If water permeates through the walls of the bags at the same rate for both bags, it is expected that sampling bags with larger amounts of sampled air have lower humidity concentrations. Therefore, for better results, the volume of air left in the sampling bags at sea level pressure should be as large as possible.
% Flushing with nitrogen may alter the results.
\subsubsection{Test 4: Low Pressure}
\label{sec:test4results}
\textbf{Styrofoam}
The same vacuum chamber was used as in Tests 18 and 29. The Styrofoam was measured on each side before it was placed in the chamber. It was then taken down to 5 hPa and held there for 75 minutes. It was then removed and the sides were measured again. It was found that there was no significant change in dimensions. The results can be seen in Table \ref{tab:styrofoam-test-result}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|}
\hline
Side & Before (cm) & After (cm) \\ \hline
A & 9.610 & 9.580 \\ \hline
B & 9.555 & 9.550 \\ \hline
C & 9.560 & 9.565 \\ \hline
D & 9.615 & 9.610 \\ \hline
E & 9.615 & 9.615 \\ \hline
F & 9.555 & 9.550 \\ \hline
G & 9.605 & 9.605 \\ \hline
H & 5.020 & 5.020 \\ \hline
I & 5.025 & 5.025 \\ \hline
J & 5.015 & 5.015 \\ \hline
K & 5.020 & 5.025 \\ \hline
\end{tabular}
\caption{Styrofoam Size Before and After Vacuum.}
\label{tab:styrofoam-test-result}
\end{table}
%\input{5-experiment-verification-and-testing/tables/test-results/styrofoamvacuum.tex}
As some sides measured slightly bigger afterwards and some slightly smaller, it is thought that this is due to the measuring technique and not to changes in the Styrofoam. The result from side A could be due to the Styrofoam being deformed by the calipers or a misread original length.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.4\linewidth]{appendix/img/test-results/styrofoam-round-two.jpg}
\end{align*}
\caption {Picture Showing how the Styrofoam was Labeled for the Test.} \label{fig:styrofoam-test-labeling}
\end{figure}
\textbf{Airflow}
After the first airflow-in-vacuum test failed due to data-logging errors, the airflow test was repeated. In this repeated test the whole Brain was placed into the vacuum chamber and one bag attached. It was not possible to attach more than one bag due to space restrictions. A view inside the vacuum chamber can be seen in Figure \ref{fig:insidevacuum}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=0.67\linewidth]{appendix/img/test-results/vacuum-test/inside-chamber-square.jpg}
\end{align*}
\caption {Picture Showing inside of the Vacuum Chamber During the Test.} \label{fig:insidevacuum}
\end{figure}
To confirm the airflow rates, the vacuum chamber was then taken down from 400 hPa to 5 hPa in steps and the airflow rate logged. The results can be seen in Figure \ref{fig:airflowvacuum}.
\begin{figure}[H]
\begin{align*}
\includegraphics[width=1\linewidth]{appendix/img/test-results/vacuum-test/airflowvacuumtestgraph.jpg}
\end{align*}
\caption {Graph Showing how the Airflow Rate is Changing with Ambient Pressure.} \label{fig:airflowvacuum}
\end{figure}
Sampling will take place between 200 hPa and 22 hPa; this region is shown in greater detail in Figure \ref{fig:airflowvacuumregion}. From this graph it can be seen that the airflow rate varies from 1.2 LPM to 0.1 LPM.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{appendix/img/test-results/vacuum-test/airflow-vacuum-graph-sampling-area.jpg}
\caption{Graph Showing how the Airflow Rate is Changing with Ambient Pressure in the Sampling Region.} \label{fig:airflowvacuumregion}
\end{figure}
It was noted that the airflow rate seemed very low compared to the rate at which the bag was inflating. For example, at the first sampling point the flow rate was around 0.4 LPM and the bag was filled for 44 seconds. This would imply that the bag was only 10\% full, holding about 0.3 L; however, from visual inspection it was clear the bag was at least 75\% full. When the bag was brought back to sea level pressure, the amount of air remaining in it was inspected and found to be approximately 0.3 L, as seen in Figure \ref{fig:remainingair}. This led to the conclusion that the displayed airflow rate is the equivalent airflow at sea level.
\begin{figure}[H]
\centering
\includegraphics{appendix/img/test-results/vacuum-test/remaining-air-table_rescaled.png}
\caption{Picture Showing the Air Remaining in the Bag After Returning to Sea Level Pressure.} \label{fig:remainingair}
\end{figure}
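The sea-level-equivalent interpretation can be checked with a short calculation. The ambient pressure of 200 hPa assumed for the first sampling point is an illustrative assumption, not a measured value.
\begin{verbatim}
# Minimal sketch: why a "low" displayed flow rate still fills the bag at altitude,
# assuming the sensor reports sea-level-equivalent (standard) flow.
P_SEA = 1013.0        # hPa
P_AMBIENT = 200.0     # hPa, assumed pressure at the first sampling point
flow_slpm = 0.4       # L/min, sea-level-equivalent flow read from the sensor
t_fill = 44.0 / 60.0  # filling time in minutes

vol_sea_level = flow_slpm * t_fill                   # volume once back at sea level
vol_at_altitude = vol_sea_level * P_SEA / P_AMBIENT  # volume occupied at ambient pressure
print(round(vol_sea_level, 2), "L at sea level")     # ~0.29 L, matching the inspection
print(round(vol_at_altitude, 2), "L at altitude")    # ~1.5 L of the 3 L bag
\end{verbatim}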
\textbf{Software}
With the same set-up as for the airflow low pressure testing, the software was tested to verify that it was operating as intended and that the conditions for stopping sampling were working.
First the software was run as it will be during flight. As the pressure inside the chamber was dropped, the LEDs on board the PCB showed the system going through the flight actions. When the first sampling altitude was reached, the lights came on for the flushing valve and pump, indicating that the system was flushing. After one minute these lights went out and the first valve opened. For the first sampling point it was possible to see the bag inflate, providing extra visual confirmation that the system was operating as intended.
The next check was to see whether the conditions for stopping sampling were working. Testing with the initial sampling schedule meant that the time stopper always occurred first and worked well. The pressure threshold stopper also worked well, with the system stopping sampling if the defined pressure range was left. The third stopper is also based on pressure and compares the pressure inside the bag to the ambient pressure to ensure the bags are not overpressured. Interestingly, it was found that this pressure threshold was not reached even after filling the bags continuously for three minutes, three times the time limit.
This shows that the maximum allowed pressure for the bags is reached not when the bag holds 3 L, but only once the air in the bag starts to be compressed. This significantly reduces the risk of bags bursting.
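A simplified sketch of the three stoppers described above is given below. This is not the flight code; the differential pressure threshold is a placeholder value, while the one-minute time limit and the 22--200 hPa sampling window are taken from the text.
\begin{verbatim}
# Minimal sketch (not the flight software) of the three sampling stoppers.
def keep_sampling(elapsed_s, p_ambient_hpa, p_bag_hpa,
                  t_max_s=60.0, p_window=(22.0, 200.0), dp_max_hpa=140.0):
    """Return True while it is safe to keep filling the current bag.
    dp_max_hpa is a placeholder, not the real setting."""
    if elapsed_s >= t_max_s:                               # time stopper
        return False
    if not (p_window[0] <= p_ambient_hpa <= p_window[1]):  # pressure-range stopper
        return False
    if p_bag_hpa - p_ambient_hpa >= dp_max_hpa:            # over-pressure stopper
        return False
    return True

print(keep_sampling(elapsed_s=30, p_ambient_hpa=150, p_bag_hpa=160))  # True
\end{verbatim}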
\textbf{Temperatures}
As it is not possible to complete a combined thermal vacuum test, temperatures were also monitored inside the vacuum chamber in addition to the thermal testing. Particular attention was paid to the CAC valve, which will be on for the entire flight. From Figure \ref{fig:CACvalvetemp} it can be seen that after 2 hours the temperature levels off at around 68$\degree{C}$. This is still well within the operating temperature range of the valve, and the chamber was held at 5 hPa, which is a lower pressure than the minimum expected. However, this is close to the melting point of the Styrofoam at 75$\degree{C}$, so care must be taken to ensure there is no contact between the valve and the Styrofoam. In the actual flight the valve is expected to be cooler than this due to the lower ambient temperature.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{appendix/img/test-results/vacuum-test/valve-temperature.jpg}
\caption{Graph Showing how the Temperature of the CAC Flushing Valve Changes over Time at 5 hPa.} \label{fig:CACvalvetemp}
\end{figure}
The temperatures of the pressure sensor, PCB, pump, and manifold were also monitored during continuous use. After one hour and 48 minutes, during the same test as the valve temperature measurement, the pressure sensor was found to reach 39$\degree{C}$. After one hour and 24 minutes of the flow rate monitoring test, where the sensors, pump, and one manifold valve were on continuously, the PCB temperature sensor was at 43$\degree{C}$, the pump at 42$\degree{C}$, and the manifold at 33$\degree{C}$. As the pump will never be on for more than a few minutes at a time, there is no concern that this temperature will ever be reached during flight.
\subsubsection{Test 24: Software and Electronics Integration}
The different types of sensors were integrated one at a time with the Arduino. The airflow sensor was the first to be integrated. The only problem with this sensor was the lack of calibration to give correct data; whilst at FMI, it was calibrated against another airflow sensor. The next sensor to be integrated was the temperature sensor. After several failed attempts to establish a connection to the sensor, a library based on the information from the sensor's datasheet was written, and with this library communication was successfully established. During testing, some problems were discovered with the temperature sensors, as they stopped returning data when exposed to colder temperatures; this was fixed with a software change. For the pressure sensor, a self-made library had been expected to be necessary; even so, several changes to the library were needed to make the sensor responsive. \par
It was discovered that the pressure sensor on board the PCB would not function while the other pressure sensors were connected to the SPI bus. Parasitic capacitance was suggested as the culprit when inspecting the SPI bus with an oscilloscope. During a telephone conference with our mentors, another more plausible theory was put forward: since SPI is designed to work over short distances only, the long cables connecting the outside pressure sensors caused reflections on the bus. The solution was to disregard the pressure sensor on the PCB, since it was not a critical sensor.\par
When all the sensors had been integrated, they were tested together. The result was that all the sensors, except the PCB pressure sensor, worked without interfering with each other.
\subsubsection{Test 5: Thermal Test}\label{thermaltestresults}
The thermal chambers used were the ones at FMI and at Esrange. The FMI chamber could go down to between $-40\degree{C}$ and $-90\degree{C}$, and at Esrange the experiment was tested down to $-60\degree{C}$. A few long run tests were done, slowly going down to a temperature and stabilizing before lowering the temperature again, to safely verify that everything worked when it was below $-20\degree{C}$ outside and to see whether it would handle a 4 h $-40\degree{C}$ test. The long run test at $-40\degree{C}$ still needs to be done, because the temperature sensors gave an error and stopped showing data, meaning the test was interrupted. With no temperature data the heaters could not be operated properly, and the risk of damage was high enough to stop the testing to make sure no components were harmed.
To start with, only the AAC was put into the freezer. In the following Figures the vertical dotted line indicates where the sensor started to throw the error.
The first test slowly went down to $-20\degree{C}$ and stabilized, then continued down to $-30\degree{C}$, after which the communication threw an error after a while, as seen in Figure \ref{fig:test-2-thermal}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/Thermal-Test-2.jpg}
\caption{First Thermal Chamber Test.}
\label{fig:test-2-thermal}
\end{figure}
The second test went straight down to $-30\degree{C}$ and stabilized there to attempt a long run test. After a while the communication error occurred again. The test can be seen in Figure \ref{fig:test-3-thermal}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/Thermal-Test-3.jpg}
\caption{Second Thermal Chamber Test.}
\label{fig:test-3-thermal}
\end{figure}
The third test went down to $-40\degree{C}$ and was left to stabilize. After approximately half an hour the communication error happened again as seen in Figure \ref{fig:test-4-thermal}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/Thermal-Test-4.jpg}
\caption{Third Thermal Chamber Test.}
\label{fig:test-4-thermal}
\end{figure}
The fourth test was a repeat of the third at $-40\degree{C}$; the system was left to stabilize and survived approximately 50 min before the error occurred, as seen in Figure \ref{fig:test-5-thermal}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/Thermal-Test-5.jpg}
\caption{Fourth Thermal Chamber Test.}
\label{fig:test-5-thermal}
\end{figure}
A separate test was completed afterwards with only the CAC inside the freezer and the AAC outside with cables going to the CAC. The freezer was at $-26\degree{C}$ and the communication error occurred as seen in Figure \ref{fig:CAC-thermal-chamber}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/CAC-only-freezer-test.jpg}
\caption{CAC Thermal Chamber Test.}
\label{fig:CAC-thermal-chamber}
\end{figure}
The conclusion of the test is that the heating regulation for the pump and the manifold works as it should, keeping the critical components operating. It was also concluded that the PCB is not the cause of the temperature sensors returning errors.
%The reason for the error is under investigation because no clear correlation between temperature or time can be concluded for why the error happens.
The communication error was solved and an 8 h freezer test was done at IRF to see whether the system could work at $-20\degree{C}$. At the same time, three different resistors were fitted to the outside pressure sensors. The resistors delivered 0.1 W, 0.5 W and 1 W to the pressure sensors in order to determine the temperature difference from ambient. In Figure \ref{fig:freezer_test_IRF} the temperatures are shown over time.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{appendix/img/test-results/Freez-20.JPG}
\caption{Freezer Test for 8h at IRF.}
\label{fig:freezer_test_IRF}
\end{figure}
The average temperature difference between the pressure sensors and the ambient temperature is shown in Table \ref{tab:resistor-temp-dif}.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Watt from resistor (W) & 0.1 & 0.5 & 1 \\ \hline
Temperature difference ($\degree{C}$) & 14.0488 & 31.336 & 56.9809 \\ \hline
\end{tabular}
\caption{Temperature Difference on Pressure Sensor from Ambient.}
\label{tab:resistor-temp-dif}
\end{table}
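As a rough cross-check (not part of the original analysis), the effective temperature rise per watt implied by Table \ref{tab:resistor-temp-dif} can be computed directly from the tabulated values.
\begin{verbatim}
# Minimal sketch: temperature rise per watt for the pressure sensor mounting,
# using the values from the table above.
powers_w = [0.1, 0.5, 1.0]
delta_t_c = [14.0488, 31.336, 56.9809]
for p, dt in zip(powers_w, delta_t_c):
    print(p, "W ->", round(dt / p, 1), "C/W")
# Roughly 140, 63 and 57 C/W, so the rise is not simply proportional to power.
\end{verbatim}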
A final thermal test was done at Esrange. The test ran for 3 h 30 min at $-50\degree{C}$, during which all functions were tested, and then went down to $-60\degree{C}$ to stress the experiment a little further. In the end the test lasted 4 h 40 min.
As seen in Figure \ref{fig:thermal-test-esrange-O-4-3}, the heating cycles of the pump (pink) and the valve (black) were fully working. After a little more than an hour the flushing was tested, and soon after that the pump; it can be seen that the temperature of the pump dropped below zero. This confirmed that the pump can drop below zero while operating and keep going. It was also found that at $-55\degree{C}$ and colder the valve heater could not fully keep up, and the valve temperature started to drop while the valves were not operating. However, the drop was slow enough that the temperature will not fall below the operating limit during flight, and when the valves are operating they generate some heat as well.
\begin{figure}[H]
    \centering
    \includegraphics[width=\linewidth]{5-experiment-verification-and-testing/img/Thermal-test-esrange.jpg}
    \caption{Thermal Chamber Test at Esrange.}
    \label{fig:thermal-test-esrange-O-4-3}
\end{figure}
In Figure \ref{fig:thermal-test-esrange-O-4-3} a number of spikes can be seen; these occur when the software returns an error value. This is not an issue, because the error rate is low and the readings recover immediately afterwards, so the heating system is not affected.
\subsubsection{Test 20: Switching Circuit Testing and Verification}
This has begun on breadboards with LEDs replacing the valves until the valves arrive.
So far DC-DCs have been set up and tested. Sensors have been connected electronically and the next step is to get them to communicate with the Arduino.
MOSFETs connecting to the pump and the heaters have been tested for switching on and off, with good results.
\subsubsection{Test 32: Software Failure}
So far, testing has revealed that losing the SD card does not interrupt ground station data; it just means no data will be written to the SD card. However, if the SD card is reconnected after removal, the system currently does not reconnect to it, and it is as if the SD card had been permanently lost.
\pagebreak
\subsubsection{Test 33: Electrical Component Testing}
\label{sec:test33result}
The components were tested separately and later on tested together inside the full system. The separate component tests are listed in Table \ref{tab:test33-result-electrical-component}, and their results can be seen in Section \ref{sec:Test-Results-Electrical-Component-Testing}. There was also a full assembly of all components on the bench, connected on a breadboard; this test was carried out with nominal results. Furthermore, there were some PCB tests, which are listed in Table \ref{tab:test33-result-PCB-Tests}, with results in Section \ref{sec:Test-Results-PCB-Testing}.
\begin{longtable}{|m{0.12\textwidth}|m{0.1\textwidth}|m{0.2\textwidth}|m{0.55\textwidth}|}
\hline
Complete & Test \# & Test & Description \\ \hline
YES & 1 & Test Voltage divider (Airflow + Pressure sensor) & Test the airflow sensor with the voltage divider and compare the voltage output with and without the divider to verify that the signal does not get distorted \\ \hline
YES & 2 & Test MOSFET & Test the MOSFETs by applying 3.3 V to the gate from something other than the Arduino \\ \hline
YES & 3 & Test LED Configuration & Test the resistor and zener diode configuration \\ \hline
YES & 4 & Test Valves with MOSFET & Test opening and closing the valves through the MOSFETs without the Arduino's 3.3 V \\ \hline
YES & 5 & Test Pump + MOSFET & Same as the valves but for the pump \\ \hline
YES & 6 & Test DCDCs in parallel with LED & Test the parallel configuration of the DC-DC converters with the indication lights and make sure it works as expected \\ \hline
YES & 7 & Test interface connections & Test the D-sub and power cables that will be inside the Brain by checking for connectivity and wire resistances \\ \hline
YES & 8 & Test Potentiometer trimming for DCDC & Test the DC-DC trimming by using the potentiometers \\ \hline
YES & 9 & Test grounding for analog components & Test the grounding configuration for the analog components and compare it with a non-isolated ground while turning on other power-hungry components \\ \hline
YES & 10 & Heater testing & Supply 28.8 V to the heaters in parallel and see whether they work outside the limits described in the datasheet \\ \hline
\caption{Electrical Component Testing Detailed Descriptions}
\label{tab:test33-result-electrical-component}
\end{longtable}
\begin{longtable}{|m{0.12\textwidth}|m{0.1\textwidth}|m{0.2\textwidth}|m{0.55\textwidth}|}
\hline
Complete & Test \# & Test & Description \\ \hline
YES & 12 & Check connections & Check that all soldering points are connected to the right parts of the board \\ \hline
YES & 13 & Assembly & Solder everything in its place, turn the board on, and check that everything is working nominally
\\ \hline
\caption{PCB Tests with Detailed Descriptions}
\label{tab:test33-result-PCB-Tests}
\end{longtable}
\subsubsection{Test Results for Electrical Component Testing}
\label{sec:Test-Results-Electrical-Component-Testing}
\begin{itemize}
\item Test 1: The max output from the airflow sensor was not 10 V as advertised; it was instead 10.82 V. The resistors in the voltage divider will be changed accordingly to handle this and not damage the Arduino: there will be 240k Ohm on the upper part of the voltage divider and 100k Ohm on the bottom. This limits the max output to approximately 3.2 V, which is safe for the Arduino (the arithmetic is sketched after this list). The pressure sensor max output was approximately 9.54 V at ground level. The sensor is sensitive to the voltage divider; a larger divider resistance appears to lower the output compared to an open circuit. After testing with 330k, 100k, 33k and 3.3k Ohm resistors, it was seen that with 330k Ohm the output was measured to be 7 V over the whole bridge, while with the 3.3k Ohm voltage divider the output was measured to be 9.52 V, which is 0.2\% below open circuit and smaller than the measurement error of the sensor itself. Furthermore, the output had a lot of very small spikes which gave a voltage ripple of 300--400 mV; adding a 0.47 uF capacitance in parallel between the output and ground decreased the ripple to 60--70 mV. Larger capacitors did not lower the ripple further, while smaller capacitors increased it. The start-up spikes of the sensor also have to be rectified, so a 1 uF capacitor will be added to the sensor output on the wiring, since there is no space dedicated for this on the main PCB. Finally, large spikes of up to 30 V peak-to-peak were discovered at turn-on on the Vin of the pressure sensor, so a 100 uF electrolytic capacitor will be added in parallel to the 12 V system.
\item Test 2: Using the MOSFET as a grounding switch and supplying 3.2 V to the gate, the resulting drain-to-source voltage was below 0.3 V for components using 24 V or 28 V.
\item Test 3: 1k Ohm for 12 V and 3.9k Ohm for 24 V as pull-up resistors.
\item Test 4: The circuit worked as expected. The circuit pulled 127 mA at 24 V (3.048 W) for one manifold valve. The valves on the tubing gave the same results.
\item Test 5: Supplying 3.2V to the gate on the MOSFET started the pump as expected.
\item Test 6: The circuit functioned as expected. The indicator LEDs indicate each DCDC individually.
\item Test 7: All interface connections have been checked with good results using the continuity measurement on a multimeter.
\item Test 8: Trimming works; extra resistors had to be added in series with the potentiometer to reach the resistance needed for the required output voltage: 26 + 390k Ohm for 24 V and 15 + 32k Ohm for 12 V.
\item Test 9: The proposed way of grounding analog sensors works. However, if there is a faulty grounding somewhere else, the Arduino is at risk, since all the ground current then flows through the Arduino and might burn it.
\item Test 10: Using the heaters in parallel at 28.8 V worked fine as long as they had some material to dissipate the heat into; otherwise they overheat. This is true at 28 V as well, which is the specified max voltage, so supplying 28.8 V will not be a problem. The heaters did use a little extra power: at 28.62 V and depending on the temperature, the current draw was 0.36--0.37 A. This results in a power draw of 10.59 W, which is 0.59 W or 5.9\% more than expected.
\end{itemize}
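The arithmetic behind the divider in Test 1 and the heater power in Test 10 can be checked as follows; the values are taken from the list above.
\begin{verbatim}
# Minimal sketch of the Test 1 and Test 10 arithmetic.
v_in = 10.82                       # measured max airflow sensor output (V)
r_top, r_bottom = 240e3, 100e3     # divider resistors (Ohm)
v_out = v_in * r_bottom / (r_top + r_bottom)
print(round(v_out, 2), "V")        # ~3.18 V, below the Arduino's 3.3 V limit

v_heater, i_heater = 28.62, 0.37   # measured heater voltage (V) and current (A)
print(round(v_heater * i_heater, 2), "W")  # ~10.59 W, ~5.9% above the expected 10 W
\end{verbatim}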
\subsubsection{Test Results for PCB Testing}
\label{sec:Test-Results-PCB-Testing}
\begin{itemize}
\item Test 12: All connections were checked and it was discovered that some connections were missing and some were faulty. One MOSFET gate pin was not connected, the 28.8 V power that goes to the heaters was connected to the 24 V power, and the 24 V power was not connected to the DC-DC. The 24 V issue was solved by adding a separate wire connecting the 24 V power to one of the 24 V pins on the D-subs, since the D-subs were connected together. The 28.8 V issue was solved by removing the cable from the D-sub and adding a separate connector that goes to the 28.8 V power. The MOSFET gate problem was solved by adding a wire from the correct Arduino pin to the corresponding MOSFET gate pin.
\item Test 13: After the connections were checked and the problems resolved, the board was turned on. The functionality was checked and everything worked nominally.
\end{itemize}
\subsubsection{Test 27: Shock test}
The entire pneumatic and electrical system was mounted in the AAC box with the walls and Styrofoam attached. It was then dropped three times from a height of approximately one meter. Nothing came loose or was damaged in this drop test, and all electronics were verified to still work.
\subsubsection{Test 9: Vibration test}
The entire experiment was placed in the tailgate of a car while the test was carried out over 18 km of rough terrain. An emergency braking manoeuvre was also performed during the test. The experiment's functionality and structural integrity were capable of handling the vibrations and the stopping force.
No damages or issues were detected after this test.
\subsubsection{Test 25: Structure test}
A team member was placed on top of each box's structure, see Figure \ref{fig:structure-test}. Both the CAC and the AAC box were able to fully support the member's weight without showing any instability or deflection. No damages or issues were detected after this test.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\linewidth]{appendix/img/test-results/structure-test.jpeg}
\caption{Structure Test for AAC Box}
\label{fig:structure-test}
\end{figure}
\subsubsection{Test 12: Removal test}
It took a non-team member 6 min and 25 sec to perform the removal of the CAC box based on the given instructions. One problem that occurred during this test was that the person had difficulty distinguishing the CAC from the AAC box; to resolve this, the boxes will now have clear labels on them. The set-up for this test can be seen in Figure \ref{fig:removal-test}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\linewidth]{appendix/img/test-results/fully-attached.jpg}
\caption{Picture Showing how the Experiment was Mounted on the Bench.}
\label{fig:removal-test}
\end{figure}
There was additional confusion during the test because the experiment was not actually fixed to the gondola, so it had to be explained what was meant by the gondola attachment points. The items to be unscrewed were also not yet clearly marked, which added further time to the test. The removal time is therefore expected to be lower for the recovery team, and the team finds this time to be satisfactory.
\subsubsection{Test 2: Data collection test}
The full software was run in auto mode to check that everything operated as expected over a full test flight; at the end of the simulated flight the experiment was to shut down automatically. This was tested both on the bench and in the vacuum chamber. Data collection was also monitored during the vacuum chamber tests (see Section \ref{lowpressure}), the bench test (see Appendix \ref{benchtest}), and the thermal test (see Appendix \ref{thermaltestresults}). It was found that the physical samples were being collected properly and that all the sensors returned good data.
\subsubsection{Test 7: Bench test}\label{benchtest}
The experiment was run for 5 hours simulating 1 hour on ground, 1.5 hours in ascent, 2 hours in float and 0.5 hours in descent. The experiment was found to be operating as intended at all points. Additionally, the temperature sensors have been tested at ambient conditions for over 6 hours. No problems were found with the temperature sensors on the bench.
\subsubsection{Test 16: Sampling test}
The system was tested while already mounted, as this test was pushed back due to the late arrival of the static pressure sensor.
The Arduino successfully controlled all valves and the pump, and through the static pressure and airflow sensor readings alone it could be confirmed whether a bag was sampling.
%\input{appendix/appendix-cleaning-checklists.tex}
%\input{appendix/appendix-o.tex}
%\input{appendix/appendix-p.tex}
\end{appendices}
% \glossarystyle{list}
% \printglossaries
% \printglossary[title=Abbreviations,toctitle=Acronyms]
%\addcontentsline{toc}{section}{Bibliography}
%\bibliography{refs}
%\bibliographystyle{plain}
\restoregeometry
\end{document}
Yampa is a Haskell library for functional reactive programming. Functional reactive programming is a high-level declarative style of programming for systems which must respond to continuous streams of input, which are often time dependent, without undue delay. Examples of such systems include video games, robots, animations and simulations. Haskell, with its lazy evaluation and separation of pure and impure code, doesn't make it obvious how to work in these kinds of domains without adopting a procedural style, so Yampa was developed to provide a more idiomatic approach.
\section{How does Yampa operate?}
Functional Reactive Programming is about processing signals. At its core, Yampa takes a varying input signal (in applications this might be, for example, the temperature from a temperature sensor), and processes it in some way, providing a corresponding varying output signal on the other side (continuing our imaginary example, this might include information on whether a heater should be on or not). The upper part of Figure \ref{fig:overview} illustrates this idea.
So Yampa is concerned with building objects which can take a continuously varying input and provide a corresponding continuously varying output. If we refer to values which continuously vary as \emph{signals}, then Yampa is a library concerned with building and using \emph{signal functions}.
The bulk of Yampa is concerned with building signal functions. However, it is useful to see how a signal function is actually used in real world Haskell code to process signals. The tool that integrates signal functions into normal Haskell code is called \hask{reactimate}. \hask{reactimate} is a sample-process-output loop. The bottom part of Figure \ref{fig:overview} illustrates an idealised loop. \hask{reactimate} has three important inputs.
First, \hask{reactimate} has an input of type \hask{IO a}, whose role is to take a sample of a signal which takes values of type \hask{a}. It might get the position of a mouse cursor, determine whether a key on the keyboard has been pressed, or get the temperature from a temperature sensor. Secondly, \hask{reactimate} needs a signal function, which is described using Yampa code. After collecting its input sample, of type \hask{a}, \hask{reactimate} processes it with the signal function and obtains an output sample, of type \hask{b}, say. \hask{reactimate} then has an output action, which knows what to do with this value of type \hask{b} in the outside world. For example, our output sample might be a set of coordinates to position an image on the screen, and our output function will take these coordinates and render the image there. The output \hask{IO} action comes back with a value of type \hask{Bool}, which encodes whether reactimate should continue looping or stop. So the input to \hask{reactimate} which describes how our signal should be output to the world is of type \hask{b -> IO Bool}.
Clearly, \hask{reactimate} should be capable of input and output, and so it should be a function in the \hask{IO} monad, and indeed, \hask{reactimate} returns a value of type \hask{IO ()}. In addition, \hask{reactimate} should receive some initialisation information, again of type \hask{IO a}.
\begin{figure}[h]
\centering
\includegraphics[height=300pt]{Diagrams/overview.png}
\caption{Top: a signal function transforms a varying input signal into a corresponding output signal. Bottom: an idealised \hask{reactimate} sample-process-output loop.}
\label{fig:overview}
\end{figure}
So \hask{reactimate} continuously samples the input, processes it with a signal function, and performs some output, until our output function says to stop. The above captures the essence of what reactimate does, but in reality the type of reactimate is a little more opaque:
\begin{lstlisting}
reactimate :: IO a
-> (Bool -> IO (DTime, Maybe a))
-> (Bool -> b -> IO Bool)
-> SF a b
-> IO ()
\end{lstlisting}
The first \hask{IO a} here is the initialisation input, and the value of type \hask{a} is the value of the first sample. The second input, of type \hask{Bool -> IO (DTime, Maybe a)}, is our input function. In the definition of \hask{reactimate} the input value of \hask{Bool} is unused, so really one just needs to specify a value of type \hask{IO (DTime, Maybe a)}. In fact, to make this simpler, let's define
\begin{lstlisting}
sInput :: IO (DTime, Maybe a) -> Bool -> IO (DTime, Maybe a)
sInput inp _ = inp
\end{lstlisting}
\noindent to convert a value of type \hask{IO (DTime, Maybe a)} into a value of type \hask{Bool -> IO (DTime, Maybe a)}. Now let's examine the values wrapped in the \hask{IO} type. First \hask{DTime}: \hask{reactimate} doesn't have a built-in time tracking system, so on each sample of the input signal, one is required to input the elapsed time since the last sample was taken. \hask{DTime} is a synonym for \hask{Double} used to represent this. Later, we define our own reactimate which should work on any POSIX system and hides this time tracking. Presumably it is kept visible for systems with less uniform or custom time tracking requirements. The second value here, of type \hask{Maybe a}, is the input sample, wrapped in \hask{Maybe} since there may be no input signal, or our sampling may fail. If a value of \hask{Nothing} is fed in, then the value from the previous sample is used.
The third input to \hask{reactimate} specifies how to deal with output. Again, the first \hask{Bool} in \hask{(Bool -> b -> IO Bool)} is unused, so let's define
\begin{lstlisting}
sOutput :: (b -> IO Bool) -> Bool -> b -> IO Bool
sOutput out _ = out
\end{lstlisting}
\noindent to wrap a value of type \hask{b -> IO Bool} in the type required for \hask{reactimate}. The value of type \hask{b -> IO Bool} works as described above, where a value of \hask{True} from the output function indicates to \hask{reactimate} that it should stop. The fourth value of type \hask{SF a b} is the signal function, processing a signal taking values of type \hask{a} into a signal taking values of type \hask{b}. Yampa is concerned with building these signal functions. This will be the subject of (most of) the remainder of this guide.
Using our simplified input and output wrappers, we define a simplified version of \hask{reactimate}:
\begin{lstlisting}
sReactimate :: IO a -> IO (DTime, Maybe a) -> (b -> IO Bool) -> SF a b -> IO ()
sReactimate init inp out sigFun = reactimate init (sInput inp) (sOutput out) sigFun
\end{lstlisting}
\noindent which looks a little clearer, and more like what we discussed above. In fact, we will define a function \yampaMain
\begin{lstlisting}
yampaMain :: IO a -> IO (Maybe a) -> (b -> IO Bool) -> SF a b -> IO ()
\end{lstlisting}
\noindent which also deals with the timing in POSIX compatible systems. We write this in a module \hask{YampaUtils.hs} which we will use in the rest of this guide. Listing \ref{lst:yampaUtils} gives the complete contents of the module. Note \yampaMain is essentially the same as \hask{sReactimate} but wrapped in a time tracking system.
\lstinputlisting[caption={YampaUtils.hs}, frame=single, label=lst:yampaUtils]{./src/YampaUtils.hs}
\section{The structure of a Yampa program}
One can think of Yampa programs as programs of the form illustrated in Listing \ref{lst:generalForm}. To write a Yampa program, we need to specify an initialization, input and an output function, and also construct a signal function to do the required transformations. We then feed all this in to \yampaMain which does the processing we require.
\begin{lstlisting}[caption={General form of a Yampa program}, label={lst:generalForm}]
import FRP.Yampa as Y
import YampaUtils
init :: IO a
-- Do some initialisation
input :: IO (Maybe a)
-- Get some input
output :: b -> IO Bool
-- Do some output
sigFun :: SF a b
-- Do some signal transformations
main :: IO ()
main = yampaMain init input output sigFun
\end{lstlisting}
Of course, real programs will take many different forms, but the idealised form above illustrates what we need to build in order to have an executable program.
\section{Signals and signal functions}
Yampa is a library for building and using signal functions. We have deliberately avoided making precise what is meant by a signal for two reasons
One can think of a signal taking values of type \hask{a}, or more succinctly a signal of type \hask{a}, as a value of type \hask{a} with a context. Indeed, one can really think of it as being a value of type \hask{IO a} (see for instance \hask{getPOSIXTime} and compare it with the output from the \hask{time} signal function later). Like any other value of type \hask{IO a}, extracting values at different times can yield different results.
\chapter{An application to sequences}
\label{ch:time_series}
Recall that the optimisation problem is stated as
\[
P^*_{opt} = \argmin_{P \in \mathcal{P}^*} \sum_{T_i \in \mathcal{T}} \epsilon^*(s(T_i), O^*(P, T_i)),
\]
where the objects that need to be defined are the input and solution formats $[D]$ and $[S]$, the problem set $\mathcal{P}^*$, the test data $\mathcal{T}$, the error function $\epsilon^*$, and the oracle $O$. The framework presented in the previous chapter makes the assumption that all relevant problems $P$ can be decomposed into a set of functions $P = (T_D, F_E, C, F_R, M, \Sigma, T_S)$.
In this chapter, the objects mentioned above are defined for the two most common tasks in anomaly detection in sequences. Furthermore, suitable choices of and restrictions on $T_D$, $F_E$, $C$, $F_R$, $M$, $\Sigma$ and $T_S$ for anomaly detection are discussed in depth.
From here on, a \emph{sequence} will be taken to mean any list ($[X]$ for some set $X$) for which the list order reflects the natural ordering of the elements. A \emph{time series} is defined to be any sequence in $[(\mathbb{R}^+, X)]$, where the elements are ordered such that their first component (the timestamp) is increasing.
In Section~\ref{sect:tasks}, the two anomaly detection tasks we will study are presented, along with corresponding oracles.
The remaining sections consist of a discussion of how the components $T_D$, $F_E$, $C$, $F_R$, $M$, $\Sigma$, and $T_S$ relate to anomaly detection in sequences, and which component choices are appropriate.
\section{Tasks}
\label{sect:tasks}
Two main tasks can be distinguished in anomaly detection in sequences: \emph{finding anomalous sequences in a set of sequences}, and \emph{finding anomalous subsequences in a long sequence}~\cite{chandola}. The former task can be seen as the detection of point anomalies in an unstructured set of sequences, while the latter corresponds to finding contextual anomalies in a totally ordered set.
\subsection{Finding anomalous sequences}
The task of finding anomalous sequences in a set of sequences involves taking a list of similar sequences and producing a list of corresponding anomaly scores. The input elements are not related, i.e.\ the input data is unstructured. Thus, the task can be seen as one of detection of point anomalies in a collection of sequences. This task has been the subject of intense research. Thorough reviews are found in~\cite{chandola2} and~\cite{chandola3}.
For an example of this task, see Figure \ref{fig:example1}. Here, the dataset consists of a set of sequences of user commands extracted from a system log, and the task corresponds to detecting individual anomalous sequences in this dataset. While sequences $\mathbf{S_1}$ through $\mathbf{S_4}$ originate from ordinary user sessions, sequence $\mathbf{S_5}$ could indicate an attack. Accurately detecting such anomalous sequences is an important problem in computer security.
The input data has the format $[D]$, where $D$ is itself a set of sequences, i.e.\ $D = [X]$ for some set $X$. In the example above $X$ is a set of commands, but it could just as well be $\mathbb{R}$ or any other set. The solution format is $[S]$, where either $S = \mathbb{R}^+$ or $S = \{0, 1\}$ depending on the application requirements.
Since the input data is unstructured, any transformation $T_D$ must produce lists with the same length as it is given. Correspondingly, we can let $S' = S$. This renders $T_S$ redundant, so it can be ignored.
Since the task deals with unstructured data, the components $F_E$, $F_R$, and $\Sigma$ can be ignored. An oracle can then be formulated as:
\begin{algorithmic}
\Require{Some $X \in [D]$.}
\State{$X' \gets T_D(X)$}
\State{$A \gets []$ \Comment{initialize anomaly scores to empty list}}
\For{$E \in X'$} \Comment{iterate over elements}
\State{$append(A, M(E, C(X', E)))$ \Comment{compute and store anomaly scores}}
\EndFor{}
\State{\Return{$A$ \Comment{aggregate scores to form anomaly vector}}}
\end{algorithmic}
As per the discussion in the previous chapter, $C$ can only be one of two functions, corresponding to unsupervised and semi-supervised anomaly detection, respectively.
\subsection{Finding anomalous subsequences}
The task of finding anomalous subsequences of long sequences corresponds to finding anomalous contiguous sublists of the input data $[D]$. In contrast to the task of finding anomalous sequences, the input data is structured, and the sequence ordering naturally gives rise to concepts of proximity and context. This task is relatively poorly understood, but is highly relevant in many application domains. As a consequence, automated methods can be expected to be very useful for this task. Essentially any monitoring or diagnosis application could benefit from a better understanding of the task.
For examples of sequences to which this task might be applied, see Figures~\ref{fig:example2} and~\ref{fig:anomaly_types}. These are all real-valued sequences which contain anomalous items or subsequences.
As with the previous task, either $S = \mathbb{R}^+$ or $S = \{0, 1\}$ depending on the application. However, it here makes sense to allow $T_D$ to compress the data (i.e.\ return a shorter list than it is given). Correspondingly, a corresponding $T_S$ is required in order to transform the preliminary solution (in $[S']$) to a list of anomaly scores with the same length as the input data.
Since all components must be used for this task, the oracle is identical to the one presented in Section~\ref{sect:oracle}.
\section{The input data format \texorpdfstring{$\mathcal{D}$}{D}}
Categorical, discrete, and real-valued sequences have all been extensively studied. Categorical sequences arise naturally in applications such as bioinformatics and intrusion detection. Discrete sequences are typically encountered when monitoring the frequency of events over time. Finally, real-valued sequences are encountered in any application that involves measuring physical phenomena (such as audio, video and other sensor-based applications).
While the origins and nature of the data obviously differ heavily between these categories of sequences, the choice of components (with the exception of the transformations and anomaly measures) can be made independently of the type of data. For this reason, we can disregard the data format in most of the following discussion.
\section{The transformations \texorpdfstring{$T_D$}{TD} and \texorpdfstring{$T_S$}{TS}}
\begin{figure}[htb]
\begin{center}
\leavevmode
\includegraphics[width=\textwidth]{resources/types_of_data}
\end{center}
\caption{\small{Illustration of numerosity and dimensionality reduction in a conversion of a real-valued sequence to a symbolic sequence. The top frame shows a real-valued sequence sampled from a random walk. The second frame shows the resulting series after a (piecewise constant) dimensionality reduction has been performed. In the third frame, the series from the second frame has been numerosity-reduced through rounding. The bottom frame shows how a conversion to a symbolic sequence might work; the elements from the third series is mapped to the set $\{a,b,c,d,e,f\}$.}}
\label{fig:types_of_data}
\end{figure}
Transformations are commonly used to facilitate the analysis of sequences, and a large number of different such transformations are found in the literature.
Feature extraction is commonly performed to reduce the dimensionality of sequences, and especially of real-valued ones. Essentially, the task of feature extraction in real-valued sequences corresponds to, given a sequence $s = [s_1, s_2, \dots, s_n]$, finding a collection of basis functions $[\phi_1, \phi_2, \dots, \phi_m]$ where $m < n$ that $s$ can be projected onto, such that $s$ can be recovered with little error. Many different methods for obtaining such bases have been proposed, including the discrete Fourier transform~\cite{faloutsos1}, discrete wavelet transforms~\cite{pong}~\cite{fu}, various piecewise linear and piecewise constant functions~\cite{keogh3}~\cite{geurts}, and singular value decomposition~\cite{keogh3}. An overview of different representations is provided in~\cite{fabian}.
Arguably the simplest of these bases are piecewise constant functions $[\phi_1, \phi_2, \dots, \phi_n]$:
\[
\phi_i(t) = \left\{
\begin{array}{l l}
1 & \quad \text{ if } \tau_i < t < \tau_{i+1} \\
0 & \quad \text{ otherwise.} \\
\end{array} \right.
\]
where $(\tau_1, \tau_2, \dots \tau_n)$ is a partition of $[t_1, t_n]$.
Different piecewise constant representations have been proposed, corresponding to different partitions. The simplest of these, corresponding to a partition with constant $\tau_{i+1} - \tau_i$, is proposed in~\cite{keogh4} and~\cite{faloutsos2} and is usually referred to as \emph{piecewise aggregate approximation (PAA)}. As shown in~\cite{keogh5},~\cite{keogh3} and~\cite{faloutsos2}, PAA rivals the more sophisticated representations listed above.
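As an illustration, a minimal PAA implementation with a uniform partition (assuming the sequence length is divisible by the number of segments) could look as follows:
\begin{verbatim}
# Minimal sketch of piecewise aggregate approximation (PAA).
import numpy as np

def paa(x, n_segments):
    """Reduce sequence x to n_segments values by averaging equal-length pieces."""
    x = np.asarray(x, dtype=float)
    return x.reshape(n_segments, -1).mean(axis=1)

print(paa([1, 2, 3, 4, 5, 6, 7, 8], 4))  # [1.5 3.5 5.5 7.5]
\end{verbatim}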
Numerosity reduction is also commonly utilised in the analysis of real-valued sequences. One scheme that combines numerosity and dimensionality reduction in order to turn real-valued sequences into a categorical representation is \emph{symbolic aggregate approximation} (SAX)~\cite{sax}. This representation has been used to apply categorical anomaly measures to real-valued data with good results. A simplified variant of SAX is demonstrated in figure~\ref{fig:types_of_data}.
In general, real-valued sequences are much easier to deal with than time series. For this reason, irregular time series are commonly transformed to form regular time series, which can be treated as sequences. Formally, such transformations map a sequence in $[(\mathbb{R}^+, X)]$ to a sequence in $[X]$.
The simplest such transformation involves simply dropping the timestamp component of each item. This is useful when the order of items is important, but how far apart they are in time is not. This is often the case when dealing with categorical sequences. An example of such an application is shown in figure~\ref{fig:example1}.
Another common class of transformations involves estimating the (weighted) frequency of events. This is useful in many scenarios, especially in applications involving machine-generated data. Several methods can be used to generate sequences appropriate for this task from time series, such as histograms, sliding averages, etc.
% Given a time series $[(t_1, x_1), (t_2, x_2), \dots, (t_n, x_n)]$ in $[(\mathbb{R}^+, X)]$, with associated weights $w_i$ and some envelope function $e(s, t): X \times \mathbb{R} \rightarrow X$, as well as a spacing and offset $\Delta, t_0 \in \mathbb{R}^+$, a sequence $[(s_{1}^{'}, \tau_1), (s_{2}^{'}, \tau_2), \dots]$ is constructed where $\tau_i = t_0 + \Delta \cdot i$ and $s_{i}^{'} = \sum_{(s_j, t_j) \in S} s_i w_i e(t_j - \tau_i)$.
% The $\tau_i$ can then be discarded and the time series treated as a sequence\footnote{Note that this method requires multiplication and addition to be defined for $X$, and is thus not applicable to most symbolic/categorical data. Also note that $\mathbf{s}'$ is really just a sequence of samples of the convolution $f_S \ast e$ where $f_S = \sum_i \delta(t_i) s_i w_i$.}. Histograms are recovered if $e(s, t) = 1$ when $|t| < \Delta/2$ and $e(x, t) = 0$ otherwise.
% How this aggregation is performed has a large and often poorly understood impact on the resulting sequence. As an example, when constructing histograms, the bin width and offset have implications for the speed and accuracy of the analysis. A small bin width leads to both small features and noise being more pronounced, while a large bin width might obscure smaller features. Similarly, the offset can greatly affect the appearance of the histograms, especially if the bin width is large. There is no `optimal' way to select these parameters, and various rules of thumb are typically used~\cite{density_estimation}.
% Furthermore, noisy data is often resampled to form regular time series. In this case, any of a number of resampling methods from the digital signal processing literature~\cite{TODO} may be employed.
% One commonly used transformation for real-valued data is the Z-normalization transform, which modifies a sequence to exhibit zero empirical mean and unit variance.\footnote{It has been argued that comparing time series is meaningless unless the Z-normalization transform is used~\cite{keogh5}. However, this is doubtful, as the transform masks sequences that are anomalous because they are displaced or scaled relative to other sequences.}
Transformations that transform the data into some alternative domain can also be useful. For example, transformations based on the \emph{discrete Fourier transform} (DFT) and \emph{discrete wavelet transform} (DWT)~\cite{fu} have shown promise. The DFT is parameter-free, while the DWT can be said to be parametrised due to the variety of possible wavelet transforms.
\section{The filters \texorpdfstring{$F_E$}{FE} and \texorpdfstring{$F_R$}{FR}}
\label{sect:filters}
As was previously mentioned, filters are really only interesting for the task of finding anomalous subsequences. Here, the role of the filter is to map a sequence in $[D']$ to a list of candidate anomalies (subsequences of the input sequence).
By far the most frequently used filters are \emph{sliding window} filters. These map a sequence $X = [x_1, x_2, \dots, x_n]$ to
\[
F_E(X) = [[x_1, x_2, \dots, x_w], [x_{s + 1}, x_{s + 2}, \dots, x_{s + w}], \dots, [x_{n - w}, x_{n - w + 1}, \dots, x_n]],
\]
where $w$ and $s$ are arbitrary integers (typically $s \leq w$)\footnote{We here assume that $s | n - w$. Otherwise, the last element above might look a bit different.}.
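A minimal implementation of such a sliding window filter could, for example, look as follows:
\begin{verbatim}
# Minimal sketch of the sliding-window filter F_E with window length w and step s.
def sliding_windows(x, w, s):
    return [x[i:i + w] for i in range(0, len(x) - w + 1, s)]

print(sliding_windows([1, 2, 3, 4, 5, 6], w=4, s=2))  # [[1, 2, 3, 4], [3, 4, 5, 6]]
\end{verbatim}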
\section{The context function \texorpdfstring{$C$}{C}}
\label{sect:context}
The ordering present in sequences naturally gives rise to a few interesting contexts, which are now demonstrated for a sequence $s = [s_1, s_2, \dots, s_n]$ and a candidate anomaly $s' = [s_i, s_{i + 1}, \dots, s_j]$, where $1 \leq i \leq j \leq n$. It is here assumed that all candidate anomalies are contiguous. As mentioned in the previous chapter, contexts can be used to generalise the concept of training data. Semi-supervised anomaly detection corresponds to the \emph{semi-supervised context} $C(s, s') = T$, where $T$ is some fixed set of training data.
Likewise, traditional unsupervised anomaly detection for subsequences can be formulated using the \emph{trivial context} $C(s, s') = [[s_1, s_2, \dots, s_{i - 1}], [s_{j + 1}, s_{j + 2}, \dots, s_n]]$. This corresponds to finding either point anomalies or collective anomalies in a sequence.
Another interesting context is the \emph{novelty context} $C(s, s') = [[s_1, s_2, \dots, s_{i - 1}]]$. This context captures the task of novelty detection in sequences, which is especially interesting in monitoring applications.
Finally, a family of \emph{local contexts}
\[
C(s, s') = [[s_{\max(1, i - a)}, s_{\max(2, i - a + 1)}, \dots, s_{i-1}], [s_{j+1}, s_{j+2}, \ldots, s_{\min(n, j+b)}]]
\]
may be defined for $a, b \in \mathbb{N}$, in order to handle anomalies such as the one in the last sequence of figure~\ref{fig:anomaly_types}.
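For concreteness, these contexts can be sketched as follows (using 0-based indexing, unlike the 1-based notation above):
\begin{verbatim}
# Minimal sketch of the trivial, novelty and local contexts for a contiguous
# candidate s[i:j+1].
def trivial_context(s, i, j):
    return [s[:i], s[j + 1:]]

def novelty_context(s, i, j):
    return [s[:i]]

def local_context(s, i, j, a, b):
    return [s[max(0, i - a):i], s[j + 1:min(len(s), j + 1 + b)]]

s = list(range(10))
print(trivial_context(s, 4, 5))          # [[0, 1, 2, 3], [6, 7, 8, 9]]
print(local_context(s, 4, 5, a=2, b=2))  # [[2, 3], [6, 7]]
\end{verbatim}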
\section{The anomaly measure \texorpdfstring{$M$}{M}}
\label{sect:anomeasure}
As was mentioned previously, an anomaly measure is a function $M: ([D'], [[D']]) \rightarrow \mathbb{R}^+$ that takes a candidate anomaly $c \in [D']$ and a reference set $R \in [[D']]$, and produces an anomaly score based on how anomalous $c$ is with regard to $R$. A large number of such measures have been used in the literature, and a thorough discussion of these is not possible within the scope of this report. Instead, we will limit our consideration to distance-based anomaly measures, since these are flexible and have been shown to perform well in general for sequences~\cite{chandola3}.
Distance-based anomaly measures operate by, given some distance measure $\delta: [D'] \times [D'] \rightarrow \mathbb{R}^+$, somehow aggregating the set $\{\delta(c, r) | r \in R\}$ into an anomaly score in $\mathbb{R}^+$. An especially interesting class of distance-based anomaly measures is that of k-nearest-neighbour-based measures. These assign as anomaly score the mean of the $k$ smallest values of $\{\delta(c, r) | r \in R\}$, i.e.\ the mean distance from the candidate to its $k$ nearest references.
Possible interesting choices of $\delta$ for real-valued data include the \emph{Euclidean distance} or the more general \emph{Minkowski distance}; measures focused on time series, such as \emph{dynamic time warping}~\cite{dtw}, \emph{autocorrelation measures}~\cite{autocorrelation}, or the \emph{Linear Predictive Coding cepstrum}~\cite{cepstrum}.
There are also several other measures developed for sequences of categorical data, such as the \emph{compression-based dissimilarity measure}~\cite{keogh2}. Often, coupling a distance measure with a data transformation step (in order to, for instance, apply a distance measure defined on categorical data to a real-valued sequence) can yield good results.
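As an illustrative sketch, a k-nearest-neighbour anomaly measure built on the Euclidean distance between equal-length windows could be implemented as follows:
\begin{verbatim}
# Minimal sketch of a k-nearest-neighbour anomaly measure.
import numpy as np

def knn_anomaly_score(candidate, references, k=3):
    """Mean distance from the candidate to its k nearest references."""
    c = np.asarray(candidate, dtype=float)
    dists = sorted(np.linalg.norm(c - np.asarray(r, dtype=float)) for r in references)
    return float(np.mean(dists[:k]))

refs = [[0, 1, 2], [1, 2, 3], [0, 2, 2], [5, 5, 5]]
print(knn_anomaly_score([0, 1, 3], refs, k=2))
\end{verbatim}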
\section{The aggregation function \texorpdfstring{$\Sigma$}{Sigma}}
\label{sect:aggregation_function_2}
Examples of anomaly detection problems which involve aggregation are hard to find in the literature. For this reason, suggesting appropriate choices of $\Sigma$ is difficult. A few choices which are likely to produce good results are $\Sigma$ on the form suggested in Section~\ref{sect:aggregation_function}, with $\sigma$ that produce either the \emph{maximum}, \emph{minimum}, \emph{median}, or \emph{mean} of its input values. The resulting $\Sigma$ will henceforth be referred to as $\Sigma_{max}$, $\Sigma_{min}$, $\Sigma_{median}$ and $\Sigma_{mean}$, respectively.
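As an illustration of one possible form of $\Sigma$ (a simplified variant of the construction referred to in Section~\ref{sect:aggregation_function}, assumed here for concreteness), per-window anomaly scores can be mapped back to per-element scores and combined with $\sigma$:
\begin{verbatim}
# Minimal sketch: aggregate per-window scores into per-element scores using sigma.
import numpy as np

def aggregate(window_scores, window_starts, w, n, sigma=np.mean):
    per_element = [[] for _ in range(n)]
    for score, start in zip(window_scores, window_starts):
        for t in range(start, start + w):
            per_element[t].append(score)
    return [float(sigma(s)) if s else 0.0 for s in per_element]

# Two overlapping windows of length 4 over a sequence of length 6:
print(aggregate([1.0, 3.0], window_starts=[0, 2], w=4, n=6))
# [1.0, 1.0, 2.0, 2.0, 3.0, 3.0] with sigma = mean; use np.max, np.min or
# np.median for the other aggregation functions named above.
\end{verbatim}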
\documentclass[12pt]{article}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the preamble of the .tex file and is the place where you should specify which packages to include and what settings you want for particular commands.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{adjustbox, amsmath, amssymb, amsthm, blindtext, bm, bbm, dblfloatfix, esint, fancyhdr, float, graphicx, letltxmacro, marginnote, mathtools, subcaption, xcolor, titlesec, esint, mdframed}
\usepackage[margin=1.0in]{geometry}
\usepackage{chngcntr}
\usepackage[space]{grffile}
\usepackage[labelfont=bf]{caption}
\usepackage[shortlabels]{enumitem}
\usepackage{setspace}
% the listings package can be used to include direct code snippets
\usepackage{listings}
\usepackage{color}
\onehalfspacing
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\usepackage[colorlinks = true,
linkcolor = blue,
urlcolor = blue,
citecolor = blue,
anchorcolor = blue]{hyperref}
% the following lines take care of the headers and footers for each page
\makeatletter
\newcommand{\rightorleftmark}{%
\begingroup\protected@edef\x{\rightmark}%
\ifx\x\@empty
\endgroup\nouppercase{\leftmark}
\else
\endgroup\rightmark
\fi}
\makeatother
\makeatletter
\begingroup
\catcode`\_=\active
\protected\gdef_{\@ifnextchar|\subtextup\sb}
\endgroup
\def\subtextup|#1|{\sb{\textup{#1}}}
\AtBeginDocument{\catcode`\_=12 \mathcode`\_=32768}
\makeatother
\pagestyle{fancyplain}
\fancyhf{}
\cfoot{ \fancyplain{}{\thepage} }
\renewcommand{\footrulewidth}{0.4pt}
\renewcommand{\vec}[1]{\boldsymbol{\mathbf{#1}}}
\allowdisplaybreaks
\begin{document}
\title{\vspace{-10mm}Federated Machine Learning}
\author{Oh Joon Kwon}
\date{November 1, 2019}
\maketitle
\section{Introduction}
There has been a recent surge of interest in machine learning in both academia and industry. It has become hard to find areas where machine learning has not been applied, as its applications are no longer confined to particular disciplines, ranging from engineering to history. However, the push for better and more scalable machine learning models and algorithms often raises ethical concerns.
\vspace{3mm}
\noindent For instance, some implementations deployed without a full understanding of the dataset and model have perpetuated unfairness in socio-economic contexts \cite{nnorient}. Generative algorithms, such as Generative Adversarial Nets and GPT-2, have raised concerns about malicious parties spreading misinformation \cite{leak}. Moreover, the data collection practices used for training and validating models are often overlooked as necessary components of large-scale machine learning, even though many of them have been deemed to violate individual privacy.
\vspace{3mm}
\noindent With these problems in mind, researchers are now focusing on the balanced development of ethics and of technologies that could mitigate these troublesome aspects of machine learning. Recent papers suggest fundamental solutions based on homomorphic encryption protocols, which provide the most secure form of privacy-preserving machine learning but are computationally expensive and at times algebraically limiting \cite{ppml}. The most plausible solution to privacy concerns is Federated Machine Learning, a multi-party computation (MPC) framework that could guarantee individual privacy without compromising the accuracy of a generalized model.
\vspace{3mm}
\noindent This project aims to implement the recent privacy-preserving multi-party computation frameworks such as Federated Averaging and Split Neural Nets without networking components. As it is a relatively new field of machine learning, we also plan to investigate possible attacks on Federated Machine Learning schema using information leakage. Time permitting, we plan to implement these information leakage attacks in our demo.
\section{Background}
Federated Machine Learning proposes a solution to privacy-preserving large-scale machine learning by training models locally on an individual client device then updating the model parameters or finishing the rest of the model training on the central server.
\vspace{3mm}
\subsection*{Problem Statement.}
\noindent We first define the fundamental objective in machine learning, the so-called ``finite-sum problem'':
$$ \min_{\vec{\theta} \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n f_i(\vec{\theta}). $$
\noindent In the context of machine learning, we would like to minimize the loss of our predictions over examples, so we can take $f_i$ to be a loss function, $f_i(\vec{\theta}) = l(x_i, y_i; \vec{\theta})$, where $l$ is the loss of the prediction made on example $(x_i, y_i)$ with model parameters $\vec{\theta}$. Now consider the multi-party computation scheme with the data partitioned over $K$ clients. Let $S_k$ be the set of indices of data points on client $k \in \{1, 2, \dots, K \}$, with $n_k := |S_k|$. We can then restate the problem in the Federated Machine Learning context as
$$ \min_{\vec{\theta} \in \mathbb{R}^d} f(\vec{\theta}) = \sum_{k=1}^K \frac{n_k}{n} F_k(\vec{\theta}), \qquad \qquad F_k(\vec{\theta}) := \frac{1}{n_k} \sum_{i \in S_k} f_i(\vec{\theta}), \qquad \qquad n = \sum_{k=1}^K n_k.$$
\vspace{3mm}
\noindent Moreover, the optimization problem implicit in the federated learning scheme, often termed Federated Optimization, is based on the following assumptions \cite{fml}.
\begin{enumerate}
\item[(1)] \textbf{Non-IID}. The training data on a client is based on the usage by a user, which is often not representative of the population.
\item[(2)] \textbf{Unbalanced}. Some users can be heavy users while others are not.
\item[(3)] \textbf{Massively Distributed}. The number of clients participating in an optimization is much larger than the average number of examples per client. That is, $|C_t| \gg \sum_{k=1}^K n_k / K$
\item[(4)] \textbf{Limited Communication}. Oftentimes, clients are offline or on limited bandwidth/expensive connections.
\end{enumerate}
\vspace{3mm}
\subsection*{Algorithms.}
\noindent There are several proposed algorithms for solving the Federated Optimization problem. One of the most promising is \textbf{Federated Averaging}, which is already being used in the Google Keyboard app. The algorithm not only provides privacy by design, but also personalizes models to individual users, and it is claimed to be more efficient in terms of communication rounds. Federated Stochastic Gradient Descent (FedSGD) is a special case of Federated Averaging. The algorithm is given below \cite{fml}:
\begin{mdframed}
\textbf{Server Executes} :
\begin{enumerate}[\qquad]
\item \text{Initialize the parameters $\vec{\theta}_0$.}
\item \textbf{for }\text{each round $t=1, 2, \dots $} \textbf{do}
\item \text{\qquad Pick random sample of clients $C_t$ where $1 \leq |C_t| \leq K$.}
\item \qquad \textbf{for }\text{each client $k \in C_t$} \textbf{in parallel do}
\item \qquad \qquad $\vec{\theta}_{t+1}^k \leftarrow \text{ClientUpdate} (k, \vec{\theta}_t)$
\item \qquad $\vec{\theta}_{t+1} \leftarrow \sum\limits_{k=1}^K \dfrac{n_k}{n} \vec{\theta}_{t+1}^k$
\end{enumerate}
\textbf{Each Client Executes ClientUpdate}$(k, \vec{\theta})$:
\begin{enumerate}[\qquad]
\item \text{Batch data $S_k$ into collection $\mathcal{B}$.}
\item \textbf{for }\text{each local epoch $i=1, 2, \dots, E$} \textbf{do}
\item \qquad \textbf{for }\text{each batch $b \in \mathcal{B}$} \textbf{do}
\item \qquad \qquad $\vec{\theta} \leftarrow \vec{\theta} - \alpha \cdot \nabla F_k (\vec{\theta}; b)$
\item \text{Return $\vec{\theta}$ to server}
\end{enumerate}
\end{mdframed}
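\vspace{3mm}
\noindent As a rough illustration of how this algorithm could be simulated in a single process (as we plan to do in the demo), the following Python sketch runs Federated Averaging with a simple least-squares model. The model, learning rate and helper names are placeholder assumptions of ours, not part of the algorithm specification above.
\begin{lstlisting}[language=Python]
import numpy as np

def client_update(theta, X, y, epochs=1, lr=0.1, batch=32):
    # Local SGD on one client's shard, with a least-squares loss as a stand-in model.
    theta = theta.copy()
    for _ in range(epochs):
        for i in range(0, len(X), batch):
            Xb, yb = X[i:i + batch], y[i:i + batch]
            theta -= lr * Xb.T @ (Xb @ theta - yb) / len(Xb)
    return theta

def federated_averaging(shards, dim, rounds=10, clients_per_round=2):
    # shards[k] = (X_k, y_k) is the local dataset of virtual client k.
    theta = np.zeros(dim)
    for _ in range(rounds):
        picked = np.random.choice(len(shards), clients_per_round, replace=False)
        n = sum(len(shards[k][0]) for k in picked)
        updates = [(len(shards[k][0]), client_update(theta, *shards[k])) for k in picked]
        theta = sum(nk / n * th for nk, th in updates)  # weighted average of parameters
    return theta
\end{lstlisting}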
\noindent Another algorithm is \textbf{Split Neural Nets (SplitNN)}, where part of the training (up to a ``cut layer'') is done locally and the rest is done on the central server. For notation, let $\vec{A}_t$ be the client-side network weights with $\varphi$ as the random initialization distribution, $\vec{S}_t$ be the server-side network weights, and $\vec{W}_t$ be all network weights, $[\vec{A}_t, \vec{S}_t]$. Here, we will use $l$ to denote a general loss function to be minimized. The algorithm is claimed to achieve computation resource and bandwidth efficiency \cite{splitnn}. The vanilla configuration of the algorithm proceeds as follows \cite{nopeek}:
\begin{mdframed}
\textbf{Server Executes} :
\begin{enumerate}[\qquad]
\item \textbf{for }\text{each round $t=1, 2, \dots $} \textbf{do}
\item \qquad \textbf{for }\text{each client $k \in C_t$} \textbf{in parallel do}
\item \qquad \qquad $\vec{A}_t^k \leftarrow \text{ClientUpdate}(k,t)$
\item \qquad \qquad $\hat{\vec{Y}}_t^k \leftarrow f(f(b, \vec{A}_t^k), \vec{S}_t^k)$
\item \qquad \qquad $\vec{W}_t \leftarrow \vec{W}_t - \alpha \cdot \nabla F_k(\vec{W}_t; \hat{\vec{Y}}_t^k)$
\item \qquad \qquad \text{Send $\nabla F_k(\vec{A}_t^k; \hat{\vec{Y}}_t^k)$ to client $k$ for ClientBackprop$(k,t)$}
\end{enumerate}
\textbf{Each Client Executes ClientUpdate}$(k, t)$:
\begin{enumerate}[\qquad]
\item $\vec{A}_t^k = \varphi$
\item \textbf{for }\text{each batch $b \in \mathcal{B}$} \textbf{do}
\item \qquad \text{Concatenate logits $f(b, \vec{A}_t^k)$ to $\vec{A}_t^k$ (Batch $b$ includes labels)}
\item \text{Return $(f(b, \vec{A}_t^k), \vec{A}_t^k)$ to server}
\end{enumerate}
\textbf{Each Client Executes ClientBackprop}$(k,t)$:
\begin{enumerate}[\qquad]
\item \textbf{for }\text{each batch $b \in \mathcal{B}$} \textbf{do}
\item \qquad $\vec{A}_t^k \leftarrow \vec{A}_t^k - \alpha \cdot \nabla F_k(\vec{A}_t^k; \hat{\vec{Y}}_t^k)$
\end{enumerate}
\end{mdframed}
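\vspace{3mm}
\noindent To make the data flow of the split configuration concrete, the following is a minimal single-process sketch of one SplitNN training step written with PyTorch. The layer sizes, optimisers and variable names are illustrative assumptions of ours: the only values exchanged are the cut-layer activations (client to server) and the gradient at the cut layer (server to client).
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # layers up to the cut layer
server_net = nn.Sequential(nn.Linear(128, 10))                # remainder of the model
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))      # stand-in mini-batch

smashed = client_net(x)                        # client forward pass up to the cut layer
server_in = smashed.detach().requires_grad_()  # only these activations are "sent"

loss = loss_fn(server_net(server_in), y)       # server finishes the forward pass
server_opt.zero_grad()
loss.backward()                                # gradients for server weights and cut layer
server_opt.step()

client_opt.zero_grad()
smashed.backward(server_in.grad)               # cut-layer gradient "returned" to the client
client_opt.step()
\end{lstlisting}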
\section{Proposed Methodology}
We plan to replicate the proposed federated learning algorithms as a demo. Since the project is focused on the theoretical aspects of Federated Machine Learning, we will not be implementing the remote networking component of the algorithms. Instead, we will ``simulate'' the networking environment on a local machine by creating multiple functions representing individual clients and performing the server-side computation in the \texttt{__main__()} function.
\vspace{3mm}
\noindent We will investigate the communication efficiency by counting the number of communication rounds as each function is called, and we will time the computation done on each client and on the server to measure computational resource efficiency. Time and resources permitting, we would also like to investigate the scalability of each algorithm.
\vspace{3mm}
\noindent For the data, we will use the MNIST handwritten digits dataset, which provides a sufficient number of examples without resource-heavy features. To simulate the non-IID assumption, we will sort the dataset by label, split it in order into subsets (possibly of different sizes), and assign these to the virtual clients, as sketched below. We would like to see how biased individual client data affects the performance of the algorithm.
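\vspace{3mm}
\noindent A possible way to realise this split (with function and variable names chosen only for illustration) is the following: the examples are sorted by label, so each virtual client receives a contiguous, label-homogeneous slice.
\begin{lstlisting}[language=Python]
import numpy as np

def non_iid_split(X, y, client_sizes):
    # Sort by label, then hand out contiguous slices of possibly different sizes.
    order = np.argsort(y, kind="stable")
    X, y = X[order], y[order]
    shards, start = [], 0
    for size in client_sizes:
        shards.append((X[start:start + size], y[start:start + size]))
        start += size
    return shards
\end{lstlisting}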
\vspace{3mm}
\noindent Time permitting, we would also like to study possible attacks on distributed learning schemes. Recent studies are looking into attacks that take advantage of information leakage inherent in the algorithm or implementation design \cite{reducingleak}. We would like to see how malicious parties can corrupt training data with fake (generated) datasets and extract private information in distributed learning schemes where some information is leaked by design.
% bibliography style is set to SIAM here. you can experiment with other styles
\bibliographystyle{siam}
% this line includes all references cited in the document.
\bibliography{references.bib}
\end{document}
% !TEX root = thesis.tex
\chapter{Conclusions and future work}
\label{ch:conclusions}
% Introduction / about topic?
There are a great number of possible representations for spatial information, each of which with its own benefits and drawbacks and only some of which have been discussed in this thesis.
From an engineering point of view, choosing an appropriate representation for a given system or task is a fundamental issue, as this choice cascades down to almost every engineering decision and indirectly affects a GIS program's every functionality.
Among other aspects, it affects the type of objects that can be efficiently stored and the operations that can be easily performed on them, and it also has important computational consequences in terms of both memory and processing time.
The 2D representations that dominate the GIS world work well for 2D datasets and problems that are essentially two-dimensional, but many problems arise when they are adapted to model 3D, spatiotemporal and multi-scale geographic information.
Most research in GIS is devoted to improving these adaptations, as well as to developing new methods that build on them to solve problems both old and new.
This thesis pushes GIS research in a different direction, starting from the assumption that many issues in GIS can probably be better solved by using a new, fundamentally different modelling approach---modelling both spatial and non-spatial characteristics as dimensions in the geometric sense, thus using higher-dimensional representations to create, manipulate and visualise geographic information.
Accordingly, this thesis' main research objective was to \textbf{realise the fundamental aspects of a higher-dimensional Geographic Information System}, therefore focusing on the development of higher-dimensional representations and methods for GIS.\@
This concluding chapter starts with a short outlook on higher-dimensional GIS in \refse{se:isitworthit}, describing concisely
% some of the more enticing possibilities that can be opened up by higher-dimensional models in a GIS, and
when and where it makes sense to use higher-dimensional models.
Afterwards, \refse{se:conclusions} describes in detail the lessons learned by pursuing this thesis' research objective.
Finally, \refse{se:contributions} lists the main contributions of this thesis, and \refse{se:futurework} discusses the topics that I think would be most useful for future research on this topic.
\section{An outlook on higher-dimensional GIS}
\label{se:isitworthit}
As this thesis has shown, there are many potential advantages to the use of higher-dimensional modelling in GIS.\@
This approach provides \emph{a simple and consistent way to store geometry, attributes and topological relationships between objects of any dimension}.
This generic technique can be easily extended to handle other non-spatial characteristics, enabling better data management and more powerful operations.
At the same time, higher-dimensional representations are undoubtedly memory-intensive and often hard to work with, both due to their level of abstraction and the unintuitiveness of working with dimensions higher than three.
Admittedly, \emph{current GIS use cases do not easily justify the higher-dimensional approach}.
% Envisioning things
However, a far stronger case for higher-dimensional representations emerges when considering what sort of \emph{new tools could be developed using this type of representations}, both in GIS and in related fields.
For instance, 3D modelling software could consider time-varying topology, much as \citet{Dalstein15} do for 2D vector drawings.
4D topology (as 3D+time) could then be used to automatically generate smooth transitions for animations.
Similarly, CAD tools could use 4D topology to model buildings at different scales and timeframes automatically, keeping track of all relationships between objects and providing immediate user input during interactive editing.
For example, a program could display how different changes affect construction time, the size of the model at predefined LODs and when certain safety constraints were violated.
At a lower-level, 4D geometric modelling operators could be developed to operate directly on 4D primitives, such as splitting and merging 4-cells.
These could also be used intuitively in an interactive environment, allowing for instance to change a building's configuration by adding and removing walls while always ensuring that the representation remains a valid 4D space partition.
Considering that current GIS are very often used to manage large heterogeneous datasets and keep them up to date, a future system using a higher-dimensional underlying representation could be used to enforce certain validity constraints at the data structure level, such as avoiding 4D intersections or preserving a certain degree of continuity at LOD transitions.
% Requirements, perfect data
\subsection*{Is the higher-dimensional approach worthwhile?}
In short, yes, but only given certain conditions.
\emph{Higher-dimensional modelling is advantageous, but only when the added functionality that will be built with them---as compared to the simpler and more compact 2D/3D models---justifies it}, such as when queries across space and time can be implemented as higher-dimensional geometric/topological operations.
As discussed in \refse{ss:dimensions}, higher-dimensional modelling makes sense only when the characteristics depicted as dimensions are \emph{parametrisable and independent from each other}, as is the case for space/time/scale, and where objects occur along a dimension as intervals, not as discrete points (which can be easily stored as attributes).
Based on current hardware, software and the typical GIS datasets, there is another important practical requirement: the manageable number of dimensions is limited to 6--8.
Finally, it is also worth noting that higher-dimensional models are not incompatible with standard 2D/3D data structures and methods---the best tool for the job can be chosen depending on the need at hand and both approaches can be combined.
\section{Lessons learned}
\label{se:conclusions}
\begin{description}
% Chapter 2 + 3
\item[Current representations are not suitable in higher dimensions]
The mathematical foundations of spatial data modelling (\refch{ch:modelling-mathematics}) are defined in a dimension-independent manner, including all the basic tenets of geometry and topology.
This dimensional independence is also true for all spatial data models or representation schemes (\refch{ch:modelling-background})---at least when they are analysed at a high level---but is generally not preserved when they are implemented into more concrete data structures.
Most 2D and 3D data structures in GIS thus only encode a few chosen geometric and topological properties (\eg\ the coordinates of each point and the adjacencies between polygons), which are often defined with a formulation that is different per dimension.
This modelling approach can make for custom structures that are compact and efficient when used exactly as intended (\ie\ for a particular class of objects of a given dimension), but it can limit functionality or introduce inefficiencies when the data structures are adapted to be used under a different set of circumstances.
Case in point, the typical data structures of 2D GIS are frequently used with minimal changes for 3D, spatiotemporal and multi-scale GIS.\@
This results in various problems, such as inefficient representations (\eg\ due to duplicate elements), an inability to represent common 3D objects (\eg\ those with a non-2-manifold boundary), difficulties in expressing 3D topological relationships (\eg\ adjacencies between solids), and the widespread availability of invalid datasets (\eg\ 3D models that do not formally enclose any space), among others.
% Chapter 4 + 5
\item[Higher-dimensional modelling as a solution]
An alternative to the use of ad hoc adaptations to 2D data structures is to model both spatial and non-spatial characteristics as dimensions in the geometric sense (\refch{ch:nd-modelling}).
While this approach can be memory-intensive, it provides a generic solution that can be applied to the representation of $n$D space, time, scale and any other parametrisable characteristics.
A tuple of $n$ parametrisable spatial and non-spatial characteristics can thus define a coordinate system in $\mathbb{R}^n$, and lower-dimensional 0D--3D objects existing across these characteristics can thus be modelled as higher-dimensional 0D--$n$D objects embedded in higher-dimensional space.
As GIS objects are usually non-overlapping\footnote{At least in theory.}, they should form an $n$D space partition and can thus be represented using $n$D topological data structures, reducing the total number of elements and ensuring that it is easy to navigate between the objects.
However, even when the objects do not form a space partition, the objects themselves can be partitioned by using an intermediate representation where the original objects are transformed into a set of non-overlapping regions, such that each of these regions represents \emph{a set of the original objects} \citep{Rossignac89}.
$n$D space partitions are also ideal in a practical sense, as they simplify many of the operations that can be defined with them, including simple point-in-polytope queries and all of the constructions methods presented in \refchs{ch:extrusion}--\ref{ch:linking-lods}.
$n$D point clouds, while outside the scope of this thesis, are a good complement to $n$D space partitions, as they are close to data as it is acquired and are relatively easy to store and manipulate.
\item[The most promising higher-dimensional representations]
Spatial data structures often consist of two aspects: 1.\ a combinatorial part, which consists of a set of primitives and some topological relationships between them, and 2.\ an embedding that links these primitives to their geometry and attributes \citep{Lienhardt94}.
As argued in this thesis, the knowledge of higher-dimensional topological relationships in a data structure is the main aspect that differentiates higher-dimensional data structures from 2D/3D ones.
Some data structures that are frequently used in 2D/3D GIS have straightforward extensions to higher dimensions, such as $n$D \emph{rasters} and \emph{hierarchies of trees}.
These use similar structures and algorithms as their 2D/3D counterparts and are therefore easy to understand and implement.
Other data structures implement the models of an $n$D simplicial complex or an $n$D cell complex.
As the simplices in a simplicial complex have a known number of adjacent simplices and bounding facets, they are most efficiently stored using simplex-based data structures.
Meanwhile, cell complexes can be easily stored using incidence graphs and related structures.
However, Nef polyhedra \citep{Bieri88} are probably the most promising representation for an $n$D cell complex, as they provide a good base to develop Boolean set operations, enabling a wide range of geometric operations.
Ordered topological models such as the cell-tuple \citep{Brisson93} and generalised/combinatorial maps \citep{Lienhardt94} also deserve a special mention.
By combining the strong algebra and easy navigation of a simplicial complex with the easy representation of a cell complex, they provide the most important benefits of both.
They are rather memory-intensive, but it is important to note that they can still be more compact than a non-topological approach (\refch{ch:operations-background}).
Nevertheless, non-topological higher-dimensional representations do have a clear role to play as exchange formats, much as is the case for those based around Simple Features in 2D \citep{SimpleFeatures1}, and CityGML \citep{CityGML2} and IFC\footnote{\url{http://www.buildingsmart-tech.org/specifications/ifc-releases}} in 3D.
\item[Three construction methods for higher-dimensional objects]
Creating computer representations of higher-dimensional objects can be complex.
Common construction methods used in 2D and 3D, such as directly manipulating combinatorial primitives, or using primitive-level construction operations (\eg\ Euler operators \citep{Mantyla88}), rely on our intuition of 2D/3D geometry, and thus do not work well in higher dimensions.
It is therefore all too easy to create invalid objects, which then cannot be easily interpreted or fixed---a problem that is already exceedingly apparent in most 3D datasets.
As an alternative to the use of simple operations on combinatorial primitives, this thesis thus proposed three higher-level methods, all of which are relatively easy to use and attempt to create valid output.
% Chapter 6
\item[Method I.\ constructing objects using $n$D extrusion]
\hspace{15mm}
Extrusion as used in GIS has a natural extension into a dimension-independent formulation (\refch{ch:extrusion}).
Starting from an $(n-1)$-dimensional space partition as an $(n-1)$-dimensional cell complex and a set of intervals per cell, it is possible to extrude them to create an $n$-dimensional cell complex.
It is the easiest method to load existing 2D or 3D data into a higher-dimensional structure, representing a set of cells that exist along a given dimension, such as a length of time or a range of scales.
It is also easy to guarantee that the output cell complex is valid and can be used as a base for further operations, such as dimension-independent generalisation algorithms.
The extrusion algorithm developed in this thesis works on the basis of a generalised map representation of the cell complex and is relatively fast, with a worst case complexity of $O(ndr)$ in the main algorithm, where $n$ is the extrusion dimension, $d$ is the total number of darts in the input map and $r$ is the total number of intervals in the input, but offers better complexity in practice.
It is also memory-efficient, as only three layers of darts (of the size of the input cell complex) need to be kept in memory at the same time. A much-simplified sketch of the extrusion idea is given after this list.
% Chapter 7
\item[Method II.\ constructing $n$D objects incrementally]
Based on the Jordan-Brouwer separation theorem \citep{Lebesgue11,Brouwer11}, it is known that an $i$-cell can be described based on a set of its bounding $(i-1)$-cells (\refch{ch:incremental-construction}).
Since individual $(i-1)$-cells are easier to describe than the $i$-cell, this can be used to subdivide a complex representation problem into a set of simpler, more intuitive ones.
This method can be incrementally applied to construct cell complexes of any dimension, starting from a set of vertices in $\mathbb{R}^n$ defined by a $n$-tuple of their coordinates, and continuing with cells of increasing dimension---creating edges from vertices, faces from vertices or edges, volumes from faces and so on.
The incremental construction algorithm developed in this thesis solves this problem in a practical setting by computing the topological relationships connecting the bounding $(i-1)$-cells.
It uses indices on the lexicographically smallest vertex of every cell per dimension, as well as an added index using the lexicographically smallest vertex of the ridges around the bounding facets of the cell that is being built.
It generates an $i$-cell in $O(d^{2})$ in the worst case, with $d$ the total number of darts in the cell.
However, it fares markedly better in real-world datasets, as cells do not generally share the same lexicographically smallest vertex.
By checking all matching ridges within a cell's facets, the algorithm can optionally verify that the cell being constructed forms a combinatorially valid quasi-manifold, avoiding the construction of invalid configurations.
% Chapter 8
\item[Method III.\ linking 3D models at different LODs into a 4D model]
As an example of a high-level higher-dimensional object construction method, a 4D model can be constructed from a series of 3D models at different LODs (\refch{ch:linking-lods}).
The method presented in this thesis consists of three steps: identifying corresponding elements in different LODs, deciding how these should be connected according to a linking scheme, and finally linking relevant 3-cells into 4-cells.
Different linking schemes yield 4D models having different properties, such as objects that suddenly appear and disappear, gradually change in size or morph into different objects along the fourth dimension.
By modelling the LOD as a dimension, the correspondences between equivalent objects across LODs become geometric primitives, making it possible to perform geometric operations with them (\eg\ extracting an intermediate LOD for visualisation purposes) or to attach attributes to them (\eg\ general semantics or the meaning of these correspondences), just as is done to other geometric primitives.
These topological relationships and correspondences can then be used for multiple applications, such as updating and maintaining series of 3D models at different LODs, or testing the consistency of multi-LOD models (\eg\ by using the validity checks in \citet{Groger11}).
% Chapter 9
\item[Extracting 2D/3D subsets from an $n$D model]
The process to obtain a lower-dimensional subset of a higher-dimensional dataset can be regarded as a function that maps a subset of $\mathbb{R}^n$ to a subset of $\mathbb{R}^m$, $m < n$, which is obtained by cutting through the dataset in a geometrically meaningful way (\refch{ch:slicing}).
Broadly, this process consists of two steps: (i) selecting a subset of the objects in the model and (ii) projecting this subset to a lower dimension.
Both of these steps can vary substantially.
Selecting a subset of the objects can be as simple as obtaining those within an axis-aligned bounding box, or can be as complex as a Boolean set intersection operation, such as for the computation of cross-sections.
Meanwhile, there are a wide variety of transformations that apply different projections with different properties, such as the $n$-dimensional to ($n-1$)-dimensional orthographic and perspective projections derived in this thesis and the $\mathbb{R}^n$ to $S^{n-1}$ spherical projection used in its cover.
% Chapter 10 + Appendix A
\item[Methods to create valid objects and space partitions in 2D and 3D]
Most algorithms described in computational geometry and GIS assume that their input datasets are flawless and they are processable using real numbers.
However, invalid datasets are widespread in GIS (\refch{ch:cleaning}), and they are represented and processed using limited-precision arithmetic (\refap{ch:implementation}).
In order to cope with 2D invalid datasets, this thesis further developed methods to create valid polygons and planar partitions using a constrained triangulation of the input.
These were based on the work done in \citet{ArroyoOhori10}, improving the reconstruction algorithm, fixing edge cases, implementing an odd-even constraint counting mechanism and improving the quality of the implementation.
Similarly, a method to repair 3D objects and space subdivisions was developed by snapping together lower-dimensional primitives and removing overlaps using Boolean set operations on Nef polyhedra \citep{Bieri88,Hachenberger06}.
These methods were used in this thesis in order to use real-world datasets in practice, such as when applying the construction algorithms.
% There is no systemic reason to tolerate invalid data
\end{description}
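The following fragment is a deliberately simplified, hypothetical illustration of the extrusion idea summarised above (Method I): it treats cells as axis-aligned boxes and extrusion as the Cartesian product of each cell with an interval. It is a sketch of the concept only---the algorithm of \refch{ch:extrusion} instead operates on generalised maps and handles arbitrary cell complexes---and all names in it are illustrative.
\begin{verbatim}
# Much-simplified sketch: "extruding" axis-aligned (n-1)-dimensional boxes
# along one extra dimension.  Each box is a list of (min, max) intervals,
# one per dimension; extrusion appends one further interval per cell.
def extrude(cells, intervals):
    return [box + [rng] for box, rng in zip(cells, intervals)]

footprints = [[(0.0, 10.0), (0.0, 5.0)],            # two 2D footprints (x and y ranges)
              [(10.0, 14.0), (0.0, 5.0)]]
lifespans = [(2000.0, 2015.0), (2005.0, 2015.0)]    # e.g. the years they exist
print(extrude(footprints, lifespans))               # two 3D space-time boxes
\end{verbatim}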
\section{Contributions}
\label{se:contributions}
The main contribution of this thesis is the realisation of the fundamental aspects of a higher-dimensional Geographic Information System.
By approaching this problem in a practical manner, many of the technical issues of its development were investigated, including an analysis of its possible internal (in-memory) and external (exchange format) representations, the development of basic algorithms for object construction and visualisation, and the development of GIS data repair tools for 2D and 3D datasets.
By taking a model that was previously only described at a conceptual level \citep{vanOosterom10} and realising it, it is now possible to more fully evaluate the consequences of this higher-dimensional approach.
In more concrete terms, there were several smaller contributions that were necessary to be able to achieve this realisation.
The most significant ones are:
\begin{description}
\item[Survey and analysis of higher-dimensional models and structures] I conducted a survey of all of the main data models and data structures used in GIS, geometric modelling and related fields, considering 2D, 3D and $n$D data structures.
These were analysed in terms of their feasibility for the higher-dimensional modelling of geographic information, including how they could handle different geometry classes, topology and attributes, either in their current form or through modifications, as well as their ease of implementation in practice.
\item[Three construction methods] I developed three easy-to-use construction methods for objects of any dimension.
The two lower-level methods---extrusion and incremental construction---were implemented using CGAL Combinatorial Maps and tested on real-world datasets.
The third method has been tested with a few synthetic datasets, but more work is necessary to fully realise it and automate it.
All of the implementations were made available publicly under an open source licence.
\item[Higher-dimensional real-world models] As part of this thesis, I created higher-dimensional models from real-world 2D and 3D datasets.
To the best of my knowledge, these are the only datasets consisting of realistic higher-dimensional objects in a GIS setting.
\item[Combinatorial map reversal] As part of the development of the incremental construction operation, I developed additional functions that were added to CGAL Combinatorial Maps after being approved by the CGAL Editorial Board.
These functions involved reversing the orientation of a combinatorial map of any dimension.
\item[Simple formulation of $n$D to ($n-1$)D projections] I developed intuitive formulations of $n$-dimensional to ($n-1$)-dimensional orthographic and perspective projections.
While other formulations exist, my formulations based on normal vectors are in my opinion the easiest to understand and manipulate, at least for a GIS audience. A small sketch of such a projection is given after this list.
\item[Repair methods and tools] I developed methods to automatically repair 2D polygons and planar partitions, as well as 3D polyhedra and space partitions.
The 2D methods were also released publicly under an open source licence as \texttt{prepair} and \texttt{pprepair}.
The 3D methods will also be released publicly after further improvements are made, together with more thorough testing and the addition of basic documentation.
\end{description}
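To give a flavour of the normal-vector-based formulation mentioned above, the following fragment sketches an $n$D-to-($n-1$)D orthographic projection: points are expressed in an orthonormal basis of the hyperplane orthogonal to a given normal vector. It is an illustrative sketch only, not the implementation developed for this thesis, and the helper names are assumptions.
\begin{verbatim}
import numpy as np

def orthographic_projection(points, normal):
    # Express nD points in (n-1) coordinates within the hyperplane
    # orthogonal to the (normalised) normal vector.
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    _, _, vt = np.linalg.svd(n.reshape(1, -1))
    basis = vt[1:]              # n-1 orthonormal vectors spanning the hyperplane
    return np.asarray(points, dtype=float) @ basis.T

points_4d = np.array([[1.0, 2.0, 3.0, 4.0],
                      [0.0, 1.0, 0.0, 1.0]])
print(orthographic_projection(points_4d, [0.0, 0.0, 0.0, 1.0]))  # 2 points, 3 coordinates each
\end{verbatim}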
\section{Future work}
\label{se:futurework}
As this thesis challenges many of the assumptions underpinning current GIS, there are many potential lines of research that can be formulated for higher-dimensional GIS and higher-dimensional modelling in general.
Most algorithms for 2D/3D GIS have a dimension-dependent formulation and would result in open problems in an $n$D context.
However, while these are worthy of attention, I would like to focus on what are still significant gaps in knowledge for the implementation of a higher-dimensional GIS and that could not be solved in this thesis' timeframe.
While some of these research topics are not within the main subject matter of GIS research, they are what I consider to be the key steps for a more complete implementation of a working system.
They are:
\begin{description}
\item[Low-level linking algorithms] The linking schemes from \refch{ch:linking-lods} have only been described in terms of high-level algorithms.
These should be further developed by finding adequate low-level algorithms to identify matching elements using customisable constraints.
For instance, it is reasonable to attempt to minimise a certain distance function between two models (\eg\ Earth mover's distance), but it is also important to do so in a manner that preserves the topological relationships between the objects.
The special treatment of holes of different dimensions should also be investigated, together with adequate methods to ensure that holes are linked correctly between themselves and to other primitives.
\item[High-level construction algorithms] The three object construction methods described in this thesis operate mostly on lower-dimensional primitives and consequently cover only a few use cases.
There is a need to develop intuitive methods that operate directly on higher-dimensional primitives, which should preferably be usable in an interactive environment.
Note however that this does not mean that the end user would be viewing the higher-dimensional model directly, as it could be shown in simplified form or as a 2D or 3D representative subset.
\item[$n$D constrained triangulator] There are high-quality robust implementations of constrained Delaunay triangulations in 2D \citep{Shewchuk96} and 3D \citep{Si05}, as well as good descriptions of a constrained Delaunay triangulation in $n$D \citep{Shewchuk07}.
However, in order to realise a higher-dimensional GIS based on a simplicial complex model, it is necessary to have a robust $n$D constrained triangulator which should be preferably Delaunay.
\item[Hyperspherical projective geometry kernel] I deemed Nef polyhedra one of the most promising models for a higher-dimensional GIS, but so far they have only been implemented in 2D and 3D.\@
In order to implement $n$-dimensional Nef polyhedra, it is necessary to develop a hyperspherical projective geometry kernel.
This kernel would then be used to compute the local $(n-1)$-dimensional pyramids around every vertex.
While the projective mathematics for this are relatively simple, it is a complex engineering problem to implement it robustly.
\item[$n$D Boolean set operations] Using the above mentioned hyperspherical projective kernel or another method, it is necessary to develop robust algorithms to compute $n$D Boolean set operations.
For instance, an $n$D Nef polyhedra implementation using recursive boundary definitions could compute these operations at the local pyramid level.
$n$D Boolean set operations would be an excellent base for most geometric operations in a higher-dimensional GIS.\@
\item[$n$D to/from 2D and 3D projective kernel] Many operations that are required for object manipulation in $n$D are actually well-defined operations applied to 2D and 3D objects that are merely embedded in $n$D.
A robust kernel that could handle on-the-fly conversions of $n$D geometries to 2D and 3D without loss of precision would enable many of these operations to be applied.
Relatedly, CGAL currently has simple kernels that apply 3D to 2D orthographic projections onto the $xy$, $yz$ or $zx$ planes.
These are useful as they can be wrapped around the basic 2D kernels (\ie\ $\mathbb{R}^2$ with floating-point, interval or exact representations) using the traits programming paradigm available throughout CGAL, and so they can be used to apply 2D operations to 2D objects that are embedded in 3D.
However, these kernels do not handle the conversions from 2D back to 3D automatically, nor are they easily extensible to higher dimensions.
\item[Visualisation of higher-dimensional geographic information] \refch{ch:slicing} described how data selection and projection methods can work in arbitrary dimensions, while \refse{se:ndmath} described the $n$D mathematics behind the basic manipulation operations for higher-dimensional objects.
However, there are significant issues to tackle in order to create a useful visualiser for higher-dimensional datasets.
Namely, there are significant hurdles in user interaction, dealing with large datasets, computing higher-dimensional cross-sections and the definition of useful visual cues in higher dimensions.
The simple implementation used for the cover of this thesis makes several hard-coded assumptions about the dataset that was used and is far from optimal, but it will be published together with the rest of the open source tools developed for this thesis once it is improved to handle more general input in the form of $n$D linear cell complexes.
\item[2D/3D/$n$D repair methods with quality guarantees] The data repair methods described in \refch{ch:cleaning} are able to recover from most simple invalid 2D and 3D configurations.
However, they are not easily extensible to higher dimensions due to the lack of a robust $n$D constrained triangulator (see above), they do not provide quality guarantees in their output, and can be numerically unstable when there are many nearly coplanar planes.
In order to develop robust systems, it is highly desirable to be able to specify a robustness criterion (\eg\ a minimum distance between vertices or ensuring that the geometries are not collapsed/flipped in a floating-point representation), which is guaranteed by a data repair algorithm.
Additional geometric constraints could also be implemented, such as guaranteeing that coplanar planes stay coplanar after repair.
\item[Higher-dimensional modification operations] By necessity, this thesis focused on operations for object creation in order to generate initial higher-dimensional datasets.
Now that it is possible to generate them, it is important to think of intuitive higher-dimensional object modification operations.
For instance, how can a 4-cell split operation be intuitively defined, or how can collapsed geometries (\eg\ from the extruded and collapsed models in \refse{se:extrusion-generalisation}) be processed to remove combinatorial elements?
Ideally, these operations should also be intuitively usable in an interactive environment, such as a geometric modeller.
\item[Real-world 4D spatiotemporal datasets] Using timestamped 3D volumetric datasets, it should be possible to create true 4D datasets using spatiotemporal information.
However, obtaining reasonably clean volumetric datasets is nearly impossible at this point.
Every dataset that was found during this project had only surfaces embedded in 3D, had severe validity problems up to the point that it would require substantial manual work to fix, or was missing the temporal information.
Exporting the temporal information that is present in some closed commercial formats is also an issue, as doing so in a na{\"\i}ve way generally means losing the links to the timestamps' corresponding geometries.
\end{description}
\documentclass[a4paper,12pt]{article}
\usepackage[left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm]{geometry}
\usepackage{color}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage{amsmath,amssymb,amsthm,algorithm,algorithmic,graphicx,yhmath,url,enumitem,lscape}
\usepackage{wrapfig,subfigure}
\newcounter{problem}
\newenvironment{problem}{\refstepcounter{problem} \noindent {\bf Problem \arabic{problem}}}{\vspace{0.5cm}}
\newenvironment{solution}{\vspace{0.3cm} \par \noindent {\bf Solution}}{}
\newenvironment{verification}{\vspace{0.3cm} \par \noindent {\bf Verification}}{}
\newenvironment{hint}{\vspace{0.3cm} \par {\bf Hint:}}{}
\newcounter{remark}
\newenvironment{remark}{\refstepcounter{remark} \vspace{0.3cm} \par \noindent {\bf Remark \arabic{remark}}}{\vspace{0.3cm}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Rn}{\mathbb{R}^n}
\newcommand{\Rnn}{\mathbb{R}^{n \times n}}
\newcommand{\bes}{\begin{equation*}}
\newcommand{\ees}{\end{equation*}}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\eps}{\epsilon}
\newcommand{\fl}{\text{fl}}
\title{Teknisk vetenskabliga ber{\"a}kningar, Fall 2018 \\ Lab Session 7}
\author{Carl Christian Kjelgaard Mikkelsen}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
This note contains the list of problems for our lab session
\begin{center}
Wednesday, December 19th, (kl. 13.00-16.00), Room MA416-426.
\end{center}
\section{The problems}
\begin{problem}
\begin{enumerate} \item Copy the script {\tt rode\_mwe1} to {\tt work/l7p1.m} and adapt it to solve the initial value problem
\bes
y'(t) = 1 - y(t)^2, \quad y(0) = 0
\ees
for $t \in [0,2]$.
\item For each value of $t$, examine Richardson's fractions. Are they behaving in a manner which is consistent with the existence of an asymptotic error expansion?
\item Show that $y(t) = \tanh(t)$ is the solution of this initial value problem and include this information in your script.
\item Is there good agreement between the error estimates and the true error?
\end{enumerate}
\end{problem}
\begin{remark} The differential equation
\be
y'(t) = 1 - y(t)^2
\ee
may appear uninteresting. However, if $y$ is a solution, then $v(t) = a y(bt)$ solves the differential equation
\be
v'(t) = ab - \frac{b}{a} v(t)^2 = g - k v(t)^2
\ee
provided $a = \sqrt{\frac{g}{k}}$ and $b = \sqrt{gk}$. This is the differential equation which corresponds to a rock falling straight down through a homogeneous atmosphere.
\end{remark}
\begin{problem} {\bf Implementing and testing new method}
\begin{enumerate}
\item Study the implementation of {\tt rk.m} and at least one of the dependencies, say, {\tt phi4.m}, to the point where you can implement the method
\begin{align}
k_1 &= f(t_j,v_j)\\
k_2 &= f(t_j + \frac{1}{2} h, v_j + \frac{1}{2} h k_1),\\
k_3 &= f(t_j + h, v_j - h k_1 + 2 hk_2), \\
v_{j+1} &= v_j + \frac{1}{6} h \left( k_1 + 4 k_2 + k_3\right).
\end{align}
as {\tt work/psi.m}.
\item Develop a script {\tt work/l7p2.m} which demonstrates that the global error for this method is $O(h^3)$.
\item Compare the error estimates to the exact error for a differential equation where you know the solution. Can you trust the error estimates?
\end{enumerate}
\end{problem}
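\begin{remark} For reference, one step of the method in Problem 2 can be written in a few lines. The sketch below is in Python rather than {\tt MATLAB} and uses variable names of our own choosing, but the arithmetic is exactly the formulas given above.
\begin{verbatim}
def psi_step(f, t, v, h):
    k1 = f(t, v)
    k2 = f(t + 0.5 * h, v + 0.5 * h * k1)
    k3 = f(t + h, v - h * k1 + 2.0 * h * k2)
    return v + h / 6.0 * (k1 + 4.0 * k2 + k3)

# Example: y' = 1 - y^2, y(0) = 0, whose solution is tanh(t) (cf. Problem 1).
import math
t, v, h = 0.0, 0.0, 0.01
for _ in range(200):                     # integrate up to t = 2
    v = psi_step(lambda s, y: 1.0 - y * y, t, v, h)
    t += h
print(v, math.tanh(2.0))                 # the global error is O(h^3)
\end{verbatim}
\end{remark}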
\begin{problem} {\bf The connection between regular integration and solving ordinary differential equations} Consider the problem of computing the standard normal distribution function, i.e.
\be
F(t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^t e^{-\frac{1}{2} x^2} dx
\ee
\begin{enumerate}
\item Show that $t \rightarrow F(t)$ is the unique solution of the initial value problem
\begin{align}
y'(t) &= f(t,y), \\
y(0) &= \frac{1}{2},
\end{align}
where $f : \R \times \R \rightarrow \R$ given by
\be
f(t,y) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} t^2},
\ee
is independent of $y$.
\item Develop a script {\tt work/l7p3.m} which computes a table of $t \rightarrow F(t)$ for 21 equidistant points in the interval $[0,2]$. Your relative error must be less than $\tau = 10^{-7}$.
\item This goal can be accomplished using a smallest stepsize of $h=1/640$. Which stepsize did you use?
\end{enumerate}
\end{problem}
\begin{remark} At this point you can compute reliable error estimates. Regardless, the true solution is available in {\tt MATLAB} as {\tt sol=@(t)0.5*(1+erf(t/sqrt(2)))}, where {\tt erf} is {\tt MATLAB}'s implementation of the standard error function.
\end{remark}
\begin{problem} {\bf The SIR model of infectious diseases.} Infectious diseases such as Ebola can be modelled using differential equations. In this problem we consider the standard SIR model for an epidemic from which everybody eventually recovers. It has three groups of people: susceptible (S), infected (I), and recovered (R). Susceptible people become infected at a rate which is proportional to the product of their number and the number of infected. Infected people recover at a constant rate and develop natural immunity. The corresponding system of ordinary differential equations is
\begin{align}
S' &= - \alpha I S, \\
I' &= \alpha I S - \beta I, \\
R' &= \beta I,
\end{align}
where $\alpha, \beta > 0$ are constants which determine the rate of infection/rate of recovery.
\begin{enumerate}
\item Implement a function {\tt work/viral.m} similar to {\tt shell4.m} which models the above system of ordinary differential equations. The call sequence must be
\begin{verbatim}
y=viral(t,x)
\end{verbatim}
where
\begin{enumerate}
\item {\tt t} is a dummy variable which must be present in order for {\tt viral.m} to be compatible with the main program {\tt rk.m}.
\item {\tt x} is a column vector, such that {\tt x(1)} is the fraction of the population which is susceptible, {\tt x(2)} is the fraction of the population which is infected, {\tt x(3)} is the fraction of the population which has recovered and is immune.
\item {\tt y} is a column vector such that {\tt y(i)} is the time derivative of {\tt x(i)}.
\end{enumerate}
\item Develop a script {\tt work/l7p4.m} which simulates an epidemic for which $\alpha = 0.5 \text{ days}^{-1}$ and $\beta = 0.04 \text{ days}^{-1}$. Initially, $99 \%$ of the population is susceptible and $1 \%$ is infected. Track the disease using {\tt rk.m} for $60$ days. Verify that more than $85 \%$ of the population has recovered by the end of the period.
\item An outbreak is said to be contained if the number of sick people is not increasing. Show that any outbreak will be contained if and only if $S(0) < \frac{\beta}{\alpha}$.
\item Return to the example and verify that the number of infected peaks at the time $T$ where $S(T) = \frac{\beta}{\alpha}$. Verify that more than $70 \%$ of the population was ill at that time!
\item The purpose of vaccination is to ensure that outbreaks are always contained. If a fraction $z$ of the population is immune to the disease, then the model changes to
\begin{align}
S' &= - \alpha I (1-z) S \\
I' &= \alpha I (1-z) S - \beta I \\
R' &= \beta I
\end{align}
Change your implementation of {\tt viral} to contain a variable {\tt z} which specifies the fraction of the population which is immune to the disease.
\item (Herd immunity) Show that outbreaks are always contained if and only if $S(0) < \frac{1}{1 - z}\frac{\beta}{\alpha}$.
\item Return to the previous example, but assume that $95\%$ of the population has been vaccinated at birth, and track the outbreak for $120$ days. Verify that the disease can never threaten the fabric of society, because the sick are always so few in number that adequate care can be provided while normal functions continue.
\end{enumerate}
\end{problem}
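\begin{remark} The right-hand side requested in {\tt viral.m} amounts to only a few lines of code. The sketch below is written in Python instead of {\tt MATLAB} and its argument names are our own, but it shows the intended structure, including the vaccinated fraction {\tt z} from the last parts of the problem.
\begin{verbatim}
import numpy as np

def viral(t, x, alpha=0.5, beta=0.04, z=0.0):
    # x = [S, I, R]; t is unused but kept for compatibility with the solver.
    S, I, R = x
    dS = -alpha * I * (1.0 - z) * S
    dI = alpha * I * (1.0 - z) * S - beta * I
    dR = beta * I
    return np.array([dS, dI, dR])

print(viral(0.0, np.array([0.99, 0.01, 0.0])))  # initial rates for the 60-day run
\end{verbatim}
\end{remark}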
\end{document}
\subsection{Determination of the band integral intensity}
\label{band_intensities}
One important piece of spectroscopic information is the integral intensity of
particular Raman bands in the spectrum.
The band shape is usually modeled by Gaussian or Lorentzian curves, or by
their combination.
The combination of the Lorentzian $\func{L}$ and Gaussian $\func{G}$ curve can
be expressed as
\begin{equation}
\func{S}(\wn; I_\text{m}, \mu, \sigma) =
c_\text{L} \cdot \func{L}(\wn; I_\text{m}, \mu, \sigma)
+ (1 - c_\text{L}) \cdot \func{G}(\wn; I_\text{m}, \mu, \sigma),
\label{\eqnlabel{band_intensities:single_shape}}
\end{equation}
where $c_\text{L}$ is the Lorentzian curve fraction coefficient, $I_m$ is the
height, $\mu$ is the band position, $\sigma$ is the Gaussian root mean square
width, and $\wn$ is the wavenumber.
The Gaussian function is taken in the form of
\begin{equation*}
\func{G}(\wn; I_\text{m}, \mu, \sigma) =
I_\text{m} e^{-\frac{(\wn - \mu)^2}{2\sigma^2}}.
\end{equation*}
It is beneficial to select the parameters of the Lorentzian curve to match the
parameters of the Gaussian curve.
This matching is straightforward for the height $I_\text{m}$ and position
$\mu$ parameters but less evident for the width $\sigma$ parameter.
To give the proper meaning to the Lorentzian curve fraction coefficient
$c_\text{L}$, we decided to make the full widths at half maximum (FWHM) of the
Gaussian and the Lorentzian curves equal for the same $\sigma$.
Therefore we needed to calculate the FWHM of the Gaussian curve.
Let us assume, without loss of generality,
$I_\text{m} = 1$
and
$\mu = 0$,
and take only the positive solution of the equation
\begin{align}
\frac{1}{2} &= \func{G}(\wn; 1, 0, \sigma)
= e^{-\frac{\wn^2}{2\sigma^2}}, \nonumber \\
\func{ln}2 &= \frac{\wn^2}{2\sigma^2}, \nonumber \\
\wn &= \sigma\sqrt{2\ln2},
\label{\eqnlabel{band_intensities:HWHM}}\\
  \func{FWHM}_\text{G}(\sigma)
&= 2\wn = 2\sigma\sqrt{2\ln2}. \nonumber
\end{align}
The matching width parameter of the Lorentzian curve can be calculated as
\begin{align*}
\frac{1}{2} &= \frac{1}{1 + p\wn^2}, \\
1 + p\wn^2 &= 2, \\
\wn &= \frac{1}{\sqrt{p}}.
\end{align*}
Then with the result from \eqnref{band_intensities:HWHM}, we get
\begin{align*}
\sigma\sqrt{2\ln2}
&= \frac{1}{\sqrt{p}}, \\
p &= \frac{1}{2\sigma^2\ln2}.
\end{align*}
With this result, the final form of the Lorentzian curve is
\begin{equation*}
\func{L}(\wn; I_\text{m}, \mu, \sigma) =
\frac{I_m}{1 + \frac{(\wn - \mu)^2}{2\sigma^2\ln2}}.
\end{equation*}
This definition of the Lorentzian curve means that with the Lorentzian curve
fraction coefficient $c_\text{L} = 0.5$, the contributions to the FWHM of the
Lorentzian and the Gaussian curve to the combined band shape are equal.
The integral intensity of the band can be calculated as the sum of the
contributions of both bands.
It obviously does not depend on the position of the band $\mu$, so we can,
without loss of generality, set it to zero during further calculations
\begin{equation}
\func{I}(I_\text{m}, \sigma) =
c_\text{L} \func{I}_\text{L}(I_\text{m}, \sigma)
+ (1 - c_\text{L}) \func{I}_\text{G}(I_\text{m}, \sigma),
\end{equation}
where
\begin{align*}
\func{I}_\text{G}(I_\text{m}; \sigma)
&= I_\text{m}\int_{-\infty}^{\infty}
{e^{-\frac{\wn^2}{2\sigma^2}}\text{d}\wn}
= \begin{vmatrix}
\wn = \sigma\sqrt{2}x \\
\text{d}\wn = \sigma\sqrt{2}\text{d}x
\end{vmatrix}
= I_\text{m}\sigma\sqrt{2}\int_{-\infty}^{\infty}{e^{-x^2}\text{d}x} \\
&= I_\text{m}\sigma\sqrt{2\text{\g{p}}}
\end{align*}
and
\begin{align*}
\func{I}_\text{L}(I_\text{m}, \sigma)
&= \int_{-\infty}^{\infty}
{\frac{I_\text{m}}{1 + \frac{\wn^2}{2\sigma^2\ln2}}\text{d}\wn}
= \begin{vmatrix}
      \wn = \sigma\sqrt{2\ln2}x \\
\text{d}\wn = \sigma\sqrt{2\ln2}\text{d}x
\end{vmatrix}
= I_\text{m}\sigma\sqrt{2\ln2}\int_{-\infty}^{\infty}
{\frac{1}{1 + x^2}\text{d}x} \\
&= I_\text{m}\sigma\text{\g{p}}\sqrt{2\ln2}.
\end{align*}
Raman bands in a typical Raman spectrum of complex samples overlap.
We solved this problem by modeling the band as a combination of several
band-shape functions from
\eqnref{band_intensities:single_shape}
\begin{equation}
\func{S}(\wn; I_{\text{m},1..n}, \mu_{1..n}, \sigma_{1..n}) =
  \sum_{i = 1}^n \func{S}_i(\wn; I_{\text{m},i}, \mu_i, \sigma_i),
\label{\eqnlabel{band_intensities:shape}}
\end{equation}
where $n$ is the number of the overlapping bands.
This band shape was fitted to the measured spectra.
The slightly enhanced Marquardt-Levenberg method (see \cref{minimization}) was
used for the nonlinear regression, and analytical derivatives were used for
the gradient computation.
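As an illustration of the band model and of the closed-form integral
intensities derived above, the following fragment evaluates the combined band
shape and its integral intensity.
It is a sketch with names of our own choosing, not the fitting code used in
this work.
\begin{verbatim}
import numpy as np

def band(nu, I_m, mu, sigma, c_L):
    # Combined shape: c_L * Lorentzian + (1 - c_L) * Gaussian, with matched FWHM.
    gauss = I_m * np.exp(-(nu - mu) ** 2 / (2.0 * sigma ** 2))
    lorentz = I_m / (1.0 + (nu - mu) ** 2 / (2.0 * sigma ** 2 * np.log(2.0)))
    return c_L * lorentz + (1.0 - c_L) * gauss

def integral_intensity(I_m, sigma, c_L):
    # Closed-form expressions derived above.
    I_G = I_m * sigma * np.sqrt(2.0 * np.pi)
    I_L = I_m * sigma * np.pi * np.sqrt(2.0 * np.log(2.0))
    return c_L * I_L + (1.0 - c_L) * I_G

# Numerical check on a wide grid (approximate only: the Lorentzian tails decay slowly).
nu = np.linspace(-5000.0, 5000.0, 200001)
print(np.trapz(band(nu, 1.0, 0.0, 5.0, 0.3), nu), integral_intensity(1.0, 5.0, 0.3))
\end{verbatim}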
\section{Fuselage}
\begin{tabularx}{\textwidth}{ | L | c | c | }
\hline
\textbf{Parameter} & \textbf{Value} & \textbf{Reference} \\ \hline
\endfirsthead
\hline
\textbf{Parameter} & \textbf{Value} & \textbf{Reference} \\ \hline
\endhead
Fuselage length & 15.26 m & \cite{Janes20042005,NASA-CR-166309} \\ \hline
Fuselage width & 2.36 m & \cite{UH60_OperatorsManual,Janes20042005} \\ \hline
Fuselage aerodynamic reference point stationline & 8.78 m & \cite{NASA-CR-166309,NASA-TM-85890} \\ \hline
Fuselage aerodynamic reference point waterline & 5.94 m & \cite{NASA-CR-166309,NASA-TM-85890} \\ \hline
\caption{Fuselage data}
\end{tabularx}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\subsection{Symbols}
\begin{tabularx}{\textwidth}{ L l l l }
\hline
\textbf{Symbol} & \textbf{Mnemonic} & \textbf{Unit} & \textbf{Description} \\ \hline
\endfirsthead
\hline
\textbf{Symbol} & \textbf{Mnemonic} & \textbf{Unit} & \textbf{Description} \\ \hline
\endhead
$EK_{XWF}$ & EKXWF & - & Rotor wash interference factor (inplane) \\
$EK_{YWF}$ & EKYWF & - & Rotor wash interference factor (sidewash) \\
$EK_{ZWF}$ & EKZWF & - & Rotor wash interference factor (downwash) \\
& & & \\
$\chi_{PMR}$ & CHIPMR & deg & Rotor wake skew angle \\
$D_{WO}$ & DWSHMR & - & Main rotor uniform downwash \\
$\Omega_T$ & OMGTMR & rad/s & Rotor speed \\
$R_T$ & RMR & m & Rotor radius \\
& & & \\
$q_{WF}$ & QWF & Pa & Dynamic pressure at the body \\
 & & & \\
$\alpha_{WF}$ & ALFWF & deg & Angle of attack \\
$\beta_{WF}$ & BETAWF & deg & Sideslip angle \\
$\psi_{WF}$ & PSIWF & deg & W/T model yaw angle ($\psi_{WF} = -\beta_{WF}$) \\ \hline
\caption{Fuselage symbols}
\end{tabularx}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\subsection{Inplane Component of Rotor Wash on the Fuselage}
\csvreader[
no head,
longtable=cccc,
table head=
\toprule
$\chi_{PMR}$ & \multicolumn{3}{c}{$EK_{XWF}$} \\
{[deg]} & {AA1FMR=-6.0} & {AA1FMR=0.0} & {AA1FMR=6.0} \\ \midrule
\endfirsthead
$\chi_{PMR}$ & \multicolumn{3}{c}{$EK_{XWF}$} \\
{[deg]} & {AA1FMR=-6.0} & {AA1FMR=0.0} & {AA1FMR=6.0} \\ \midrule
\endhead,
before first line={},
late after line=\\,
late after last line=\\ \bottomrule \caption{Inplane component of rotor wash on the fuselage \cite{NASA-CR-166309}},
before reading={},
after reading={}
]
{csv/uh60_fuselage_chipmr_ekxwf.csv}
{1=\colchi,2=\colii,3=\coliii,4=\coliv}
{\colchi & \colii & \coliii & \coliv}
\begin{figure}[h!]
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_chipmr_ekxwf.eps}
\caption{Inplane component of rotor wash on the fuselage \cite{NASA-CR-166309}}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\subsection{Downwash Component of Rotor Wash on the Fuselage}
\csvreader[
no head,
longtable=cccc,
table head=
\toprule
$\chi_{PMR}$ & \multicolumn{3}{c}{$EK_{ZWF}$} \\
{[deg]} & {AA1FMR=-6.0} & {AA1FMR=0.0} & {AA1FMR=6.0} \\ \midrule
\endfirsthead
$\chi_{PMR}$ & \multicolumn{3}{c}{$EK_{ZWF}$} \\
{[deg]} & {AA1FMR=-6.0} & {AA1FMR=0.0} & {AA1FMR=6.0} \\ \midrule
\endhead,
before first line={},
late after line=\\,
late after last line=\\ \bottomrule \caption{Downwash component of rotor wash on the fuselage \cite{NASA-CR-166309}},
before reading={},
after reading={}
]
{csv/uh60_fuselage_chipmr_ekzwf.csv}
{1=\colchi,2=\colii,3=\coliii,4=\coliv}
{\colchi & \colii & \coliii & \coliv}
\begin{figure}[h!]
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_chipmr_ekzwf.eps}
\caption{Downwash component of rotor wash on the fuselage \cite{NASA-CR-166309}}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\subsection{Basic Fuselage Aerodynamic Characteristics}
\csvreader[
no head,
longtable=cccc,
table head=
\toprule
$\alpha_{WF}$ & $D/q$ & $L/q$ & $M/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} \\ \midrule
\endfirsthead
$\alpha_{WF}$ & $D/q$ & $L/q$ & $M/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} \\ \midrule
\endhead,
before first line={},
late after line=\\,
late after last line=\\ \bottomrule \caption{Fuselage aerodynamic characteristics due to angle of attack \cite{NASA-CR-166309}},
before reading={},
after reading={}
]
{csv/uh60_aero_fuselage_alfwf.csv}
{1=\colaoa,2=\colcx,3=\colcz,4=\colcm}
{\colaoa & \colcx & \colcz & \colcm}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_alfwf_dqfmp.eps}
\caption{Fuselage drag due to angle of attack \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_alfwf_lqfmp.eps}
\caption{Fuselage lift due to angle of attack \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_alfwf_mqfmp.eps}
\caption{Fuselage pitching moment due to angle of attack \cite{NASA-CR-166309}}
\end{figure}
\clearpage
\csvreader[
no head,
longtable=cccc,
table head=
\toprule
$\psi_{WF}$ & $Y/q$ & $R/q$ & $N/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} & {[m\textsuperscript{3}]} \\ \midrule
\endfirsthead
$\psi_{WF}$ & $Y/q$ & $R/q$ & $N/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} & {[m\textsuperscript{3}]} \\ \midrule
\endhead,
before first line={},
late after line=\\,
late after last line=\\ \bottomrule \caption{Fuselage aerodynamic characteristics due to sideslip \cite{NASA-CR-166309}},
before reading={},
after reading={}
]
{csv/uh60_aero_fuselage_psiwf.csv}
{1=\colbeta,2=\colcy,3=\colcl,4=\colcn,5=\coldcx}
{\colbeta & \colcy & \colcl & \colcn}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_yqfmp.eps}
\caption{Fuselage side force due to sideslip \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_rqfmp.eps}
\caption{Fuselage rolling moment due to sideslip \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_nqfmp.eps}
\caption{Fuselage yawing moment due to sideslip \cite{NASA-CR-166309}}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\subsection{Fuselage Incremental Aerodynamic Characteristics}
\csvreader[
no head,
longtable=cccc,
table head=
\toprule
$\psi_{WF}$ & $\Delta D/q$ & $\Delta L/q$ & $\Delta M/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} \\ \midrule
\endfirsthead
$\psi_{WF}$ & $\Delta D/q$ & $\Delta L/q$ & $\Delta M/q$ \\
{[deg]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{2}]} & {[m\textsuperscript{3}]} \\ \midrule
\endhead,
before first line={},
late after line=\\,
late after last line=\\ \bottomrule \caption{Fuselage incremental aerodynamic characteristics \cite{NASA-CR-166309}},
before reading={},
after reading={}
]
{csv/uh60_aero_fuselage_psiwf3.csv}
{1=\colbeta,2=\coldcx,3=\coldcz,4=\coldcm}
{\colbeta & \coldcx & \coldcz & \coldcm}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_ddqfmp.eps}
\caption{Fuselage incremental drag due to sideslip \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_dlqfmp.eps}
\caption{Fuselage incremental lift due to sideslip \cite{NASA-CR-166309}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=140mm]{eps/uh60_fuselage_psiwf_dmqfmp.eps}
\caption{Fuselage incremental pitching moment due to sideslip \cite{NASA-CR-166309}}
\end{figure}
\documentclass{article}
\usepackage{fullpage}
\usepackage{nopageno}
\usepackage{amsmath}
\usepackage{amssymb}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
\title{Notes}
\date{February 3, 2014}
\maketitle
\subsection*{homework 2 number 36}
pick 0,1,2,\dots,$n_i$ things of type $i$. $(n_1+1)(n_2+1)\cdots(n_k+1)$
\subsection*{homework 3 number 47}
$\pi_n$ is the set of partitions of $\{1,\dots,n\}$ into nonempty subsets, e.g.\ $1|25|34$. The top element is $1234$, the bottom is $1|2|3|4$, and for example $1|2|3|4\to 12|3|4$. $123|45$ and $1|25|34$ are incomparable.
\section*{go}
\begin{align*}
{s_i}^2&=1\\
s_is_j&=s_js_i \qquad |i-j|>1\\
s_is_{i+1}s_i&=s_{i+1}s_is_{i+1}
\end{align*}
$315624\to s_2\to 215634\to 125634\to124635\to124536\to123546\to123456$
$s_4s_5s_3s_4s_1s_2$
$s_4s_3s_1s_2s_5s_4$
These are called ``reduced words'' for $315624$.
Also try to bring it to $123456$ by swapping \#'s in adjacent positions.
How many ways can we make a permutation? Infinitely many. How many reduced words are there for a permutation? This is hard.
\subsection*{Theorem of Stanley}
The number of reduced words for (the longest permutation) $n,n-1,n-2,\cdots,3,2,1$ equals the number of standard Young tableaux of staircase shape $(n-1,n-2,\dots,2,1)$ (a half grid with sides of $n-1$, which makes a kind of right triangle). Fill the boxes with the integers $\{1,\dots,\text{\# boxes}\}$, increasing along rows and columns.
\subsubsection*{note}
There exists a nice counting formula for standard Young tableaux called the hook length formula.
\subsubsection*{note to self}
definition of determinant here. put it down!
\subsection*{4.3 generating combinations (subsets)}
How do we represent subsets of $\{x_{n-1},x_{n-2},\dots,x_1,x_0\}$?
The \# of subsets is $2^n$. Each $x_i$ is either there or not there, so we can represent subsets with length-$n$ binary strings.
\subsection*{example}
A subset of $X=\{x_7,\dots,x_1,x_0\}$, e.g.\ $\{x_6,x_4,x_2,x_1\}\to 01010110$. What number is this in binary? $2+4+16+64=86$.
So we can generate subsets or combinations lexicographically:
\subsubsection*{example}
Generate all subsets of $\{x_2,x_1,x_0\}$:
$\{\}=000=0$, then $001=1=\{x_0\},010=2=\{x_1\},011=3=\{x_0,x_1\},100=4,101=5,110=6,111=7$.
Note that this is not ideal if you want to minimize the change from one item to the next; notice how from $3\to4$ every bit changes.
\subsubsection*{definition}
a \emph{gray code} is a sequence of subsets such that the change when you move from one subset to the next is minimal.
\begin{align*}
0&1-&-1&1\\
&|&&|\\
0&0-&-1&0
\end{align*}
Notice that walking around the square gives a gray code.
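As a quick illustrative sketch (not part of the lecture notes), lexicographic
order is just binary counting, while a reflected gray code can be generated by
XOR-ing the counter $k$ with $k$ shifted right by one bit, so that exactly one
bit changes between consecutive subsets:
\begin{verbatim}
# Sketch: subsets of {x_2, x_1, x_0} as 3-bit strings,
# where bit i records whether x_i is in the subset.
n = 3

def bits(k):
    return format(k, '0%db' % n)

lex = [bits(k) for k in range(2**n)]              # 000, 001, 010, 011, ...
gray = [bits(k ^ (k >> 1)) for k in range(2**n)]  # 000, 001, 011, 010, ...

print(lex)   # neighbours can differ in many bits, e.g. 011 -> 100
print(gray)  # consecutive strings differ in exactly one bit
\end{verbatim}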
\end{document}
\subsubsection{Quadratic Factors}
\noindent
If a quadratic factor of the denominator doesn't have real roots, then it can't be split into linear factors, and we have an irreducible quadratic factor. Here, we'll assume that the quadratic factor isn't repeated.
So, $Q(x) = R(x)(ax^2+bx+c)$, $b^2-4ac < 0$, and $R(x)$ is not evenly divisible by $ax^2+bx+c$.
In this case, we say
\begin{equation*}
\frac{P(x)}{R(x)(ax^2+bx+c)} = \left(\text{Decomposition of }R(x)\right)+\frac{A_1x+B_1}{ax^2+bx+c}.
\end{equation*}
We then solve for the constants in the numerator, possibly having to solve a system of equations or using previous results and less convenient values for $x$.
\begin{example}
Find the partial fraction decomposition of the following expression:
\begin{equation*}
\frac{6x^2+21x+11}{x^3+5x^2+3x+15}.
\end{equation*}
\end{example}
\noindent
Factoring,
\begin{equation*}
x^3+5x^2+3x+15 = (x+5)(x^2+3).
\end{equation*}
So,
\begin{equation*}
\frac{6x^2+21x+11}{x^3+5x^2+3x+15} = \frac{A_1}{x+5}+\frac{A_2x+B_2}{x^2+3}.
\end{equation*}
Multiplying each side by the denominator,
\begin{equation*}
6x^2+21x+11 = A_1(x^2+3)+(A_2x+B_2)(x+5).
\end{equation*}
At $x=-5$,
\begin{equation*}
56 = 28A_1 \implies A_1 = 2.
\end{equation*}
Now we'll use the previous result and another value for $x$. We can use $x=0$ to not have to worry about the $A_2$ term.
At $x=0$,
\begin{equation*}
11 = 2(3) + (B_2)(5) \implies B_2 = 1.
\end{equation*}
Now we'll use the previous 2 results to find $A_2$. $x=1$ is a good choice to keep the numbers small.
At $x=1$,
\begin{equation*}
38 = 2(1+3)+(A_2+1)(6) \implies A_2 = 4.
\end{equation*}
So,
\begin{equation*}
\frac{6x^2+21x+11}{x^3+5x^2+3x+15} = \frac{2}{x+5}+\frac{4x+1}{x^2+3}.
\end{equation*}
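As a quick optional check (purely illustrative), a computer algebra system
reproduces the same decomposition:
\begin{verbatim}
# Sketch: verifying the decomposition with SymPy's apart().
import sympy as sp

x = sp.symbols('x')
expr = (6*x**2 + 21*x + 11) / (x**3 + 5*x**2 + 3*x + 15)
print(sp.apart(expr))   # (4*x + 1)/(x**2 + 3) + 2/(x + 5), up to ordering
\end{verbatim}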
\section{Bread}
\input{Sections/Bread/zopf.tex}
\newpage
\input{Sections/Bread/speckzopf.tex}
\newpage
\input{Sections/Bread/pizzateig.tex}
\newpage
\input{Sections/Bread/zuercherbrot.tex}
\newpage
\input{Sections/Bread/brioche.tex}
\newpage
\input{Sections/Bread/sweet_pretzels.tex} | {
"alphanum_fraction": 0.7881944444,
"avg_line_length": 16,
"ext": "tex",
"hexsha": "5220221b7ce831e84740b93bcc27a8a6fdd75505",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "huserben/cookbook",
"max_forks_repo_path": "Sections/Bread.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "huserben/cookbook",
"max_issues_repo_path": "Sections/Bread.tex",
"max_line_length": 41,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "huserben/cookbook",
"max_stars_repo_path": "Sections/Bread.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 101,
"size": 288
} |
\clearpage\newpage
\subsection{Flowcharts}
The processes of the app are optimized to be notably simple because the app has to be easy to use. Most of the common actions are 1 to 3 clicks away from the dashboard.
\subsubsection*{Add subject}
This process consists of adding subjects to the dashboard. It is precisely represented in \textbf{Figure \ref{flowchart-add-subject}}. Check it out carefully to understand all the paths.
Although it may look complex, it's usually remarkably simple: a subject can be added with just 4 interactions:
\begin{enumerate}
\item Click the add button in the dashboard. To open the search screen.
\item Type the name of the subject. The search field is focused automatically, so the user doesn't have to click it.
\item Click the desired search result's checkbox. % switching from \inlineicon{button-search-gray.png} to \inlineicon{button-search-green.png}.
\item Click the add button in the search screen.
\end{enumerate}
\noindent
The process only becomes difficult when the subject is not already in the database; when that happens, the user has to create the subject. There's also the possibility of editing a subject before adding it, but that step is entirely optional.
% \begin{itemize}
% \item Google or receive url from a friend
% \item See tutorial, + button animation
% \item Search
% \begin{enumerate}[label=\Alph*]
% \item Finds subject and continues
% \item Doesn't find the subject closes the app
% \item Doesn't find the subject, and tries to create it
% \begin{enumerate}[label=\Alph*]
% \item Manages to create it
% \item Doesn't understand it and closes the app
% \end{enumerate}
% \end{enumerate}
% \item fills the grades
% \begin{enumerate}[label=\Alph*]
% \item Understands the meaning of the gray numbers
% \item Doesn't understand it and just uses the app to store the grades.
% \end{enumerate}
% \end{itemize}
\clearpage\newpage
\vfill
\begin{figure}[ht!]
\center
\includegraphics[height=\textheight-1cm]{media/diagrams/flowchart-add-subject.pdf}
% [height=\textheight-2.15cm] with a \subsubsection on the same page
\caption{Flowchart of adding a subject}
\label{flowchart-add-subject}
\end{figure}
\vfill
\clearpage\newpage
\subsubsection*{More relevant processes}
\vfill
\begin{figure}[ht!]
\vspace*{-1.5in}
\includegraphics[scale=1]{media/diagrams/flowchart-login.pdf}
\vspace*{-0.125in}
\caption{Flowchart of logging-in}
\label{flowchart-login}
\end{figure}
\vfill
\vspace*{-1.5in}
\begin{figure}[ht!]
\includegraphics[scale=1]{media/diagrams/flowchart-logout.pdf}
\vspace*{-0.125in}
\caption{Flowchart of logging-out}
\label{flowchart-logout}
\end{figure}
\vfill
\begin{figure}[ht!]
\vspace*{-1.5in}
\includegraphics[scale=1]{media/diagrams/flowchart-edit-subject.pdf}
\vspace*{-0.125in}
\caption{Flowchart of editing a subject card}
\label{flowchart-edit-subject}
\end{figure}
\vfill
\begin{figure}[ht!]
\vspace*{-1.5in}
\includegraphics[scale=1]{media/diagrams/flowchart-delete-subject.pdf}
\vspace*{-0.125in}
\caption{Flowchart of deleting a subject card}
\label{flowchart-delete-subject}
\end{figure}
\vfill
\begin{figure}[ht!]
\vspace*{-1.5in}
\includegraphics[scale=1]{media/diagrams/flowchart-edit-grade.pdf}
\vspace*{-0.125in}
\caption{Flowchart of editing a grade}
\label{flowchart-edit-grade}
\end{figure}
\vfill
\begin{figure}[ht!]
\vspace*{-1.5in}
\includegraphics[scale=1]{media/diagrams/flowchart-install.pdf}
\vspace*{-0.125in}
\caption{Flowchart of the installation}
\label{flowchart-install}
\end{figure}
\vspace*{-1.5in}
\vfill
% \clearpage\newpage
% \subsubsection{Add subject}
% \begin{figure}[ht!]
% \center
% \includegraphics[width=1\columnwidth]{media/diagrams/flow-search.pdf}
% \caption{Flow search and add subject}
% \label{fig:flow-search}
% \end{figure}
% \begin{figure}[ht!]
% \center
% \includegraphics[width=1\columnwidth]{media/diagrams/flow-create.pdf}
% \caption{Flow create subject}
% \label{fig:flow-create}
% \end{figure}
% \clearpage\newpage
% \subsubsection{Login}
% \begin{figure}[ht!]
% \center
% \includegraphics[width=1\columnwidth]{media/diagrams/flow-login.pdf}
% \caption{Flow login or singing}
% \label{fig:flow-login}
% \end{figure}
% \clearpage\newpage
% \subsubsection{Install}
% \begin{figure}[ht!]
% \center
% \includegraphics[width=1\columnwidth]{media/diagrams/flow-install.pdf}
% \caption{Flow install app}
% \label{fig:flow-install}
% \end{figure}
\documentclass{article}
\usepackage[letterpaper, margin=1in]{geometry}
\usepackage[version=4]{mhchem}
\usepackage{amsmath}
\usepackage{systeme,mathtools}
\usepackage{url}
\setcounter{MaxMatrixCols}{20}
\let\oldquote\quote
\let\endoldquote\endquote
\renewenvironment{quote}[2][]
{\if\relax\detokenize{#1}\relax
\def\quoteauthor{#2}%
\else
\def\quoteauthor{#2~---~#1}%
\fi
\oldquote}
{\par\nobreak\smallskip\hfill(\quoteauthor)%
\endoldquote\addvspace{\bigskipamount}}
\begin{document}
\title{(Ab)Using Linear Algebra to Balance Chemical Reaction Equations \\\& \\ A Love Letter to Mathematics}
\author{Liang Wang}
\date{\today}
\maketitle
\section{Mechanical Processes Should be Mechanical}
Balancing chemical reaction equations is often taught as an imprecise process that involves lots of idiosyncratic tricks. While trying to balance them by hand is usually doable when the reaction is relatively simple, the process becomes less straightforward when more complex reactions are presented.
However, balancing should be a fairly boring and mechanical process, and \emph{mechanical processes should be mechanical}. Using linear algebra, we could balance any arbitrary reactions without the need for tricks, instead, the process becomes simple and repeatable.
\section{Abusing Linear Algebra to do Simple Math}
The goal of balancing chemical equations is to find the appropriate coefficients such that the number of each molecule is the same for the reactants and the products. The word `balancing' should remind you of solving equations, where equality is preserved as long as the operation is done to both sides of equations.
Indeed, balancing chemical equations is just solving systems of linear equations. Solving a system of linear equations by substitution is often tedious and error-prone. Fortunately, linear algebra provides us with simpler ways of solving such problems.
\subsection{Homogeneous Systems}
In linear algebra, a homogeneous system refers to an equation of the form $A\vec{x} = \vec{0}$, where $A$ is an $m \times n$ matrix and $\vec{0}$ is the zero vector. When $m = n$, i.e.\ the matrix $A$ is square, the system has non-trivial solutions only if $\mathrm{det}(A) = 0$. More generally, whenever the matrix is rank-deficient (its rank is less than the number of unknowns), there are infinitely many non-trivial solutions.
\section{Simple Example}
The following\footnote{Taken from \url{http://myweb.astate.edu/mdraganj/BalanceEqn.html}} is an example of an unbalanced chemical equation that might require some time to solve by hand.
\begin{center}
\ce{S + HNO3 -> H2SO4 + NO2 + H2O}
\end{center}
The complexity lies in the fact that the number of reactant species does not equal the number of product species. We shall see how linear algebra simplifies the process.
We assign a coefficient $x_i$ to each compound that appears in the equation, from left to right. Hence we have the following system of equations.
\begin{equation*}
\ce{$x_1$S + $x_2$HNO3 -> $x_3$H2SO4 + $x_4$NO2 + $x_5$H2O} \implies
\left\{
\quad
\begin{aligned}
x_1 &= x_3 &&\text{Sulphur}\\
x_2 &= 2x_3 + 2x_5 &&\text{Hydrogen}\\
x_2 &= x_4 &&\text{Nitrogen}\\
3x_2 &= 4x_3 + 2x_4 + x_5 &&\text{Oxygen} \\
\end{aligned}
\right.
\end{equation*}
Which is equivalent to the following matrix.
\begin{equation*}
\begin{bmatrix}
1 & 0 & -1 & 0 & 0 \\
0 & 1 & -2 & 0 & -2 \\
0 & 1 & 0 & -1 & 0 \\
0 & 3 & -4 & -2 & -1 \\
\end{bmatrix}
\end{equation*}
Finding the solution set entails finding the null space basis vectors. After row operations, we get the row reduced echelon form:
\begin{equation*}
\begin{bmatrix}
1 & 0 &0& 0 &-0.5 \\
0 & 1 &0& 0 &-3 \\
0 & 0 &1& 0 &-0.5 \\
0& 0 &0& 1 &-3 \\
\end{bmatrix}
\end{equation*}
Meaning that there is only one free variable, $x_5$.
\begin{equation*}
\begin{split}
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4 \\
x_5 \\
\end{bmatrix} =
\begin{bmatrix}
0.5x_5 \\
3x_5 \\
0.5x_5 \\
3x_5 \\
x_5 \\
\end{bmatrix} =
x_5
\begin{bmatrix}
0.5 \\
3 \\
0.5 \\
3 \\
1\\
\end{bmatrix}
\end{split}
\end{equation*}
Since we want integer solutions, and $0.5x_5$ is an entry of our basis vector, we pick $x_5 = 2$ to get the smallest possible positive integer solution.
$$ \mathrm{solution} =
\begin{bmatrix}
1 \\
6 \\
1 \\
6 \\
2 \\
\end{bmatrix}
$$
Meaning, \ce{S + 6HNO3 -> H2SO4 + 6NO2 + 2H2O}. We see this is indeed the desired solution.
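Because the procedure is entirely mechanical, it is easy to hand off to a
computer. The following sketch (an illustration using SymPy; the matrix is the
one derived above) extracts the null space basis vector and rescales it to the
smallest positive integer solution:
\begin{verbatim}
# Sketch: balancing x1 S + x2 HNO3 -> x3 H2SO4 + x4 NO2 + x5 H2O with SymPy.
from sympy import Matrix, ilcm

# Rows are the S, H, N, O balances; columns are x1 .. x5.
A = Matrix([
    [1, 0, -1,  0,  0],
    [0, 1, -2,  0, -2],
    [0, 1,  0, -1,  0],
    [0, 3, -4, -2, -1],
])

v = A.nullspace()[0]                    # one free variable, one basis vector
v = v * ilcm(*[term.q for term in v])   # clear denominators to get integers
print(v.T)                              # Matrix([[1, 6, 1, 6, 2]])
\end{verbatim}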
\section{Complex Example}
One could say the previous example is still relatively easy and can be attempted without the use of matrices. This section provides a complex example\footnote{Taken from \url{https://www.chembuddy.com/?left=balancing-stoichiometry-questions&right=balancing-questions}} where doing it by hand is borderline torture.
\begin{center}
\ce{K4[Fe(SCN)6] + K2Cr2O7 + H2SO4 -> Fe2(SO4)3 + Cr2(SO4)3 + CO2 + H2O + K2SO4 + KNO3}
\end{center}
As usual, we assign $x_i$ to be the coefficient of each compound, from left to right. We obtain the following system of equations.
\begin{equation*}
\left\{
\quad
\begin{aligned}
4x_1 + 2x_2 - 2x_8 - x_9&= 0 &&\text{Potassium}\\
x_1 - 2x_4 &= 0 &&\text{Iron}\\
6x_1 + x_3- 3x_4 - 3x_5 - x_8 &= 0 &&\text{Sulphur}\\
6x_1 - x_6 &= 0 &&\text{Carbon} \\
6x_1 - x_9 &= 0 &&\text{Nitrogen}\\
2x_2 - 2x_5 &= 0 &&\text{Chromium}\\
7x_2 + 4x_3 - 12x_4 - 12x_5 - 2x_6 -x_7 - 4x_8 - 3x_9 &= 0 &&\text{Oxygen} \\
2x_3 - 2x_7 &= 0 && \text{Hydrogen} \\
\end{aligned}
\right.
\end{equation*}
Converting it to matrix form gives us the following.
$$
\begin{bmatrix}
4 & 2 & 0 & 0 & 0 & 0 & 0 & -2 & -1 \\
1 & 0 & 0 & -2 & 0 & 0 & 0 & 0 & 0\\
6 & 0 & 1 & -3 & -3 & 0 & 0 & -1 & 0\\
6 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0\\
6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1\\
0 & 2 & 0 & 0 & -2 & 0 & 0 & 0 & 0\\
0 & 7 & 4 & -12 & -12 & -2 & -1 & -4 & -3\\
0 & 0 & 2 & 0 & 0 & 0 & -2 & 0 & 0\\
\end{bmatrix}
$$
We simplify the matrix to its row reduced echelon form.
$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1/6\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -97/36\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -355/36\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1/12\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -97/36\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -355/36\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -91/36\\
\end{bmatrix}
$$
We write the null space basis vector in terms of our one free variable, $x_9$.
\begin{equation*}
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4 \\
x_5 \\
x_6 \\
x_7 \\
x_8 \\
x_9 \\
\end{bmatrix}
=
x_9
\begin{bmatrix}
1/6 \\
97/36 \\
355/36 \\
1/12\\
97/36\\
1\\
355/36\\
91/36\\
1 \\
\end{bmatrix}
\end{equation*}
We pick $x_9$ to be the least common multiple of the denominators of the entries, i.e.\ $x_9 = \mathrm{lcm}\left(6,\: 36,\: 36,\: 12,\: 36,\: 1,\: 36,\: 36,\: 1\right) = 36$, so that every entry becomes an integer. We obtain the solution:
\begin{equation*}
\mathrm{solution} =
\begin{bmatrix}
6 \\
97 \\
355 \\
3 \\
97 \\
36 \\
355 \\
91 \\
36 \\
\end{bmatrix}
\end{equation*}
Hence, the balanced complex reaction is as follows:
\begin{center}
\ce{6 K4[Fe(SCN)6] + 97 K2Cr2O7 + 355 H2SO4 -> 3 Fe2(SO4)3 + 97 Cr2(SO4)3 + 36 CO2 + 355 H2O + 91 K2SO4 + 36 KNO3}
\end{center}
Feel free to verify the result using the \emph{magic of the internet}.
\section{A Love Letter to Mathematics}
\begin{quote}{Henri Poincaré}
The scientist does not study nature because it is useful to do so. He studies it because he takes pleasure in it, and he takes pleasure in it because it is beautiful. If nature were not beautiful it would not be worth knowing, and life would not be worth living
\end{quote}
\begin{quote}{Richard Dawkins}
After sleeping through a hundred million centuries we have finally opened our eyes on a sumptuous planet, sparkling with colour, bountiful with life. Within decades we must close our eyes again. Isn’t it a noble, an enlightened way of spending our brief time in the sun, to work at understanding the universe and how we have come to wake up in it?
\end{quote}
The world is a wonderful place filled with wonders, it is the noblest pursuit to try to understand it. One may prefer traditional science in an attempt to directly understand the world, one may find philosophy intriguing in understanding humanity. As someone majoring in software, I have always found mathematics to be the beauty that unlocks the understanding of the world.
Mathematics has the interesting duality of being immensely useful while desperately trying to be abstract and detached from the real world. G.H.\ Hardy, the famous English mathematician who worked with the great Srinivasa Ramanujan (a collaboration famously depicted in the film \textit{The Man Who Knew Infinity}), once said:
\begin{quote}{G.H.\ Hardy}
We have concluded that trivial mathematics is, on the whole, useful, and that the real mathematics, on the whole, is not.
\end{quote}
Most mathematics, such as Group Theory, Category Theory, and Topology, tries to be as abstract as possible. Yet all of these have proved to be critical tools in understanding large software. The concept of a monoid seems completely out of touch with reality until you realize it's everywhere in programming languages.
Embrace mathematics both for its abstract beauty and the applications, for which it is the language of both the abstract world and the world we live in.
\end{document} | {
"alphanum_fraction": 0.6122547134,
"avg_line_length": 44.6180257511,
"ext": "tex",
"hexsha": "bfe9084ef7ffd7a10d172b01755cc21f09bb63a7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "61582a27e9c8c1667f044bee7eefc62791eabbe7",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "Internal-Compiler-Error/linear-albegra-to-balance-chemical-equations",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "61582a27e9c8c1667f044bee7eefc62791eabbe7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "Internal-Compiler-Error/linear-albegra-to-balance-chemical-equations",
"max_issues_repo_path": "main.tex",
"max_line_length": 376,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "61582a27e9c8c1667f044bee7eefc62791eabbe7",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "Internal-Compiler-Error/linear-albegra-to-balance-chemical-equations",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3593,
"size": 10396
} |
\section{Related Work\label{sec-related}}
There is a vast body of literature available on the topic of formal verification,
including verification of hardware processing cores and low-level software programs.
Our work builds in a substantial way on a few known ideas that we will review in
this section. We thank the formal verification and programming languages
communities and hope that the formal semantics of the REDFIN processing core
will provide a new interesting benchmark for future studies.
We model the REDFIN microarchitecture using a~\emph{monadic state transformer
metalanguage} -- an idea with a long history.
\citet{fox2010trustworthy} formalise the Arm v7 instruction
set architecture in HOL4 and give a careful account to bit-accurate proofs of
the instruction decoder correctness. Later, \citet{kennedy2013coq}
formalised a subset of the x86 architecture in Coq, using monads for instruction
execution semantics, and \textsf{do}-notation for assembly language embedding.
% Both these models are formalised in proof assistants, thus are powered by full
% dependent types, which allow the usage of mechanised program correctness proofs.
\citet{degenbaev2012formal} formally specified the \emph{complete} x86
instruction set -- a truly monumental effort! -- using a custom domain-specific
language that can be translated to a formal proof system. Arm's Architecture
Specification Language (ASL) has been developed for the same purpose to formalise the
Arm v8 instruction set~\cite{reid2016cav}. The SAIL language~\cite{SAIL-lang} has
been designed as a generic language for ISA specification and was used to
specify the semantics of ARMv8-A, \mbox{RISC-V}, and \mbox{CHERI-MIPS}.
Our specification approach is similar to these three works, but we operate on a
much smaller scale of the REDFIN core and focus on verifying whole programs.
Our metalanguage is embedded in Haskell and does not have a rigorous
formalisation, i.e. we cannot prove the correctness of the REDFIN semantics
itself, which is a common concern, e.g. see~\citet{reid2017oopsla}. Moreover, our
verification workflow mainly relies on \emph{automated} theorem proving, rather
than on \emph{interactive} one. This is motivated by the cost of precise proof
assistant formalisations in terms of human resources: automated techniques are
more CPU-intensive, but cause fewer ``human-scaling issues''~\cite{reid2016cav}.
Our goal was to create a framework that could be seamlessly
integrated into an existing spacecraft engineering workflow, therefore it needed
to have as much proof automation as possible. The automation is achieved by means
of \emph{symbolic program execution}. \citet{Currie2006} applied
symbolic execution with uninterpreted functions to prove equivalence of low-level
assembly programs. The framework we present allows not only proving the
equivalence of low-level programs, but also their compliance with higher-level
specifications written in a subset of Haskell.
% A lot of research work has been done on the design of \emph{typed assembly
% languages}, e.g. see~\cite{Haas:2017:BWU:3140587.3062363}\cite{Morrisett:1999:SFT:319301.319345}.
% The low-level REDFIN assembly is untyped, but the syntactic language of
% arithmetic expressions that we implemented on top of it does have a simple type
% system. In principle, the REDFIN assembly itself may benefit from a richer type
% system, especially one enforcing correct operation with relevant mission-specific
% units of measurement~\cite{Kennedy:1997:RPU:263699.263761}.
Finally, we would like to acknowledge the projects and talks
that provided an initial inspiration for this work: the `Monads to Machine
Code' compiler by \citet{diehl-monads-to-machines}, \mbox{RISC-V} semantics
by~\citet{riscv-semantics}, the assembly monad by~\citet{asm-monad}, and
SMT-based program analysis by \citet{haskell-z3}.
\section*{Acknowledgements}
We would like to thank Vitaly Bragilevsky, Georgi Lyubenov, Neil Mitchell, Charles Morisset,
Artem Pelenitsyn, Danil Sokolov, as well as the three Haskell Symposium
reviewers for their helpful feedback on an earlier version of this paper. | {
"alphanum_fraction": 0.8103072828,
"avg_line_length": 67.7540983607,
"ext": "tex",
"hexsha": "d434a88b2c53ff7d3dfdd5eca5b4b09bd5777727",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tuura/redfin",
"max_forks_repo_path": "papers/acm-tecs-paper/7-related-work.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tuura/redfin",
"max_issues_repo_path": "papers/acm-tecs-paper/7-related-work.tex",
"max_line_length": 99,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tuura/redfin",
"max_stars_repo_path": "papers/acm-tecs-paper/7-related-work.tex",
"max_stars_repo_stars_event_max_datetime": "2020-04-05T19:13:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-05T19:13:46.000Z",
"num_tokens": 990,
"size": 4133
} |
% The contents of this file is
% Copyright (c) 2009- Charles R. Severance, All Righs Reserved
\chapter{Visualizing data}
So far we have been learning the Python language and then
learning how to use Python, the network, and databases
to manipulate data.
In this chapter, we take a look at three
complete applications that bring all of these things together
to manage and visualize data. You might use these applications
as sample code to help get you started in solving a
real-world problem.
Each of the applications is a ZIP file that you can download
and extract onto your computer and execute.
\section{Building a Google map from geocoded data}
\index{Google!map}
\index{Visualization!map}
In this project, we are using the Google geocoding API
to clean up some user-entered geographic locations of
university names and then placing the data on a Google
map.
\beforefig
\centerline{\includegraphics[height=2.25in]{figs2/google-map.eps}}
\afterfig
To get started, download the application from:
\url{www.py4inf.com/code/geodata.zip}
The first problem to solve is that the free Google geocoding
API is rate-limited to a certain number of requests per day. If you have
a lot of data, you might need to stop and restart the lookup
process several times. So we break the problem into two
phases.
\index{cache}
In the first phase we take our input ``survey'' data in the file
{\bf where.data}, read it one line at a time, retrieve the
geocoded information from Google, and store it
in a database, {\bf geodata.sqlite}.
Before we use the geocoding API for each user-entered location,
we simply check to see if we already have the data for that
particular line of input. The database is functioning as a
local ``cache'' of our geocoding data to make sure we never ask
Google for the same data twice.
You can restart the process at any time by removing the file
{\bf geodata.sqlite}.
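To make the caching idea concrete, here is a stripped-down sketch of the
pattern used in {\bf geoload.py}. It is not the actual program (the real
table layout and error handling are richer), but it shows the essential
``check the database first'' logic for a single location:

\beforeverb
\begin{verbatim}
import urllib
import sqlite3

conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS Locations
    (address TEXT, geodata TEXT)''')

address = 'Northeastern University'
cur.execute('SELECT geodata FROM Locations WHERE address = ?',
    (address, ))
row = cur.fetchone()
if row is not None:
    print 'Found in database', address
else:
    print 'Resolving', address
    url = 'http://maps.googleapis.com/maps/api/geocode/json?address='
    url = url + urllib.quote_plus(address)
    data = urllib.urlopen(url).read()
    cur.execute('INSERT INTO Locations (address, geodata) VALUES (?, ?)',
        (address, data))
    conn.commit()
\end{verbatim}
\afterverb
%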
Run the {\bf geoload.py} program. This program will read the input
lines in {\bf where.data} and for each line check to see if it is already
in the database. If we don't have the data for the location, it will
call the geocoding API to retrieve the data and store it in
the database.
Here is a sample run after there is already some data in the
database:
\beforeverb
\begin{verbatim}
Found in database Northeastern University
Found in database University of Hong Kong, ...
Found in database Technion
Found in database Viswakarma Institute, Pune, India
Found in database UMD
Found in database Tufts University
Resolving Monash University
Retrieving http://maps.googleapis.com/maps/api/
geocode/json?address=Monash+University
Retrieved 2063 characters { "results" : [
{u'status': u'OK', u'results': ... }
Resolving Kokshetau Institute of Economics and Management
Retrieving http://maps.googleapis.com/maps/api/
geocode/json?address=Kokshetau+Inst ...
Retrieved 1749 characters { "results" : [
{u'status': u'OK', u'results': ... }
...
\end{verbatim}
\afterverb
%
The first few locations are already in the database and so they
are skipped. The program scans to the point where it finds new
locations and starts retrieving them.
The {\bf geoload.py} program can be stopped at any time, and there is a counter
that you can use to limit the number of calls to the geocoding
API for each run. Given that the {\bf where.data} only has a few hundred
data items, you should not run into the daily rate limit, but if you
had more data it might take several runs over several days to
get your database to have all of the geocoded data for your input.
Once you have some data loaded into {\bf geodata.sqlite}, you can
visualize the data using the {\bf geodump.py} program. This
program reads the database and writes the file {\bf where.js}
with the location, latitude, and longitude in the form of
executable JavaScript code.
A run of the {\bf geodump.py} program is as follows:
\beforeverb
\begin{verbatim}
Northeastern University, ... Boston, MA 02115, USA 42.3396998 -71.08975
Bradley University, 1501 ... Peoria, IL 61625, USA 40.6963857 -89.6160811
...
Technion, Viazman 87, Kesalsaba, 32000, Israel 32.7775 35.0216667
Monash University Clayton ... VIC 3800, Australia -37.9152113 145.134682
Kokshetau, Kazakhstan 53.2833333 69.3833333
...
12 records written to where.js
Open where.html to view the data in a browser
\end{verbatim}
\afterverb
%
The file {\bf where.html} consists of HTML and JavaScript to visualize
a Google map. It reads the most recent data in {\bf where.js} to get
the data to be visualized. Here is the format of the {\bf where.js} file:
\beforeverb
\begin{verbatim}
myData = [
[42.3396998,-71.08975, 'Northeastern Uni ... Boston, MA 02115'],
[40.6963857,-89.6160811, 'Bradley University, ... Peoria, IL 61625, USA'],
[32.7775,35.0216667, 'Technion, Viazman 87, Kesalsaba, 32000, Israel'],
...
];
\end{verbatim}
\afterverb
%
This is a JavaScript variable that contains a list of lists.
The syntax for JavaScript list constants is very similar to
Python, so the syntax should be familiar to you.
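The writing side is equally plain. A simplified sketch of how a file like
{\bf where.js} can be produced from Python (not the actual {\bf geodump.py};
the rows below are made up) is just a loop that writes JavaScript text:

\beforeverb
\begin{verbatim}
rows = [ (42.3396998, -71.08975, 'Northeastern University'),
         (40.6963857, -89.6160811, 'Bradley University') ]

fhand = open('where.js', 'w')
fhand.write('myData = [\n')
for lat, lng, where in rows:
    fhand.write("[%f,%f, '%s'],\n" % (lat, lng, where))
fhand.write('];\n')
fhand.close()
\end{verbatim}
\afterverb
%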
Simply open {\bf where.html} in a browser to see the locations. You
can hover over each map pin to find the location that the
geocoding API returned for the user-entered input. If you
cannot see any data when you open the {\bf where.html} file, you might
want to check the JavaScript or developer console for your browser.
\section{Visualizing networks and interconnections}
\index{Google!page rank}
\index{Visualization!networks}
\index{Visualization!page rank}
In this application, we will perform some of the functions of a search
engine. We will first spider a small subset of the web and run
a simplified version of the Google page rank algorithm to
determine which pages are most highly connected, and then visualize
the page rank and connectivity of our small corner of the web.
We will use the D3 JavaScript visualization library
\url{http://d3js.org/} to produce the visualization output.
You can download and extract this application from:
\url{www.py4inf.com/code/pagerank.zip}
\beforefig
\centerline{\includegraphics[height=2.25in]{figs2/pagerank.eps}}
\afterfig
The first program ({\bf spider.py}) crawls a web
site and pulls a series of pages into the
database ({\bf spider.sqlite}), recording the links between pages.
You can restart the process at any time by removing the
{\bf spider.sqlite} file and rerunning {\bf spider.py}.
\beforeverb
\begin{verbatim}
Enter web url or enter: http://www.dr-chuck.com/
['http://www.dr-chuck.com']
How many pages:2
1 http://www.dr-chuck.com/ 12
2 http://www.dr-chuck.com/csev-blog/ 57
How many pages:
\end{verbatim}
\afterverb
%
In this sample run, we told it to crawl a website and retrieve two
pages. If you restart the program and tell it to crawl more
pages, it will not re-crawl any pages already in the database. Upon
restart it goes to a random non-crawled page and starts there. So
each successive run of {\bf spider.py} is additive.
\beforeverb
\begin{verbatim}
Enter web url or enter: http://www.dr-chuck.com/
['http://www.dr-chuck.com']
How many pages:3
3 http://www.dr-chuck.com/csev-blog 57
4 http://www.dr-chuck.com/dr-chuck/resume/speaking.htm 1
5 http://www.dr-chuck.com/dr-chuck/resume/index.htm 13
How many pages:
\end{verbatim}
\afterverb
%
You can have multiple starting points in the same database---within
the program, these are called ``webs''. The spider
chooses randomly amongst all non-visited links across all
the webs as the next page to spider.
If you want to dump the contents of the {\bf spider.sqlite} file, you can
run {\bf spdump.py} as follows:
\beforeverb
\begin{verbatim}
(5, None, 1.0, 3, u'http://www.dr-chuck.com/csev-blog')
(3, None, 1.0, 4, u'http://www.dr-chuck.com/dr-chuck/resume/speaking.htm')
(1, None, 1.0, 2, u'http://www.dr-chuck.com/csev-blog/')
(1, None, 1.0, 5, u'http://www.dr-chuck.com/dr-chuck/resume/index.htm')
4 rows.
\end{verbatim}
\afterverb
%
This shows the number of incoming links, the old page rank, the new page
rank, the id of the page, and the url of the page. The {\bf spdump.py} program
only shows pages that have at least one incoming link to them.
Once you have a few pages in the database, you can run page rank on the
pages using the {\bf sprank.py} program. You simply tell it how many page
rank iterations to run.
\beforeverb
\begin{verbatim}
How many iterations:2
1 0.546848992536
2 0.226714939664
[(1, 0.559), (2, 0.659), (3, 0.985), (4, 2.135), (5, 0.659)]
\end{verbatim}
\afterverb
%
You can dump the database again to see that page rank has been updated:
\beforeverb
\begin{verbatim}
(5, 1.0, 0.985, 3, u'http://www.dr-chuck.com/csev-blog')
(3, 1.0, 2.135, 4, u'http://www.dr-chuck.com/dr-chuck/resume/speaking.htm')
(1, 1.0, 0.659, 2, u'http://www.dr-chuck.com/csev-blog/')
(1, 1.0, 0.659, 5, u'http://www.dr-chuck.com/dr-chuck/resume/index.htm')
4 rows.
\end{verbatim}
\afterverb
%
You can run {\bf sprank.py} as many times as you like and it will simply refine
the page rank each time you run it. You can even run {\bf sprank.py} a few times
and then go spider a few more pages with {\bf spider.py} and then run {\bf sprank.py}
to reconverge the page rank values. A search engine usually runs both the crawling and
ranking programs all the time.
If you want to restart the page rank calculations without respidering the
web pages, you can use {\bf spreset.py} and then restart {\bf sprank.py}.
\beforeverb
\begin{verbatim}
How many iterations:50
1 0.546848992536
2 0.226714939664
3 0.0659516187242
4 0.0244199333
5 0.0102096489546
6 0.00610244329379
...
42 0.000109076928206
43 9.91987599002e-05
44 9.02151706798e-05
45 8.20451504471e-05
46 7.46150183837e-05
47 6.7857770908e-05
48 6.17124694224e-05
49 5.61236959327e-05
50 5.10410499467e-05
[(512, 0.0296), (1, 12.79), (2, 28.93), (3, 6.808), (4, 13.46)]
\end{verbatim}
\afterverb
%
For each iteration of the page rank algorithm it prints the average
change in page rank per page. The network initially is quite
unbalanced and so the individual page rank values change wildly between
iterations. But in a few short iterations, the page rank converges. You
should run {\bf sprank.py} long enough that the page rank values converge.
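If you are curious what each iteration actually does, here is a heavily
simplified, in-memory sketch of the idea. The real {\bf sprank.py} reads the
links from the database, handles more cases, and writes the new ranks back;
this bare version has no damping factor, assumes every page has at least one
outgoing link, and uses a made-up link structure:

\beforeverb
\begin{verbatim}
links = {1: [2, 3], 2: [3], 3: [1], 4: [3]}

ranks = dict()
for page in links:
    ranks[page] = 1.0

for i in range(50):
    newranks = dict()
    for page in links:
        newranks[page] = 0.0
    for page in links:
        share = ranks[page] / len(links[page])
        for target in links[page]:
            newranks[target] = newranks[target] + share
    diff = 0.0
    for page in links:
        diff = diff + abs(newranks[page] - ranks[page])
    print i + 1, diff / len(links)
    ranks = newranks
\end{verbatim}
\afterverb
%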
If you want to visualize the current top pages in terms of page rank,
run {\bf spjson.py} to read the database and write the data for the
most highly linked pages in JSON format to be viewed in a
web browser.
\beforeverb
\begin{verbatim}
Creating JSON output on spider.json...
How many nodes? 30
Open force.html in a browser to view the visualization
\end{verbatim}
\afterverb
%
You can view this data by opening the file {\bf force.html} in your web browser.
This shows an automatic layout of the nodes and links. You can click and
drag any node and you can also double-click on a node to find the URL
that is represented by the node.
If you rerun the other utilities, rerun {\bf spjson.py} and
press refresh in the browser to get the new data from {\bf spider.json}.
\section{Visualizing mail data}
Up to this point in the book, you have become quite familiar with our
{\bf mbox-short.txt} and {\bf mbox.txt} data files. Now it is time to take
our analysis of email data to the next level.
In the real world, sometimes you have to pull down mail data from servers.
That might take quite some time and the data might be inconsistent,
error-filled, and need a lot of cleanup or adjustment. In this section, we
work with an application that is the most complex so far and pull down nearly a
gigabyte of data and visualize it.
\beforefig
\centerline{\includegraphics[height=2.50in]{figs2/wordcloud.eps}}
\afterfig
You can download this application from:
\url{www.py4inf.com/code/gmane.zip}
We will be using data from a free email list archiving service called
\url{www.gmane.org}. This service is very popular with open source
projects because it provides a nice searchable archive of their
email activity. They also have a very liberal policy regarding accessing
their data through their API. They have no rate limits, but ask that you
don't overload their service and take only the data you need. You can read
gmane's terms and conditions at this page:
\url{http://gmane.org/export.php}
{\em It is very important that you make use of the gmane.org data
responsibly by adding delays to your access of their services and spreading
long-running jobs over a longer period of time. Do not abuse this free service
and ruin it for the rest of us.}
When the Sakai email data was spidered using this software, it produced nearly
a gigabyte of data and took a number of runs over several days.
The file {\bf README.txt} in the above ZIP may have instructions as to how
you can download a pre-spidered copy of the {\bf content.sqlite} file for
a majority of the Sakai email corpus so you don't have to spider for
five days just to run the programs. If you download the pre-spidered
content, you should still run the spidering process to catch up with
more recent messages.
The first step is to spider the gmane repository. The base URL
is hard-coded in the {\bf gmane.py} and is hard-coded to the Sakai
developer list. You can spider another repository by changing that
base url. Make sure to delete the {\bf content.sqlite} file if you
switch the base url.
The {\bf gmane.py} file operates as a responsible caching spider in
that it runs slowly and retrieves one mail message per second so
as to avoid getting throttled by gmane. It stores all of
its data in a database and can be interrupted and restarted
as often as needed. It may take many hours to pull all the data
down. So you may need to restart several times.
Here is a run of {\bf gmane.py} retrieving the last five messages of the
Sakai developer list:
\beforeverb
\begin{verbatim}
How many messages:10
http://download.gmane.org/gmane.comp.cms.sakai.devel/51410/51411 9460
[email protected] 2013-04-05 re: [building ...
http://download.gmane.org/gmane.comp.cms.sakai.devel/51411/51412 3379
[email protected] 2013-04-06 re: [building ...
http://download.gmane.org/gmane.comp.cms.sakai.devel/51412/51413 9903
[email protected] 2013-04-05 [building sakai] melete 2.9 oracle ...
http://download.gmane.org/gmane.comp.cms.sakai.devel/51413/51414 349265
[email protected] 2013-04-07 [building sakai] ...
http://download.gmane.org/gmane.comp.cms.sakai.devel/51414/51415 3481
[email protected] 2013-04-07 re: ...
http://download.gmane.org/gmane.comp.cms.sakai.devel/51415/51416 0
Does not start with From
\end{verbatim}
\afterverb
%
The program scans {\bf content.sqlite} from one up to the first message number not
already spidered and starts spidering at that message. It continues spidering
until it has spidered the desired number of messages or it reaches a page
that does not appear to be a properly formatted message.
Sometimes \url{gmane.org} is missing a message. Perhaps administrators delete messages,
or perhaps they simply get lost. If your spider stops and it seems it has hit
a missing message, go into the SQLite Manager, add a row with the missing id leaving
all the other fields blank, and restart {\bf gmane.py}. This will unstick the
spidering process and allow it to continue. These empty messages will be ignored in the next
phase of the process.
One nice thing is that once you have spidered all of the messages and have them in
{\bf content.sqlite}, you can run {\bf gmane.py} again to get new messages as
they are sent to the list.
The {\bf content.sqlite} data is pretty raw, with an inefficient data model,
and not compressed.
This is intentional as it allows you to look at {\bf content.sqlite}
in the SQLite Manager to debug problems with the spidering process.
It would be a bad idea to run any queries against this database, as they
would be quite slow.
The second process is to run the program {\bf gmodel.py}. This program reads the raw
data from {\bf content.sqlite} and produces a cleaned-up and well-modeled version of the
data in the file {\bf index.sqlite}. This file will be much smaller (often 10X
smaller) than {\bf content.sqlite} because it also compresses the header and body text.
Each time {\bf gmodel.py} runs it deletes and rebuilds {\bf index.sqlite}, allowing
you to adjust its parameters and edit the mapping tables in {\bf content.sqlite} to tweak the
data cleaning process. This is a sample run of {\bf gmodel.py}. It prints a line out each time
250 mail messages are processed so you can see some progress happening, as this program may
run for a while processing nearly a gigabyte of mail data.
\beforeverb
\begin{verbatim}
Loaded allsenders 1588 and mapping 28 dns mapping 1
1 2005-12-08T23:34:30-06:00 [email protected]
251 2005-12-22T10:03:20-08:00 [email protected]
501 2006-01-12T11:17:34-05:00 [email protected]
751 2006-01-24T11:13:28-08:00 [email protected]
...
\end{verbatim}
\afterverb
%
The {\bf gmodel.py} program handles a number of data cleaning tasks.
Domain names are truncated to two levels for .com, .org, .edu, and .net.
Other domain names are truncated to three levels. So si.umich.edu becomes
umich.edu and caret.cam.ac.uk becomes cam.ac.uk. Email addresses are also
forced to lower case, and some of the @gmane.org addresses like the following
\beforeverb
\begin{verbatim}
[email protected]
\end{verbatim}
\afterverb
%
are converted to the real address whenever there is a matching real email
address elsewhere in the message corpus.
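As a rough illustration of this truncation rule (this is not the actual code
from {\bf gmodel.py}), the logic could be written as:
\beforeverb
\begin{verbatim}
# Illustrative sketch of the domain truncation rule described above:
# keep two levels for .com/.org/.edu/.net and three levels otherwise.
def truncate_domain(domain):
    parts = domain.lower().split('.')
    keep = 2 if parts[-1] in ('com', 'org', 'edu', 'net') else 3
    return '.'.join(parts[-keep:]) if len(parts) > keep else domain

print(truncate_domain('si.umich.edu'))      # umich.edu
print(truncate_domain('caret.cam.ac.uk'))   # cam.ac.uk
\end{verbatim}
\afterverb
%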
In the {\bf content.sqlite} database there are two tables that allow
you to map both domain names and individual email addresses that change over
the lifetime of the email list. For example, Steve Githens used the following
email addresses as he changed jobs over the life of the Sakai developer list:
\beforeverb
\begin{verbatim}
[email protected]
[email protected]
[email protected]
\end{verbatim}
\afterverb
%
We can add two entries to the Mapping table in {\bf content.sqlite} so
{\bf gmodel.py} will map all three to one address:
\beforeverb
\begin{verbatim}
[email protected] -> [email protected]
[email protected] -> [email protected]
\end{verbatim}
\afterverb
%
You can also make similar entries in the DNSMapping table if there are multiple
DNS names you want mapped to a single DNS name. The following mapping was added to the Sakai data:
\beforeverb
\begin{verbatim}
iupui.edu -> indiana.edu
\end{verbatim}
\afterverb
%
so all the accounts from the various Indiana University campuses are tracked together.
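If you want to add such mapping entries programmatically instead of through the
SQLite Manager, a sketch like the following could be used. The table names
{\tt Mapping} and {\tt DNSMapping} come from the description above, but the
column names {\tt old} and {\tt new}, and the example addresses, are assumptions
for illustration only; check the real schema first.
\beforeverb
\begin{verbatim}
import sqlite3

# Add one email-address mapping and one domain mapping.
# Column names 'old' and 'new' are assumed; the addresses are placeholders.
conn = sqlite3.connect('content.sqlite')
cur = conn.cursor()
cur.execute('INSERT INTO Mapping (old, new) VALUES ( ?, ? )',
            ('old.address@example.com', 'current.address@example.com'))
cur.execute('INSERT INTO DNSMapping (old, new) VALUES ( ?, ? )',
            ('iupui.edu', 'indiana.edu'))
conn.commit()
conn.close()
\end{verbatim}
\afterverb
%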
You can rerun {\bf gmodel.py} over and over as you look at the data, and add mappings
to make the data cleaner and cleaner. When you are done, you will have a nicely
indexed version of the email in {\bf index.sqlite}. This is the file to use to do data
analysis. With this file, data analysis will be really quick.
The first, simplest data analysis is to determine "who sent the most mail?" and
"which organization sent the most mail?" This is done using {\bf gbasic.py}:
\beforeverb
\begin{verbatim}
How many to dump? 5
Loaded messages= 51330 subjects= 25033 senders= 1584
Top 5 Email list participants
[email protected] 2657
[email protected] 1742
[email protected] 1591
[email protected] 1304
[email protected] 1184
Top 5 Email list organizations
gmail.com 7339
umich.edu 6243
uct.ac.za 2451
indiana.edu 2258
unicon.net 2055
\end{verbatim}
\afterverb
%
Note how much more quickly {\bf gbasic.py} runs compared to {\bf gmane.py}
or even {\bf gmodel.py}. They are all working on the same data, but
{\bf gbasic.py} is using the compressed and normalized data in
{\bf index.sqlite}. If you have a lot of data to manage, a multistep
process like the one in this application may take a little longer to develop,
but will save you a lot of time when you really start to explore
and visualize your data.
You can produce a simple visualization of the word frequency in the subject lines
using the program {\bf gword.py}:
\beforeverb
\begin{verbatim}
Range of counts: 33229 129
Output written to gword.js
\end{verbatim}
\afterverb
%
This produces the file {\bf gword.js} which you can visualize using
{\bf gword.htm} to produce a word cloud similar to the one at the beginning
of this section.
A second visualization is produced by {\bf gline.py}. It computes email
participation by organizations over time.
\beforeverb
\begin{verbatim}
Loaded messages= 51330 subjects= 25033 senders= 1584
Top 10 Oranizations
['gmail.com', 'umich.edu', 'uct.ac.za', 'indiana.edu',
'unicon.net', 'tfd.co.uk', 'berkeley.edu', 'longsight.com',
'stanford.edu', 'ox.ac.uk']
Output written to gline.js
\end{verbatim}
\afterverb
%
Its output is written to {\bf gline.js} which is visualized using {\bf gline.htm}.
\beforefig
\centerline{\includegraphics[height=2.50in]{figs2/mailorg.eps}}
\afterfig
This is a relatively complex and sophisticated application and
has features to do some real data retrieval, cleaning, and visualization.
\chapter{\prog\ on the Macintosh}
As of version 1.2 of \prog, there is a Macintosh port of everything.
At the moment, the Mac version only runs on Power Macs.
A port to the 68K-based Macs is not supported.
%There hasn't been much testing of \prog\ on Power Macs
%configured differently than what we have in the group, but it's a fairly
%safe bet that \viewprog\ will not work on displays that have
%less than 256 colors available. There's also a distinct possibility
%that it won't work on displays with more than 256 colors. To be safe,
%go to the Monitors control panel and switch to 256 color mode before
%running \viewprog.
The Mac port was done using the CodeWarrior compiler from Metrowerks.
Source code and project files for the Metrowerks IDE are available
upon request.
CodeWarrior is fantastic! Metrowerks prices it reasonably, includes a ton of
useful examples and libraries, and has an
excellent upgrade policy. In addition, the Metrowerks technical support is
{\bf excellent}.
The Fortran bits of the program were converted using f2c on our
workstations, and then compiled on the Mac using a port of the f2c
libraries. All input and output that would normally go to the
console on a workstation is handled by the SIOUX library included with
CodeWarrior. The basic structure of the graphics stuff used in
\viewprog\ was done using the EasyApp application shell distributed
with CW.
\section{A couple of disclaimers}
The Mac version of \prog\ is not the most beautiful thing that the
world has ever seen. Some of the operations are handled in an ugly,
non-Mac way. This is a direct consequence of the program's
Unix heritage. Hopefully, in some future version these
difficulties will be eliminated.
The Mac versions of the programs are not nearly as stable as the UNIX
versions, principally because MacOS is not a
protected-mode operating system.
\section{Using \calcprog\ on a Macintosh}
When \calcprog\ starts up it will open a standard file choice dialog;
you should choose the input file in that dialog box. If the program
has problems opening the parameter file (usually called {\tt
eht\_parms.dat}), it will pop up another dialog box, which you should
use to find and select the parameter file. You can avoid this step
by keeping a copy of the parameter file in the same folder as the input file.
Rather than duplicating the file, you can make an alias for {\tt eht\_parms.dat},
copy the alias into the input folder, and then rename it {\tt eht\_parms.dat}.
%\section{Using \viewprog\ on a Macintosh}
%While the Mac version of \viewprog\ is very similar to the X version,
%there are a few differences:
%\begin{itemize}
%\item Instead of having button windows open up, new menus are added to the
%menu bar.
%\item When opening files, you will not be prompted for the file
%name, but will be given a standard Mac file choice dialog.
%\item Filling of projected DOSs is not done on screen.
%If curve filling is turned on, the printed output will
%be filled properly.
%\item Line styles are indicated with color on screen. This is because
%Quickdraw doesn't seem to support dashed line styles. The Postscript
%files generated still use dashed line styles like on the Unix version.
%\item Breaking lines and tube bonds are not always drawn properly on screen.
%This is due to another problem with Quickdraw, which does not define
%line thicknesses relative to the center of the line. Again, the
%Postscript output is fine.
%\item Standard mac printing isn't up and running yet, but you can still
% create a postscript file and print that just as you would print any
% postscript file from the mac.
%This will probably be fixed in a future version.
%\item Because the Mac mouse only has a single button, some of the
%selection features work differently. You must be in {\sf Select} mode
%to change active objects.
%\end{itemize}
\section{The fitting programs}
The fitting programs will open a file choice dialog on start up. You
should pick the {\em input} file used to run the calculation.
\section{General Mac hints}
If you get errors about the programs not having enough memory or not
being able to allocate matrices, increase the size of the memory
allocation for the troublesome program. If you don't know how to do
this: select the application you want to change, select ``Get
Info'' from the File menu (or hit CMD-I), then increase the
``Preferred Size'' entry.
%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{wrapfig}
\usepackage[pdftex]{graphicx}
\usepackage{geometry}
\geometry{a4paper, top=30mm, left=27mm, right=27mm}
\newcommand{\whitespace}{\rule{\linewidth}{0.0mm}}
\begin{document}
\title{Virtual Reality}
\author{Maximilian Sieß}
%======TITLE PAGE=========
\begin{titlepage}
\begin{center}
\includegraphics[scale=1]{images/uibk} \\
Leopold-Franzens-Universität Innsbruck
\linebreak
Institute of Computer Science \\
Interactive Graphics and Simulation Group
%\maketitle
\whitespace \\[3.0cm]
\LARGE \textbf{Virtual Reality}\\
\normalsize Einführung in das Wissenschaftliche Arbeiten\\
Seminarreport
\whitespace \\[1.8cm]
Maximilian Sieß
\whitespace \\[3.0cm]
advised by\\
Prof. Dr. Matthias Harders
\whitespace \\[5.0cm]
Innsbruck, \today
\end{center}
\end{titlepage}
%\newpage
%=========TABLE OF CONTENTS==============
%\tableofcontents
\newpage
\section{Abstract}
The dream of feeling present at a location other than where one actually is is becoming reality with the advent of virtual reality technology. Display technology, software, and input devices have evolved far enough to make this dream attainable, exciting, and useful in many different areas. The remaining drawbacks that prevent users from fully forgetting that they are looking at a computer-generated simulation are being reduced more and more, increasing the intensity of the intended effect.
\section{Introduction}
Virtual Reality is the attempt to use technology, such as head-mounted display devices and computer-generated graphics, to allow the user to experience a sense of presence in a virtual environment. This is used in a wide variety of cases, including but not limited to entertainment, education, medical therapy, research, and visualization. Virtual Reality has the potential to fundamentally change the way we experience, and interact with, data and software.
\subsection{Definitions}
\subsubsection{Virtual Reality}
Virtual Reality, or VR for short, is the field of computing that aims to create a virtual world, allowing the user to enter, experience and interact with it, by using specific devices to simulate the virtual environment and the feedback it would provide in order to make the experience as real as possible.
%"Virtual Reality stands for the field of computing which has the objective of creating a virtual world, having one immerse into it and giving one the capability of interacting with this world, while using specific devices to simulate an environment and stimulate one by feedback in order to make the experience as real as possible."
\cite{boas13}
\subsubsection{Immersion}
Immersion can be differentiated into three different forms. \textit{Engagement}, which has to come from the subject, not the medium. \textit{Engrossment}, which depends on how the software is designed, and is important for affecting a subject's emotions, if that is intended. And lastly \textit{total immersion}, or the sense of presence. Total immersion can be understood as what happens when someone is fully engulfed by a piece of media, like a book, movie, or computer game. \cite{Brown:2004:GIG:985921.986048} \\
Total immersion can also, in the case of VR, be taken more literally as the “extent to which a person's cognitive and perceptual systems are tricked into believing they are somewhere other than their physical location”. \cite{Patrick:2000:ULP:332040.332479} Users of the Oculus Rift have been recorded to be so immersed in the simulation that they tried to interact with virtual objects by reaching out to them, for example to touch or grab them. \cite{bastiaens14}
\subsubsection{Telepresence}
Telepresence is defined as having the experience of being present at a location other than one's physical one. The term was coined by Marvin Minsky in the 1980s. \cite{minsky1980telepresence} While Minsky's definition had in mind that one's actions have consequences at another physical location, which is not necessarily true for VR, Virtual Reality follows the same concept.
\section{Related Work}
\subsection{Technology}
\subsubsection{Head-Mounted Displays}
\begin{wrapfigure}{r}{0.55\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[scale=0.3]{images/or_small.png}
\end{center}
\vspace{-20pt}
\caption{An Oculus Rift DK1 headset and HDMI/DVI converter box}
\vspace{-10pt}
\end{wrapfigure}
Today, the most common way Virtual Reality is realized is via head-mounted displays: goggles containing a high-density display of the kind used in phones, which use special lenses and stereoscopic vision to create a believable view into the virtual environment. \\
Examples of such headsets are the \emph{Oculus Rift} by Oculus VR and one under the working title of \emph{Project Morpheus} by Sony. Both are very similar in execution and produce a VR experience of similar quality. \cite{goradia2014review}
At the time of this writing, neither has seen a commercial release, although a development kit for the Oculus Rift is available for purchase.
\subsubsection{Software}
Programming software for virtual reality does not differ much from regular computer graphics programming. Most commercial vendors offer their own API that helps translate a virtual camera into a two-camera 3D setup. It was found, however, that how the camera is used is critical for not giving the user of the virtual reality headset motion sickness or other unpleasant side effects. \cite{seppanen14}
For example, moving the camera without the user moving their head has resulted in severely negative feedback from users. The Oculus Rift Best Practices Manual states that "Acceleration creates a mismatch among your visual, vestibular, and proprioceptive senses; minimize the duration and frequency of such conflicts. Make accelerations as short (preferably instantaneous) and infrequent as you can." \cite{yao2014oculus}
\subsubsection{Input Devices}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-20pt}
\begin{center}
\includegraphics[scale=0.02]{images/ps_move.jpg}
\end{center}
\vspace{-20pt}
\caption{A Nintendo Wii motion controller and a Sony Playstation Move controller}
\vspace{-10pt}
\end{wrapfigure}
With head-mounted displays, vision, the groundwork for a feeling of presence in virtual reality, is laid out. Headphones or surround sound systems have been shown to suffice for the audio representation of the virtual environment.\\
Moving around naturally has proven difficult, however. While virtual reality demos often use a gamepad, it is less than ideal for upholding a sense of presence. The abstract translation from an analogue stick to moving oneself in virtual space, or interacting with one's surroundings, is not very intuitive. \cite{ruddle13} Some of the earliest virtual reality input devices were wired gloves, using fiber optics, conductive ink and mechanical sensors to determine the state of the user's hands. \cite{boas13} Other, later developments were done with "wands", like the Nintendo Wii motion controller or the Sony Move controller. \cite{boas13}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-10pt}
\begin{center}
\includegraphics[scale=0.1]{images/KinectSensor.png}
\end{center}
\vspace{-20pt}
\caption{A Microsoft Kinect 3D Camera with infrared sensors}
\vspace{-10pt}
\end{wrapfigure}
Another way to realise input is to use computer vision, with 3D cameras that record infrared, or end-user devices such as the Microsoft Kinect or the Sony Playstation Eye. \cite{boas13}
\subsection{Applications of Virtual Reality}
Most virtual reality hardware developed at this time is geared towards the entertainment business, with a special focus on simulation software, such as flight simulators, and video games. %From Oculus Rift, over Sony's Project Morpheus to Vive by HTC and Valve.
\subsubsection{Serious Games}
Yet VR can also be put to more humanitarian uses or scientific studies. Games made for these purposes are called \emph{serious games}, and are being used, for example, to allow the user to examine scanned, centuries-old manuscripts of which only one copy exists, at a virtual table. \cite{lorenzini2013serious}
\subsubsection{Education}
Virtual reality can be used to visualize and teach subjects in ways that were inaccessible before. Teaching history by visiting locations or events of historical importance, for example. \cite{mosaker2001visualising}
\subsubsection{Military}
Military interest in virtual reality for training and simulation purposes is among the most important when it comes to investment and deployment. \cite{moshell1993three}
\subsubsection{Medical Therapy}
Perhaps unexpected are the numerous ways virtual reality has been found useful in medical treatments: from distracting patients from intense pain during treatments by allowing them to feel like they are somewhere else \cite{hoffman14}, %over teaching walk-inhibited patients to walk again \cite{ruddle13},
to helping patients with Diplopia (commonly known as lazy eye) to see three-dimensionally again and potentially even improve their vision. \cite{blaha2014diplopia}
\section{Discussion}
With head-mounted displays, headphones and gloves, suspension of disbelief is easier than it has ever been before. All these technologies work towards the same goal: total immersion, without any barriers diminishing the feeling of presence.
At the moment, many such barriers remain: the lack of intuitive input devices, the small imperfections of current head-mounted displays, and the lack of research into virtual reality software design. Nevertheless, users report reaching total immersion most of the time regardless. \cite{bastiaens14}
\section{Conclusion}
The level of immersion achievable by technology available today, namely the Oculus Rift Development Kit, is already enough to use virtual reality effectively, as many studies and experiments have shown.
Once the first VR headset is commercially released for end users, even more developers will create applications and drive developments that find new possibilities to use telepresence and total immersion in beneficial, innovative, or simply entertaining ways.
\bibliographystyle{plain}
\bibliography{vr_bib}
\end{document}
%Todo still:
%insert more pictury pictures
%polish!
%done.
% Part: model-theory
% Chapter: models-of-arithmetic
% Section: non-standard-models
\documentclass[../../../include/open-logic-section]{subfiles}
\begin{document}
\olfileid{mod}{mar}{mdq}
\section{Models of $\Th{Q}$}
\begin{explain}
We know that there are non-standard !!{structure}s that make the same
!!{sentence}s true as~$\Struct{N}$ does, i.e., that are models
of~$\Th{TA}$. Since $\Sat{N}{\Th{Q}}$, any model of~$\Th{TA}$ is also
a model of~$\Th{Q}$. $\Th{Q}$ is much weaker than~$\Th{TA}$, e.g.,
$\Th{Q} \Proves/ \lforall[x][\lforall[y][\eq[(x +
y)][(y+x)]]]$. Weaker theories are easier to satisfy: they have
more models. E.g., $\Th{Q}$ has models which make
$\lforall[x][\lforall[y][\eq[(x + y)][(y+x)]]]$ false, but those
cannot also be models of~$\Th{TA}$, or $\Th{PA}$ for that matter.
Models of $\Th{Q}$ are also relatively simple: we can specify them
explicitly.
\end{explain}
\begin{ex}
\ollabel{ex:model-K-of-Q}
Consider the !!{structure}~$\Struct{K}$ with domain $\Domain{K} = \Nat
\cup \{a\}$ and interpretations
\begin{align*}
\Assign{\Obj{0}}{K} & = 0\\
\Assign{\prime}{K}(x) & =
\begin{cases}
x+1 & \text{if $x\in \Nat$}\\
a & \text{if $x = a$}
\end{cases}\\
\Assign{+}{K}(x, y) & =
\begin{cases}
x+y & \text{if $x$, $y \in\Nat$}\\
a & \text{otherwise}
\end{cases}\\
\Assign{\times}{K}(x, y) & =
\begin{cases}
xy & \text{if $x$, $y \in\Nat$}\\
a & \text{otherwise}\\
\end{cases}\\
\Assign{<}{K} & =
\Setabs{\tuple{x,y}}{x, y \in \Nat \text{ and } x<y} \cup
\Setabs{\tuple{x,a}}{x \in \Domain{K}}
\end{align*}
To show that $\Sat{K}{\Th{Q}}$ we have to verify that all axioms
of~$\Th{Q}$ are true in~$\Struct{K}$. For convenience, let's write
$x^\nssucc$ for $\Assign{\prime}{K}(x)$ (the ``successor'' of $x$
in~$\Struct{K}$), $x \nsplus y$ for $\Assign{+}{K}(x, y)$ (the ``sum''
of $x$ and $y$ in~$\Struct{K}$), $x \nstimes y$ for
$\Assign{\times}{K}(x, y)$ (the ``product'' of $x$ and~$y$
in~$\Struct{K}$), and $x \nsless y$ for $\tuple{x,y} \in
\Assign{<}{K}$. With these abbreviations, we can give the operations
in~$\Struct{K}$ more perspicuously as
\[
\begin{array}{c|c}
x & x^\nssucc \\
\hline
n & n+1 \\
a & a
\end{array}
\qquad
\begin{array}{c|ccc}
x \nsplus y & m & a \\
\hline
n & n+m & a \\
a & a & a \\
\end{array}
\qquad
\begin{array}{c|ccc}
x \nstimes y & m & a \\
\hline
n & nm & a \\
a & a & a \\
\end{array}
\]
We have $n \nsless m$ iff $n<m$ for $n$, $m \in \Nat$ and $x \nsless
a$ for all~$x \in \Domain{K}$.
$\Sat{K}{\lforall[x][\lforall[y][(\eq[x'][y'] \lif \eq[x][y])]]}$
since $\nssucc$ is !!{injective}. $\Sat{K}{\lforall[x][\eq/[\Obj 0][x']]}$
since $0$ is not a $\nssucc$-successor
in~$\Struct{K}$. $\Sat{K}{\lforall[x][(\eq[x][\Obj 0] \lor
\lexists[y][\eq[x][y']])]}$ since for every $n>0$, $n = (n-1)^\nssucc$,
and $a = a^\nssucc$.
$\Sat{K}{\lforall[x][\eq[(x + \Obj 0)][x]]}$ since $n \nsplus 0 = n+0 =
n$, and $a\nsplus 0 = a$ by definition
of~$\nsplus$. $\Sat{K}{\lforall[x][\lforall[y][\eq[(x + y')][(x +
y)']]]}$ is a bit trickier. If $n$, $m$ are both standard, we have:
\begin{align*}
(n \nsplus m^\nssucc) & = (n+(m+1)) = (n+m)+1 = (n \nsplus m)^\nssucc
\intertext{since $\nsplus$ and $^\nssucc$ agree with $+$ and $\prime$ on
standard numbers. Now suppose $x \in \Domain{K}$. Then}
(x \nsplus a^\nssucc) & = (x \nsplus a) = a = a^\nssucc = (x \nsplus a)^\nssucc
\intertext{The remaining case is if $y \in \Domain{K}$ but $x =
a$. Here we also have to distinguish cases according to whether $y =
n$ is standard or $y = a$:}
(a \nsplus n^\nssucc) & = (a \nsplus (n+1)) = a = a^\nssucc = (a \nsplus n)^\nssucc\\
(a \nsplus a^\nssucc) & = (a \nsplus a) = a = a^\nssucc = (a \nsplus a)^\nssucc
\end{align*}
This is of course a bit more detailed than needed. For instance, since
$a \nsplus z = a$ whatever $z$ is, we can immediately conclude $a \nsplus
a^\nssucc = a$. The remaining axioms can be verified the same way.
$\Struct{K}$ is thus a model of~$\Th{Q}$. Its ``addition''~$\nsplus$
is also commutative. But there are other sentences true
in~$\Struct{N}$ but false in~$\Struct{K}$, and vice versa. For
instance, $a \nsless a$, so $\Sat{K}{\lexists[x][x < x]}$ and
$\Sat/{K}{\lforall[x][\lnot x<x]}$. This shows that $\Th{Q} \Proves/
\lforall[x][\lnot x < x]$.
\end{ex}
\begin{prob}
Prove that $\Struct{K}$ from \olref[mod][mar][mdq]{ex:model-K-of-Q}
satisfies the remaining axioms of~$\Th{Q}$,
\begin{align*}
& \lforall[x][\eq[(x \times \Obj 0)][\Obj 0]] \tag{$!Q_6$}\\
& \lforall[x][\lforall[y][\eq[(x \times y')][((x \times y) + x)]]] \tag{$!Q_7$}\\
& \lforall[x][\lforall[y][(x < y \liff \lexists[z][\eq[(z' + x)][y]])]] \tag{$!Q_8$}
\end{align*}
Find !!a{sentence} only involving~$\prime$ true in~$\Struct{N}$ but
false in~$\Struct{K}$.
\end{prob}
\begin{ex}
\ollabel{ex:model-L-of-Q} Consider the !!{structure}~$\Struct{L}$ with
domain $\Domain{L} = \Nat \cup \{a, b\}$ and interpretations
$\Assign{\prime}{L} = \nssucc$, $\Assign{+}{L} = \nsplus$ given by
\[
\begin{array}{c|c}
x & x^\nssucc \\
\hline
n & n+1 \\
a & a\\
b & b
\end{array}
\qquad
\begin{array}{c|ccc}
x \nsplus y & m & a & b\\
\hline
n & n+m & b & a\\
a & a & b & a\\
b & b & b & a
\end{array}
\]
Since $\nssucc$ is !!{injective}, $0$ is not in its range, and every
$x \in \Domain{L}$ other than~$0$ is, axioms $!Q_1$--$!Q_3$ are true
in~$\Struct{L}$. For any $x$, $x \nsplus 0 = x$, so $!Q_4$ is true as
well. For $!Q_5$, consider $x \nsplus y^\nssucc$ and $(x \nsplus
y)^\nssucc$. They are equal if $x$ and $y$ are both standard, since
then $\nssucc$ and $\nsplus$ agree with $\prime$ and $+$. If $x$ is
non-standard, and $y$ is standard, we have $x \nsplus y^\nssucc = x =
x^\nssucc = (x \nsplus y)^\nssucc$. If $x$ and $y$ are both
non-standard, we have four cases:
\begin{align*}
& a \nsplus a^\nssucc = b = b^\nssucc = (a \nsplus a)^\nssucc\\
& b \nsplus b^\nssucc = a = a^\nssucc = (b \nsplus b)^\nssucc\\
& b \nsplus a^\nssucc = b = b^\nssucc = (b \nsplus a)^\nssucc\\
& a \nsplus b^\nssucc = a = a^\nssucc = (a \nsplus b)^\nssucc\\
\intertext{If $x$ is standard, but $y$ is non-standard, we have}
& n \nsplus a^\nssucc = n \nsplus a = b = b^\nssucc = (n \nsplus a)^\nssucc\\
& n \nsplus b^\nssucc = n \nsplus b = a = a^\nssucc = (n \nsplus b)^\nssucc
\end{align*}
So, $\Sat{L}{!Q_5}$. However, $a \nsplus 0 \neq 0 \nsplus a$, so
$\Sat/{L}{\lforall[x][\lforall[y][\eq[(x+y)][(y+x)]]]}$.
\end{ex}
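As an informal aside (not part of the original text), the finite tables above
lend themselves to a quick mechanical spot-check. The following Python sketch
encodes the successor and ``addition'' operations of~$\Struct{L}$, verifies
$!Q_5$ on a sample of the domain, and exhibits the failure of commutativity;
the sample bound of $20$ is an arbitrary choice.
\begin{verbatim}
# Informal spot-check of the tables for L (illustrative only).
# 'a' and 'b' stand for the two non-standard elements.
N = 20                                   # arbitrary sample of standard numbers
dom = list(range(N)) + ['a', 'b']

def succ(x):                             # x'
    return x + 1 if isinstance(x, int) else x

def plus(x, y):                          # x (+) y, following the table above
    if isinstance(x, int) and isinstance(y, int):
        return x + y
    if isinstance(x, int):               # n + a = b,  n + b = a
        return 'b' if y == 'a' else 'a'
    if x == 'a':                         # a + m = a,  a + a = b,  a + b = a
        return 'b' if y == 'a' else 'a'
    return 'a' if y == 'b' else 'b'      # b + m = b,  b + a = b,  b + b = a

# Q5: for all x, y:  x + y' = (x + y)'
assert all(plus(x, succ(y)) == succ(plus(x, y)) for x in dom for y in dom)
print(plus('a', 0), plus(0, 'a'))        # prints: a b  -- commutativity fails
\end{verbatim}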
\begin{prob}
Expand $\Struct{L}$ of \olref[mod][mar][mdq]{ex:model-L-of-Q} to
include $\nstimes$ and $\nsless$ that interpret~$\times$ and $<$. Show
that your structure satisfies the remaining axioms of~$\Th{Q}$,
\begin{align*}
& \lforall[x][\eq[(x \times \Obj 0)][\Obj 0]] \tag{$!Q_6$}\\ &
\lforall[x][\lforall[y][\eq[(x \times y')][((x \times y) + x)]]]
\tag{$!Q_7$}\\ & \lforall[x][\lforall[y][(x < y \liff \lexists[z][\eq[(z'+x)][y]])]] \tag{$!Q_8$}
\end{align*}
\end{prob}
\begin{prob}
In $\Struct{L}$ of \olref[mod][mar][mdq]{ex:model-L-of-Q}, $a^\nssucc
= a$ and $b^\nssucc = b$. Is there a model of~$\Th{Q}$ in which
$a^\nssucc = b$ and $b^\nssucc = a$?
\end{prob}
\begin{explain}
We've explicitly constructed models of~$\Th{Q}$ in which the
non-standard !!{element}s live ``beyond'' the standard elements. In
fact, that much is required by the axioms. A non-standard
!!{element}~$x$ cannot be ${} \nsless 0$, since $\Th{Q} \Proves
\lforall[x][\lnot x<0]$ (see \olref[inc][req][min]{lem:less-zero}).
Also, for every $n$, $\Th{Q} \Proves \lforall[x][(x < \num{n}' \lif
(\eq[x][\num{0}] \lor \eq[x][\num{1}] \lor \dots \lor
\eq[x][\num{n}]))]$ (\olref[inc][req][min]{lem:less-nsucc}), so we
can't have $a \nsless n$ for any~$n>0$.
\end{explain}
\end{document}
\subsection{Sufficient statistics}
We can estimate a population parameter using statistics computed from a sample.
A statistic is sufficient if it contains all the information in the sample needed to estimate the parameter.
We can describe the role of a parameter, given a statistic \(t\), through the conditional distribution:
\(P(x|\theta, t)\)
\(t\) is a sufficient statistic for \(\theta\) if:
\(P(x|t)=P(x|\theta, t)\)
That is, once \(t\) is known, the data carry no further information about \(\theta\).
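As a standard illustration (added here for concreteness), consider \(n\) independent Bernoulli(\(\theta\)) observations \(x_1, \dots, x_n\) and the statistic \(t = \sum_i x_i\):
\[
P(x_1,\dots,x_n \mid \theta) = \theta^{t}(1-\theta)^{n-t},
\qquad
P(x_1,\dots,x_n \mid \theta, t) = {n \choose t}^{-1}.
\]
The conditional distribution given \(t\) does not depend on \(\theta\), so \(t\) is sufficient for \(\theta\).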
%---------------------------------------------------------------------
%
%-----------------------------resumen.tex%----------------------------
%
%---------------------------------------------------------------------
%
% Contains the abstract chapter.
%
% It is created as an unnumbered chapter.
%
%---------------------------------------------------------------------
\chapter{Abstract}
\cabeceraEspecial{Abstract}
\begin{FraseCelebre}
\begin{Frase}
There is light at the end of the tunnel... hopefully it's not a freight train.
\end{Frase}
\begin{Fuente}
M. Carey
\end{Fuente}
\end{FraseCelebre}
\providecommand{\keywords}[1]{\textbf{\textit{Index terms---}} #1}
\keywords{CBR, case-based reasoning, data visualization, report, artificial intelligence}
\bigskip
This document presents my Bachelor's Thesis for the Double Degree in Mathematics and Computer Science, developed within the area of intelligent data analytics and Case-Based Reasoning (CBR).
Throughout the project, the principles applicable to data processing in general, and the science behind them, are explained in a way intended to be usable in any context by any user, provided the data are supplied in the right format.
Nowadays, highly heterogeneous data collection and processing methods are employed in all industries.
However, the techniques used to extract useful information from the data usually have a general-purpose aim,
and the domain-specific work is often done manually. In this work we aim to provide an automated way to analyze information while taking into account knowledge and techniques relevant to the domain of the analysis.
The objective of this Final Degree Project is the development of a prototype capable of carrying out this analysis while learning from user input. As a proof of concept, we have included several medical domains, each with specific methods and techniques developed for it.
To serve as a base for this analysis, we have also developed a system for storing, loading and analyzing the domain information and the information provided by the user. This system is the backbone of our architecture and enables the Case-Based Reasoning analysis to function correctly in very different situations, providing the metrics and functions needed for each case.
You can find the code and proofs of concept \href{https://www.github.com/jorses/tfg}{here}.
% Local variable for emacs, so that it finds the master compilation file
% and some AucTeX keyboard shortcuts work better
%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../Tesis.tex"
%%% End:
% **********************************************************************
% Author: Ajahn Chah
% Translator:
% Title: Right Practice -- Steady Practice
% First published: Food for the Heart
% Comment: Given at Wat Keuan to a group of university students who had taken temporary ordination, during the hot season of 1978
% Copyright: Permission granted by Wat Pah Nanachat to reprint for free distribution
% **********************************************************************
\chapterFootnote{\textit{Note}: This talk has been published elsewhere under the title: `\textit{Right Practice -- Steady Practice}'}
\chapter{Steady Practice}
\index[general]{Wat Wana Potiyahn}
\dropcaps{W}{at Wana Potiyahn}\footnote{One of the many branch monasteries of Ajahn Chah's main monastery, Wat Pah Pong.} here is certainly very peaceful, but this is meaningless if our minds are not calm. All places are peaceful. That some may seem distracting is because of our minds. However, a quiet place can help us to become calm, by giving us the opportunity to train and thus harmonize with its calm.
\index[general]{mind!training}
\index[general]{six senses}
\index[general]{contact}
You should all bear in mind that this practice is difficult. To train in other things is not so difficult, it's easy, but the human mind is hard to train. The Lord Buddha trained his mind. The mind is the important thing. Everything within this body-mind system comes together at the mind. The eyes, ears, nose, tongue and body all receive sensations and send them into the mind, which is the supervisor of all the other sense organs. Therefore, it is important to train the mind. If the mind is well trained, all problems come to an end. If there are still problems, it's because the mind still doubts, it doesn't know in accordance with the truth. That is why there are problems.
\index[general]{mindfulness!all postures}
So recognize that all of you have come fully prepared for practising Dhamma. Whether standing, walking, sitting or reclining, you are provided with the tools you need to practise, wherever you are. They are there, just like the Dhamma. The Dhamma is something which abounds everywhere. Right here, on land or in water, wherever, the Dhamma is always there. The Dhamma is perfect and complete, but it's our practice that's not yet complete.
The Lord, the fully enlightened Buddha, taught a means by which all of us may practise and come to know this Dhamma. It isn't a big thing, only a small thing, but it's right. For example, look at hair. If we know even one strand of hair, then we know every strand, both our own and also that of others. We know that they are all simply `hair'. By knowing one strand of hair we know it all.
\index[general]{conditions!nature of}
Or consider people. If we see the true nature of conditions within ourselves, then we know all the other people in the world also, because all people are the same. Dhamma is like this. It's a small thing and yet it's big. That is, to see the truth of one condition is to see the truth of them all. When we know the truth as it is, all problems come to an end.
\index[general]{craving}
\index[general]{desire}
\looseness=1
Nevertheless, the training is difficult. Why is it difficult? It's difficult because of wanting, \pali{\glsdisp{tanha}{ta\d{n}h\=a.}} If you don't `want' then you don't practise. But if you practise out of desire you won't see the Dhamma. Think about it, all of you. If you don't want to practise, you can't practise. You must first want to practise in order to actually do the practice. Whether stepping forward or stepping back you meet desire. This is why the cultivators of the past have said that this practice is something that's extremely difficult to do.
\index[general]{Dhamma!and mind}
You don't see Dhamma because of desire. Sometimes desire is very strong, you want to see the Dhamma immediately, but the Dhamma is not your mind -- your mind is not yet Dhamma. The Dhamma is one thing and the mind is another. It's not that whatever you like is Dhamma and whatever you don't like isn't. That's not the way it goes.
\index[similes]{tree in a forest!mind}
Actually this mind of ours is simply a condition of nature, like a tree in the forest. If you want a plank or a beam, it must come from a tree, but a tree is still only a tree. It's not yet a beam or a plank. Before it can really be of use to us we must take that tree and saw it into beams or planks. It's the same tree but it becomes transformed into something else. Intrinsically it's just a tree, a condition of nature. But in its raw state it isn't yet of much use to those who need timber. Our mind is like this. It is a condition of nature. As such it perceives thoughts, it discriminates into beautiful and ugly and so on.
This mind of ours must be further trained. We can't just let it be. It's a condition of nature! Train it to realize that it's a condition of nature. Improve on nature so that it's appropriate to our needs, which is Dhamma. Dhamma is something which must be practised and brought within.
\index[general]{study!of Dhamma}
\index[general]{conventions!of names}
\index[general]{concepts}
If you don't practise you won't know. Frankly speaking, you won't know the Dhamma by just reading it or studying it. Or if you do know it, your knowledge is still defective. For example, this spittoon here. Everybody knows it's a spittoon but they don't fully know the spittoon. Why don't they fully know it? If I called this spittoon a saucepan, what would you say? Suppose that every time I asked for it I said, `Please bring that saucepan over here,' that would confuse you. Why so? Because you don't fully know the spittoon. If you did, there would be no problem. You would simply pick up that object and hand it to me, because actually there isn't any spittoon. Do you understand? It's a spittoon due to convention. This convention is accepted all over the country, so it's a spittoon. But there isn't any real `spittoon'. If somebody wants to call it a saucepan it can be a saucepan. It can be whatever you call it. This is called `concept'. If we fully know the spittoon, even if somebody calls it a saucepan there's no problem. Whatever others may call it, we are unperturbed because we are not blind to its true nature. This is \glsdisp{one-who-knows}{one who knows} Dhamma.
\index[general]{enlightenment}
Now let's come back to ourselves. Suppose somebody said, `You're crazy!' or, `You're stupid,' for example. Even though it may not be true, you wouldn't feel so good. Everything becomes difficult because of our ambitions to have and to achieve. Because of these desires to get and to be, because we don't know according to the truth, we have no contentment. If we know the Dhamma, are enlightened to the Dhamma, greed, aversion and delusion will disappear. When we understand the way things are, there is nothing for them to rest on.
\index[general]{meditation!desire in}
\index[general]{desire!for peace}
Why is the practice so difficult and arduous? Because of desires. As soon as we sit down to meditate we want to become peaceful. If we didn't want to find peace we wouldn't sit, we wouldn't practise. As soon as we sit down we want peace to be right there, but wanting the mind to be calm makes for confusion, and we feel restless. This is how it goes. So the Buddha says, `Don't speak out of desire, don't sit out of desire, don't walk out of desire. Whatever you do, don't do it with desire.' Desire means wanting. If you don't want to do something you won't do it. If our practice reaches this point, we can get quite discouraged. How can we practise? As soon as we sit down there is desire in the mind.
\index[general]{self}
\index[general]{letting go}
It's because of this that the body and mind are difficult to observe. If they are not the self nor belonging to self, then who do they belong to? Because it's difficult to resolve these things, we must rely on wisdom. The Buddha says we must practise with `letting go'. But if we let go, then we just don't practise, right? Because we've let go.
\index[similes]{buying coconuts!practice}
Suppose we went to buy some coconuts in the market, and while we were carrying them back someone asked:
`What did you buy those coconuts for?'
`I bought them to eat.'
`Are you going to eat the shells as well?'
`No.'
`I don't believe you. If you're not going to eat the shells then why did you buy them also?'
\index[general]{desire!to practise}
\index[general]{craving}
Well what do you say? How are you going to answer their question? We practise with desire. If we didn't have desire we wouldn't practise. Practising with desire is \pali{ta\d{n}h\=a}. Contemplating in this way can give rise to wisdom, you know. For example, those coconuts: Are you going to eat the shells as well? Of course not. Then why do you take them? Because the time hasn't yet come for you to throw them away. They're useful for wrapping up the coconut in. If, after eating the coconut, you throw the shells away, there is no problem.
\index[general]{restraint!of desire}
\index[general]{concepts!vs. transcendence}
Our practice is like this. The Buddha said, `Don't act on desire, don't speak from desire, don't eat with desire.' Standing, walking, sitting or reclining, whatever, don't do it with desire. This means to do it with detachment. It's just like buying the coconuts from the market. We're not going to eat the shells but it's not yet time to throw them away. We keep them first.
This is how the practice is. Concept (\pali{\glsdisp{sammuti}{sammuti}}) and transcendence (\pali{\glsdisp{vimutti}{vimutti}}) are co-existent, just like a coconut. The flesh, the husk and the shell are all together. When we buy a coconut we buy the whole lot. If somebody wants to accuse us of eating coconut shells that's their business, we know what we're doing.
Wisdom is something each of us finds for oneself. To see it we must go neither fast nor slow. What should we do? Go to where there is neither fast nor slow. Going fast or going slow is not the way.
\index[general]{impatience}
\index[general]{M\=ara}
But we're all impatient, we're in a hurry. As soon as we begin we want to rush to the end, we don't want to be left behind. We want to succeed. When it comes to fixing their minds for meditation some people go too far. They light the incense, prostrate and make a vow, `As long as this incense is not yet completely burnt I will not rise from my sitting, even if I collapse or die, no matter what, I'll die sitting.' Having made their vow they start their sitting. As soon as they start to sit, \glsdisp{mara}{M\=ara's} hordes come rushing at them from all sides. They've only sat for an instant and already they think the incense must be finished. They open their eyes for a peek, `Oh, there's still ages left!'
They grit their teeth and sit some more, feeling hot, flustered, agitated and confused. Reaching the breaking point they think, `It \textit{must} be finished by now'. They have another peek. `Oh, no! It's not even \textit{half-way} yet!'
\index[general]{hating oneself}
\index[general]{hindrances!ill-will}
Two or three times and it's still not finished, so they just give up, pack it in and sit there hating themselves. `I'm so stupid, I'm so hopeless!' They sit and hate themselves, feeling like a hopeless case. This just gives rise to frustration and hindrances. This is called the hindrance of ill-will. They can't blame others so they blame themselves. And why is this? It's all because of wanting.
\index[general]{Buddha, the!determination to attain enlightenment}
Actually it isn't necessary to go through all that. To concentrate means to concentrate with detachment, not to concentrate yourself into knots. But maybe we read the scriptures about the life of the Buddha, how he sat under the Bodhi tree and determined to himself:
\index[similes]{small and big cars!meditation}
\index[general]{recognising one's own level}
`As long as I have still not attained Supreme Enlightenment I will not rise from this place, even if my blood dries up.'
Reading this in the books you may think of trying it yourself. You'll do it like the Buddha. But you haven't considered that your car is only a small one. The Buddha's car was a really big one, he could take it all in one go. With only your tiny, little car, how can you possibly take it all at once? It's a different story altogether.
\index[general]{balance!in effort}
Why do we think like that? Because we're too extreme. Sometimes we go too low, sometimes we go too high. The point of balance is so hard to find.
\index[general]{practice!with desire}
\index[general]{laziness}
Now I'm only speaking from experience. In the past my practice was like this. Practising in order to get beyond wanting. If we don't want, can we practise? I was stuck here. But to practise with wanting is suffering. I didn't know what to do, I was baffled. Then I realized that the practice which is steady is the important thing. One must practise consistently. They call this the practice that is `consistent in all postures'. Keep refining the practice, don't let it become a disaster. Practice is one thing, disaster is another.\footnote{The play on words here between the Thai \textit{`patibat'} (practice) and \textit{`wibut'} (disaster) is lost in the English.} Most people usually create disaster. When they feel lazy they don't bother to practise, they only practise when they feel energetic. This is how I tended to be.
\index[general]{practice!consistency}
All of you ask yourselves now, is this right? To practise when you feel like it, not when you don't: is that in accordance with the Dhamma? Is it straight? Is it in line with the teaching? This is what makes practice inconsistent.
\index[general]{disaster}
\index[general]{practice!all postures}
Whether you feel like it or not you should practise just the same: this is how the Buddha taught. Most people wait till they're in the mood before practising; when they don't feel like it they don't bother. This is as far as they go. This is called `disaster', it's not practice. In the true practice, whether you are happy or depressed you practice; whether it's easy or difficult you practise; whether it's hot or cold you practise. It's straight like this. In the real practice, whether standing, walking, sitting or reclining you must have the intention to continue the practice steadily, making your \glsdisp{sati}{sati} consistent in all postures.
\index[general]{mindfulness}
\index[general]{mindfulness!all postures}
At first thought it seems as if you should stand for as long as you walk, walk for as long as you sit, sit for as long as you lie down. I've tried it but I couldn't do it. If a meditator were to make his standing, walking, sitting and lying down all equal, how many days could he keep it up for? Stand for five minutes, sit for five minutes, lie down for five minutes. I couldn't do it for very long. So I sat down and thought about it some more. `What does it all mean? People in this world can't practise like this!'
Then I realized. `Oh, that's not right, it can't be right because it's impossible to do. Standing, walking, sitting, reclining \ldots{} make them all consistent. To make the postures consistent the way they explain it in the books is impossible.'
\index[general]{wisdom}
\index[general]{clear comprehension}
But it is possible to do this: the mind, just consider the mind. To have sati, recollection, \pali{\glsdisp{sampajanna}{sampaja\~n\~na,}} self-awareness, and \glsdisp{panna}{pa\~n\~n\=a,} all-round wisdom, this you can do. This is something that's really worth practising. This means that while standing we have sati, while walking we have sati, while sitting we have sati, and while reclining we have sati -- consistently. This is possible. We put awareness into our standing, walking, sitting, lying down -- into all postures.
\index[general]{Buddho!mantra}
When the mind has been trained like this it will constantly recollect \pali{\glsdisp{buddho}{Buddho,} Buddho, Buddho} \ldots{} which is knowing. Knowing what? Knowing what is right and what is wrong at all times. Yes, this is possible. This is getting down to the real practice. That is, whether standing, walking, sitting or lying down there is continuous sati.
\index[general]{sensuality!sensual indulgence}
\index[general]{self-mortification}
Then you should understand those conditions which should be given up and those which should be cultivated. You know happiness, you know unhappiness. When you know happiness and unhappiness your mind will settle at the point which is free of happiness and unhappiness. Happiness is the loose path, \pali{k\=amasukhallik\=anuyogo}. Unhappiness is the tight path, \pali{atta\-kila\-math\=anu\-yogo}.\footnote{These are the two extremes pointed out as wrong paths by the Buddha in his First Discourse. They are normally rendered as `indulgence in sense pleasures' and `self-mortification'.} If we know these two extremes, we pull it back. We know when the mind is inclining towards happiness or unhappiness and we pull it back, we don't allow it to lean over. We have this sort of awareness, we adhere to the One Path, the single Dhamma. We adhere to the awareness, not allowing the mind to follow its inclinations.
\index[similes]{lazy worker!practice}
\index[general]{teachings!acceptance of}
But in your practice it doesn't tend to be like that, does it? You follow your inclinations. If you follow your inclinations it's easy, isn't it? But this is the ease which causes suffering, like someone who can't be bothered working. He takes it easy, but when the time comes to eat he hasn't got anything. This is how it goes.
I've contended with many aspects of the Buddha's teaching in the past, but I couldn't really beat him. Nowadays I accept it. I accept that the many teachings of the Buddha are straight down the line, so I've taken those teachings and used them to train both myself and others.
\index[general]{practice!pa\d{t}ipad\=a}
\index[general]{effort}
\index[general]{mind!contemplation of}
The practice which is important is \pali{\glsdisp{patipada}{pa\d{t}ipad\=a.}} What is \pali{pa\d{t}ipad\=a}? It is simply all our various activities: standing, walking, sitting, reclining and everything else. This is the \pali{pa\d{t}ipad\=a} of the body. Now the \pali{pa\d{t}ipad\=a} of the mind: how many times in the course of today have you felt low? How many times have you felt high? Have there been any noticeable feelings? We must know ourselves like this. Having seen those feelings, can we let go? Whatever we can't yet let go of, we must work with. When we see that we can't yet let go of some particular feeling, we must take it and examine it with wisdom. Reason it out. Work with it. This is practice. For example, when you are feeling zealous, practise, and when you feel lazy, try to continue the practice. If you can't continue at `full speed' then at least do half as much. Don't just waste the day away by being lazy and not practising. Doing that will lead to disaster, it's not the way of a practitioner.
Now I've heard some people say, `Oh, this year I was really in a bad way.'
`How come?'
`I was sick all year. I couldn't practise at all.'
\index[general]{practice!ill health}
\index[general]{practice!constant}
Oh! If they don't practise when death is near, when will they ever practise? If they're feeling well, do you think they'll practise? No, they only get lost in happiness. If they're suffering they still don't practise, they get lost in that. I don't know when people think they're going to practise! They can only see that they're sick, in pain, almost dead from fever -- that's right, bring it on heavy, that's where the practice is. When people are feeling happy it just goes to their heads and they get vain and conceited.
We must cultivate our practice. What this means is that whether you are happy or unhappy you must practise just the same. If you are feeling well you should practise, and if you are feeling sick you should also practise. There are those who think, `This year I couldn't practise at all, I was sick the whole time'. If these people are feeling well, they just walk around singing songs. This is wrong thinking, not right thinking. This is why the practitioners of the past have all maintained the steady training of the heart. If things go wrong, just let them be with the body, not in the mind.
\index[general]{solitude!solitary practice}
\index[general]{practice!solitary}
There was a time in my practice, after I had been practising about five years, when I felt that living with others was a hindrance. I would sit in my \glsdisp{kuti}{ku\d{t}\={\i}} and try to meditate and people would keep coming by for a chat and disturbing me. I ran off to live by myself. I thought I couldn't practise with those people bothering me. I was fed up, so I went to live in a small, deserted monastery in the forest, near a small village. I stayed there alone, speaking to no-one because there was nobody else to speak to.
\index[general]{pa-kow}
After I'd been there about fifteen days the thought arose, `Hmm. It would be good to have a novice or \textit{\glsdisp{pah-kow}{pah-kow}} here with me. He could help me out with some small jobs.' I knew it would come up, and sure enough, there it was!
`Hey! You're a real character! You say you're fed up with your friends, fed up with your fellow monks and novices, and now you want a novice. What's this?'
`No,' it says, `I want a good novice.'
\index[general]{people!good people}
`There! Where are all the good people, can you find any? Where are you going to find a good person? In the whole monastery there were only no-good people. You must have been the only good person, to have run away like this!'
You have to follow it up like this, follow up the tracks of your thoughts until you see.
\index[general]{praise and blame}
\index[general]{criticism}
`Hmm. This is the important one. Where is there a good person to be found? There aren't any good people, you must find the good person within yourself. If you are good in yourself then wherever you go will be good. Whether others criticize or praise you, you are still good. If you aren't good, then when others criticize you, you get angry, and when they praise you, you are pleased.'
\index[general]{other people}
\index[general]{foundation!solid}
At that time I reflected on this and have found it to be true from that day on until the present. Goodness must be found within. As soon as I saw this, that feeling of wanting to run away disappeared. In later times, whenever I had that desire arise I let it go. Whenever it arose I was aware of it and kept my awareness on that. Thus I had a solid foundation. Wherever I lived, whether people condemned me or whatever they said, I would reflect that the point is not whether \textit{they} were good or bad. Good or evil must be seen within ourselves. The way other people are, that's their concern.
\index[general]{laziness}
Don't go thinking, `Oh, today is too hot,' or, `Today is too cold,' or, `Today is \ldots{}.' Whatever the day is like, that's just the way it is. Really, you are simply blaming the weather for your own laziness. We must see the Dhamma within ourselves, then there is a surer kind of peace.
So for all of you who have come to practise here, even though it's only for a few days, many things will arise. Many things may be arising which you're not even aware of. There is some right thinking, some wrong thinking -- many, many things. So I say this practice is difficult.
\index[general]{meditation!good and bad}
Even though some of you may experience some peace when you sit in meditation, don't be in a hurry to congratulate yourselves. Likewise, if there is some confusion, don't blame yourselves. If things seem to be good, don't delight in them, and if they're not good don't be averse to them. Just look at it all, look at what you have. Just look, don't bother judging. If it's good, don't hold fast to it; if it's bad, don't cling to it. Good and bad can both bite, so don't hold fast to them.
\index[similes]{raising a child!praise and blame}
\index[general]{middle way}
The practice is simply to sit, sit and watch it all. Good moods and bad moods come and go as is their nature. Don't only praise your mind or only condemn it, know the right time for these things. When it's time for congratulations, congratulate it, but just a little, don't overdo it. Just like teaching a child, sometimes you may have to spank it a little. In our practice sometimes we may have to punish ourselves, but don't punish yourself all the time. If you punish yourself all the time, in a while you'll just give up the practice. But then you can't just give yourself a good time and take it easy either. That's not the way to practise. We practise according to the Middle Way. What is the Middle Way? This Middle Way is difficult to follow, you can't rely on your moods and desires.
\index[general]{mindfulness!all postures}
Don't think that just sitting with your eyes closed is practice. If you do think this way then quickly change your thinking! Steady practice is having the attitude of practice while standing, walking, sitting and lying down. When coming out of sitting meditation, reflect that you're simply changing postures. If you reflect in this way you will have peace. Wherever you are, you will have this attitude of practice with you constantly, you will have a steady awareness within yourself.
\index[general]{moods!following}
Those of you who simply indulge in your moods, spending the whole day letting the mind wander where it wants, will find that the next evening in sitting meditation all you get is the `backwash' from the day's aimless thinking. There is no foundation of calm because you have let it go cold all day. If you practise like this, your mind gets gradually further and further from the practice. When I ask some of my disciples, `How is your meditation going?' they say, `Oh, it's all gone now.' You see? They can keep it up for a month or two but in a year or two it's all finished.
\index[general]{concentration}
\index[general]{practice!determination}
Why is this? It's because they don't take this essential point into their practice. When they've finished sitting they let go of their \glsdisp{samadhi}{sam\=adhi.} They start to sit for shorter and shorter periods, till they reach the point where as soon as they start to sit they want to finish. Eventually they don't even sit. It's the same with bowing to the Buddha image. At first they make the effort to prostrate every night before going to sleep, but after a while their minds begin to stray. Soon they don't bother to prostrate at all, they just nod, till eventually it's all gone. They throw out the practice completely.
\index[general]{practice!constant}
\index[general]{mindfulness}
Therefore, understand the importance of sati, practise constantly. Right practice is steady practice. Whether standing, walking, sitting or reclining, the practice must continue. This means that practice, meditation, is done in the mind, not in the body. If our mind has zeal, is conscientious and ardent, there will be awareness. The mind is the important thing. The mind is that which supervises everything we do.
\index[general]{mindfulness!all postures}
\index[general]{awareness!maintaining}
When we understand properly, we practise properly. When we practise properly, we don't go astray. Even if we only do a little, that is still all right. For example, when you finish sitting in meditation, remind yourselves that you are not actually finishing meditation, you are simply changing postures. Your mind is still composed. Whether standing, walking, sitting or reclining, you have sati with you. If you have this kind of awareness you can maintain your internal practice. In the evening when you sit again the practice continues uninterrupted. Your effort is unbroken, allowing the mind to attain calm.
This is called steady practice. Whether we are talking or doing other things we should try to make the practice continuous. If our mind has recollection and self-awareness continuously, our practice will naturally develop, it will gradually come together. The mind will find peace, because it will know what is right and what is wrong. It will see what is happening within us and realize peace.
\index[general]{Noble Eightfold Path}
\index[general]{s\={\i}la, sam\=adhi, pa\~n\~n\=a}
If we are to develop \glsdisp{sila}{s\={\i}la} or sam\=adhi, we must first have pa\~n\~n\=a. Some people think that they'll develop moral restraint one year, sam\=adhi the next year and the year after that they'll develop wisdom. They think these three things are separate. They think that this year they will develop s\={\i}la, but if the mind is not firm (sam\=adhi), how can they do it? If there is no understanding (pa\~n\~n\=a), how can they do it? Without sam\=adhi or pa\~n\~n\=a, s\={\i}la will be sloppy.
\index[similes]{mango!s\={\i}la, sam\=adhi, pa\~n\~n\=a}
In fact these three come together at the same point. When we have s\={\i}la we have sam\=adhi, when we have sam\=adhi we have pa\~n\~n\=a. They are all one, like a mango. Whether it's small or fully grown, it's still a mango. When it's ripe it's still the same mango. If we think in simple terms like this, we can see it more easily. We don't have to learn a lot of things, just know these things, know our practice.
\index[general]{giving up}
When it comes to meditation some people don't get what they want, so they just give up, saying they don't yet have the merit to practise meditation. They can do bad things, they have that sort of talent, but they don't have the talent to do good. They give it up, saying they don't have a good enough foundation. This is the way people are, they side with their defilements.
\index[general]{right view}
Now that you have this chance to practise, please understand that whether you find it difficult or easy to develop sam\=adhi it is entirely up to you, not the sam\=adhi. If it is difficult, it is because you are practising wrongly. In our practice we must have \glsdisp{right-view}{`right view'} (\pali{samm\=a-di\d{t}\d{t}hi}). If our view is right, everything else is right: right view, right intention, right speech, right action, right livelihood, right effort, right recollection, right concentration -- the \glsdisp{eightfold-path}{Eightfold Path.} When there is right view all the other factors will follow.
\index[general]{practice!vs. study}
Whatever happens, don't let your mind stray off the track. Look within yourself and you will see clearly. As I see it, for the best practice, it isn't necessary to read many books. Take all the books and lock them away. Just read your own mind. You have all been burying yourselves in books from the time you entered school. I think that now you have this opportunity and have the time, take the books, put them in a cupboard and lock the door. Just read your mind.
\index[general]{not sure}
\index[general]{uncertainty}
Whenever something arises within the mind, whether you like it or not, whether it seems right or wrong, just cut it off with, `this is not a sure thing.' Whatever arises just cut it down, `not sure, not sure.' With just this single axe you can cut it all down. It's all `not sure'.
For the duration of this next month that you will be staying in this forest monastery, you should make a lot of headway. You will see the truth. This `not sure' is really an important one. This one develops wisdom. The more you look, the more you will see `not sure-ness'. After you've cut something off with `not sure' it may come circling round and pop up again. Yes, it's truly `not sure'. Whatever pops up just stick this one label on it all -- `not sure'. You stick the sign on, `not sure', and in a while, when its turn comes, it crops up again, `Ah, not sure.' Dig here! Not sure. You will see this same old one who's been fooling you month in, month out, year in, year out, from the day you were born. There's only this one who's been fooling you all along. See this and realize the way things are.
\index[general]{clinging}
\index[general]{sensations!and the world}
When your practice reaches this point you won't cling to sensations, because they are all uncertain. Have you ever noticed? Maybe you see a clock and think, `Oh, this is nice.' Buy it and see -- in not many days you're bored with it already. `This pen is really beautiful,' so you take the trouble to buy one. In not many months you tire of it. This is how it is. Where is there any certainty?
\index[similes]{old rag!sensations}
If we see all these things as uncertain, their value fades away. All things become insignificant. Why should we hold on to things that have no value? We keep them only as we might keep an old rag to wipe our feet with. We see all sensations as equal in value because they all have the same nature.
When we understand sensations we understand the world. The world is sensations and sensations are the world. If we aren't fooled by sensations, we aren't fooled by the world. If we aren't fooled by the world, we aren't fooled by sensations.
The mind which sees this will have a firm foundation of wisdom. Such a mind will not have many problems. Any problems it does have, it can solve. When there are no more problems there are no more doubts. Peace arises in their stead. This is called `practice'. If we really practise it must be like this.
\section{Approach}\label{approach}
Gene expression data are usually represented by a matrix $X = (x_{ij})$, where entry $x_{ij}$ is the expression level of gene $i$ in individual or tissue sample $j$.
The main goal of the approach described in the current section is to infer the network topology that regulates the main interactions of the genes under investigation.
Generally speaking, a network model is formed by a set of vertices $G$, representing the genes in our specific case, and a set of edges $E$ representing pairwise interactions. The existence of edge $(i,j)$ represents the conditional dependency between gene $i$ and gene $j$. If such an edge is not present, the two genes are considered conditionally independent given all the others, written symbolically as $(G_i \perp G_j) \mid G_k,\ \forall k \neq i,j$.
In the specific application described in this paper, we aim to find the best set of neighbours associated with each gene. We interpret the biological meaning of genetic associations in the terms of regression analysis: regressing the expression value of a gene (the \emph{response}) against the remaining ones in the dataset (the \emph{independent variables}) selects a subset of the genes most strongly associated with the response.
Although many mathematical models have been considered for inferring associations between variables in genetics, linear regression is a type of analysis that has found broad consensus in computational biology due to its modelling simplicity (\citealp{linregression2, linregression1}). %Moreover, the capabilities of linear regression can be extended to other genetic compounds from the field of proteomics, metabolomics, methylation etc. [move to intro]
One limitation of linear regression methods lies in the assumption of a linear dependency between variables, a hypothesis that does not always hold in biology. One strategy to overcome this limitation is to split the problem of learning the topology of the entire gene network into a number of smaller linear problems, by regressing each covariate against all the remaining ones. Such a strategy, first used in (\citealp{Meinshausen06highdimensional}), makes the linearity assumption more suitable for the analysis of biological data: assuming linearity on a local scale is a far more plausible conjecture for data from genomics and proteomics.
Another limitation arises in the case of high-dimensional data, where the number of genes is usually several orders of magnitude larger than the number of individuals.
Penalised regression has been considered as a way to circumvent this limitation, since its penalty term encourages sparsity of the final network.
Specifically, the Lasso is one such regression method: it converts the problem of estimating the covariance structure into an optimisation problem in which a convex function, applied to each variable, is minimised.
Given $X_i$, the expression of gene $i$, and the expression profiles of the remaining genes (denoted $X$ for simplicity), the Lasso-based estimate is the solution of Equation \ref{eq:lasso}:
\begin{equation}
\label{eq:lasso}
\hat{\Theta}^{i,\lambda} =
\argmin_{\Theta \,:\, \Theta_i = 0}
\left(\frac{1}{n} \| X_i - X\Theta \|^{2}_2 + \lambda \| \Theta \|_1\right)
\end{equation}
The vector of regression coefficients $\Theta$ determines the conditional independence structure between variables. The $\ell_1$-norm of the coefficient vector tends to shrink the coefficients of some variables to zero, removing them from the set of selected variables associated with the response, as extensively explained in (\citealp{Tibshirani94regressionshrinkage}).
The right choice of the shrinkage factor $\lambda$ is crucial for controlling the rate of false positives and false negatives. Although several approaches to approximate the optimal $\lambda$ have been proposed (\citealp{adalasso, efron2004, tuneparamsel}), a reliable estimate that is widely used in practice is provided by cross-validation (\citealp{glmnet}).
We use a 3-fold cross-validation approach and estimate $\hat{\lambda}_{cv}$ from a subset of the data. Cross-validation can be time consuming, especially when applied to datasets with a large number of covariates. Therefore, we estimate the shrinkage factor that minimises the expected generalisation error, over a grid of $\lambda$ values, on $10\%$ of the total number of genes. The R package \texttt{glmnet} has been used to provide this estimate.
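For illustration only, the following sketch estimates a single shrinkage factor in the same spirit, using Python with scikit-learn rather than the R \texttt{glmnet} package actually used in this work; the $10\%$ subsampling follows the description above, while the function name and the aggregation of the per-gene estimates by their median are our own assumptions.
\begin{verbatim}
# Sketch only: scikit-learn stand-in for the R/glmnet estimate of lambda_cv.
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_lambda_cv(X, subset_frac=0.1, seed=0):
    """X: (n_samples, n_genes) expression matrix; returns one shrinkage factor."""
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    subset = rng.choice(n_genes, size=max(1, int(subset_frac * n_genes)), replace=False)
    lambdas = []
    for g in subset:
        y = X[:, g]                              # current response gene
        Z = np.delete(X, g, axis=1)              # remaining genes as predictors
        cv = LassoCV(cv=3, fit_intercept=False).fit(Z, y)
        lambdas.append(cv.alpha_)                # lambda minimising the CV error
    return float(np.median(lambdas))             # median aggregation is our assumption
\end{verbatim}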
The method we describe in this paper is a two-step approach that repeatedly performs the regression of Equation \ref{eq:lasso} for each gene, considered as the response, with respect to all remaining genes, considered as independent variables. The response gene is not included in the set of independent variables. Although biological evidence supports the existence of self-interactions and positive/negative feedback loops within regulatory networks (\citealp{netmotif, avrahamfeedback2011, generegmodel}), these are not considered here, in order to avoid complex interactions and to simplify the inferred network topology as much as possible.
\textbf{In step 1}, the set $S$ of variables associated with the current response gene is selected. We use a Lasso method that does not fit the intercept. As explained, the choice of the optimal $\lambda$ occurs prior to this stage.
\textbf{In step 2}, we use a permutation-based approach to assess the significance of the associated edges detected in step 1. The values of the response variable are permuted a number of times specified as a parameter. For each permutation, we count how many times each variable within the set $S$ of selected genes has been selected again. At the end of the permutation test, the variables with the smallest counters are selected as the best candidate variables associated with the current response gene.
This approach is motivated by the fact that, after permuting the response variable, the genes selected in step 1 should no longer be associated with it; any gene that is still selected can therefore be regarded as selected by chance.
\begin{algorithm}
\begin{algorithmic}[1]
\Procedure{lasso2net}{$X_i,X, B, fanout, best$}
\State $fit \gets lasso.cv(X_i, X)$
\State $\lambda_{cv} \gets fit.lambda$
\State $S \gets fit.coeffs$
\State $S \gets sort(S, decreasing)[1:best] $
\While{$r < B$}
\State $X^{perm}_i \gets permute(X_i)$
\State $permfit \gets lasso(X^{perm}_i, X, \lambda_{cv})$
\State $update(counter[S])$ \Comment{update counters of selected variables}
\State $r\gets r+1$
\EndWhile
\State $sel\gets sort(counter[S], increasing)[1:fanout]$ \Comment{order by counter and select the first $fanout$}
%\Comment{variables sorted by increasing order of counter}
\State \textbf{return} $sel$
\EndProcedure
\end{algorithmic}
\caption{Variable selection and permutation-based stability test}
\label{algo:perm}
\end{algorithm}
The procedure we propose is summarised in Algorithm \ref{algo:perm}.
It selects the $best$ genes most strongly associated with the current response: the vector of regression coefficients is sorted in decreasing order and the first $best$ variables are selected (\emph{line 5}). The parameter $best$ can be tuned to retain a variable number of strong genetic effects, according to the type of disease under investigation and the dataset at the researcher's disposal, which in turn might determine the number of significant genetic compounds to be considered for further analysis.
At each permutation, the counters of the selected variables are updated (\emph{line 9}), and after $B$ permutations the $fanout$ genes with the smallest counters are selected. These variables represent the most stable genes associated with the response variable (\emph{line 12}).
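To make the procedure concrete, the following Python sketch (an illustration of Algorithm \ref{algo:perm} using scikit-learn in place of the R \texttt{glmnet} package; the function and parameter names simply mirror the pseudocode and are not part of the original implementation) reproduces the two steps for a single response gene.
\begin{verbatim}
# Illustrative sketch of Algorithm 1, not the original implementation.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

def lasso2net(x_i, X, B=100, fanout=5, best=20, seed=0):
    """x_i: (n,) response gene; X: (n, p) remaining genes.
    Returns the indices of the 'fanout' most stable neighbours."""
    rng = np.random.default_rng(seed)
    # Step 1: cross-validated Lasso without intercept; keep the 'best' largest coefficients.
    fit = LassoCV(cv=3, fit_intercept=False).fit(X, x_i)
    lam = fit.alpha_
    S = np.argsort(np.abs(fit.coef_))[::-1][:best]
    counter = np.zeros(len(S), dtype=int)
    # Step 2: permutation test; genes still selected after permuting the
    # response are assumed to have been selected by chance.
    for _ in range(B):
        x_perm = rng.permutation(x_i)
        perm_fit = Lasso(alpha=lam, fit_intercept=False).fit(X, x_perm)
        counter += (np.abs(perm_fit.coef_[S]) > 0).astype(int)
    # Keep the 'fanout' candidates selected least often under permutation.
    return S[np.argsort(counter)[:fanout]]
\end{verbatim}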
For large values of $B$ we perform an additional significance test for the smallest counter. A critical case occurs whenever different covariates are selected with similar frequency, which in turn produces nearly uniform counter values for a large number of selected variables. Lasso-based regression methods are prone to issues of this type because one variable from a group of highly correlated variables can be selected at random at each permutation. To mitigate such side effects, we compute the empirical distribution of the counters of the selected covariates regressed against each permuted response. The p-value of the smallest counter is calculated from this empirical distribution. We found that a significance level of $0.05$ improves the precision (calculated as $\frac{TP}{TP+FP}$) by $3\%$.
The algorithm described above finds a solution of Equation \ref{eq:lasso} for each response variable and subsequently identifies the most stable non-zero regression coefficients associated with each gene.
Consequently, when the procedure is performed on the entire set of genes, an adjacency matrix can be built directly from the counters of the selected variables and used to visualise the topology of the inferred network of interactions.
Since we are interested in discovering genetic interactions, we convert the non-zero values of the adjacency matrix to $1$, denoting the presence of an edge in the graph.
As one would expect, the method does not guarantee the adjacency matrix to be symmetric. A symmetrisation procedure would be required before further analysis or visualisation of the predicted network.
A number of approaches to matrix symmetrisation have been proposed in (\citealp{wna}). Given two nodes $i$ and $j$ and the edge weights $M_{ij}$ and $M_{ji}$, a symmetric adjacency matrix can be built by taking the average value, $M_{ij} = M_{ji} = \frac{M_{ij}+M_{ji}}{2}$; by selecting the largest weight, $M_{ij} = M_{ji} = \max(M_{ij}, M_{ji})$; or by selecting the smallest weight, $M_{ij} = M_{ji} = \min(M_{ij}, M_{ji})$.
For a binary adjacency matrix, in which each entry represents the presence or absence of the edge $(i,j)$, the AND rule will set $M_{ij} = M_{ji} = (M_{ij} \land M_{ji})$.
In order to detect a generic association between nodes $i$ and $j$, we symmetrise the adjacency matrix by applying the OR rule, which considers two variables as associated if at least one of them is associated with the other. Namely,
$M_{ij} = M_{ji} = (M_{ij} \vee M_{ji})$.
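As a minimal illustration (a Python/NumPy sketch; the function name is ours), the OR rule can be applied to a binary adjacency matrix as follows.
\begin{verbatim}
import numpy as np

def symmetrise_or(M):
    """OR rule: the undirected edge (i, j) is kept if either direction was selected."""
    A = (np.asarray(M) != 0).astype(int)   # binarise the counters/coefficients
    return np.maximum(A, A.T)              # elementwise OR for 0/1 matrices
\end{verbatim}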
The main goal of the work described here is to detect the structure of the network of the main genetic associations, disregarding the magnitude and direction of each interaction.
\documentclass{tufte-handout}
\usepackage[utf8]{inputenc}
\usepackage{tikz}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{color}
\newcommand{\red}[1]{{\color{red} #1}}
\usepackage{booktabs}
\title{Red Scare!}
\begin{document}
\maketitle
\begin{abstract}
Oh no! All I wanted was to write a straightforward reachability exercise.
But some of the vertices have turned red, giving me cruel ideas for much harder questions!
\end{abstract}
\section{Problems}
In every problem of this exercise, we consider a graph $G$ with vertex set $V(G)$ and edge set $E(G)$.
The graph can be directed or undirected.
Every graph in this exercise is simple (no multiple edges between any pair of vertices) and unweighted.
We fix the notation $n=|V(G)|$ and $m=|E(G)|$.
\begin{marginfigure}
\begin{tikzpicture}[yscale=.4, xscale = .7]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$, label = left:$s$] at (0,0) {};
\node (1) [label=below:$1$] at (1,0) {};
\node (2) [label=below:$2$] at (2,0) {};
\node (3) [label=below:$3$, label = right:$t$] at (3,0) {};
\node (4) [label=above:$4$, fill=red, draw, inner sep =1.5pt] at (1.5,1) {};
\node (5) [label=left:$5$, fill=red, draw, inner sep =1.5pt] at (.5,3) {};
\node (6) [label=above:$6$] at (1.5,3) {};
\node (7) [label=right:$7$, fill=red, draw, inner sep =1.5pt] at (2.5,3) {};
\draw (0) -- (1) -- (2) -- (3);
\draw (0) -- (4) -- (3);
\draw (0) -- (5) -- (6) -- (7) -- (3);
\end{scope}
\end{tikzpicture}
\caption{Example graph $G_{\text{ex}}$ corresponding to the file {\tt G-ex}.}
\end{marginfigure}
Every graph comes with two specified vertices $s,t\in V(G)$ called the \emph{start} and \emph{end} vertices, and a subset $R\subseteq V(G)$ of \emph{red} vertices.
In particular, $R$ can include $s$ and $t$.
We fix the notation $r= |R|$.
In the example graph $G_{\text{ex}}$, we have $s=0$, $t=3$, and $R=\{4,5,7\}$.
An \emph{$s,t$-path} is a sequence of distinct vertices $v_1,\ldots, v_l$ such that $v_1=s$, $v_l=t$, and $v_iv_{i+1}\in E(G)$ for each $i\in\{1,\ldots,l-1\}$.
The \emph{length} of such a path is $l-1$, the number of edges.
Note that this definition requires the vertices on a path to be distinct; such a path is sometimes called a \emph{simple} path.
The problems we want solved for each graph are the following:
\begin{description}
\item[None] Return the length of a shortest $s,t$-path internally avoiding $R$.
To be precise, let $P$ be the set of $s,t$-paths using no vertices from $R$ except maybe $s$ and $t$ themselves. Let $l(p)$ denote the length of a path $p$.
Return $\min\{\,l(p)\colon p\in P\,\}$.
If no such path exists, return `-1'.
Note that if the edge $st$ exists then the answer is 1, no matter the colour of $s$ or $t$.
In $G_{\text{ex}}$, the answer is 3 (because of the path 0, 1, 2, 3.)
\item[Some] Return `true' if there is a path from $s$ to $t$ that includes at least one vertex from $R$.
Otherwise, return `false.'
In $G_{\text{ex}}$, the answer is `yes' (in fact, two such paths exist: the path 0, 4, 3 and the path 0, 5, 6, 7, 3.)
\item [Many] Return the maximum number of red vertices on any path from $s$ to $t$.
To be precise, let $P$ be the set of $s,t$-paths and let $r(p)$ denote the number of red vertices on a path $p$.
Return $\max\{\,r(p)\colon p\in P\,\}$.
If no path from $s$ to $t$ exists, return `-1'.
In $G_{\text{ex}}$, the answer is `2' (because of the path 0, 5, 6, 7, 3.)
\item [Few] Return the minimum number of red vertices on any path from $s$ to $t$.
To be precise, let $P$ be the set of $s,t$-paths and let $r(p)$ denote the number of red vertices on a path $p$.
Return $\min\{\,r(p)\colon p\in P\,\}$.
If no path from $s$ to $t$ exists, return `-1'.
In $G_{\text{ex}}$, the answer is 0 (because of the path 0, 1, 2, 3.)
\item [Alternate] Return `true' if there is a path from $s$ to $t$ that alternates between red and non-red vertices.
To be precise, a path $v_1,\ldots, v_l$ is \emph{alternating} if for each $i\in\{1,\ldots,l-1\}$, exactly one endpoint of the edge $v_iv_{i+1}$ is red.
Otherwise, return `false.'
In $G_{\text{ex}}$, the answer is `yes' (because of the path 0, 5, 6, 7, 3.)
\end{description}
\subsection{Requirements}
\paragraph{Solved instances.}
For three of the problems (I’m not telling which), you need to be able to handle \emph{all} instances.
For the remaining two problems, you will not be able to solve all instances, but should be able to solve roughly half of them.
Your solutions must run in polynomial time.
If you do have a polynomial-time implementation that takes more than 1 hour on some instance, just abort it and report this in your report.\sidenote{However, this should not happen. As a guideline, I have a non-optimised Python implementation that solves all the instances in a single run in half an hour on a low-powered 2012 laptop.}
For two of the problems (I’m not telling which), you will not be able to write an algorithm that works for all graphs, because the problems are hard in general.
For one of those problems, you should be able to argue for computational hardness with a simple reduction.
The other problem will probably mystify you.
A sophisticated hardness argument exists in the research literature (but not in the course text book)---you are welcome to try to find it and include a reference in your report.
But you are not required to find an explanation, and absolutely not required to come up with the reduction yourself.
\paragraph{Universality.}
Your algorithms must run in polynomial time on a well-defined class of graphs.
“Well-defined class” means something like “all graphs,”\sidenote{“All graphs” means all simple, loop-less, unweighted graphs.} “directed graphs,” “undirected graphs,” “bipartite graphs,” “acyclic graphs,” “graphs of bounded treewidth,” “planar graphs,” “expanders” or even a combination of these.
In particular, you are allowed to do something like this:
\begin{quotation}
\vspace*{-3ex} \begin{tabbing}
if \= (isBipartite($G$)) then\\
\> $\cdots\qquad\qquad$\=\# run the Strumpf--Chosa algorithm \\
else \\
\> print(``?!'') \>\# problem is NP-hard for non-bipartite graphs, so give up
\end{tabbing}
\end{quotation}
On the other hand, you are not allowed to base your algorithm on specific knowledge of which graphs are in the {\tt data} directory.
For an extreme example, the following would not be allowed:
\begin{quotation}
\vspace*{-3ex}
\begin{tabbing}
if \= (filename == "rusty-1-17") then\\
\> print("14") \=\# solved by hand\\
else \\
\> print(``?'') \>\# no idea what to do
\end{tabbing}
\end{quotation}
\paragraph{Libraries.}
This exercise focusses on choosing \emph{between} algorithms, not implementing them.
Thus, you are \emph{not} required to write these algorithms from scratch.
For instance, if you need a minimum spanning tree, and you already have a working implementation of Prim’s algorithm, you are free to reuse that.
In particular, you are free to use a library implementation of these algorithms.
You are also free to use implementations from books or from other people, provided that you are not violating any intellectual property rights.
(It goes without saying that you properly cite the source of these implementations in your report.)
You are highly encouraged to use your own implementation of standard graph algorithms that you may have made for some other exercise.
If you do this, then separate that implementation in your source code, maybe by leaving it in its own file.
Attribute all the original authors in the source code.
\paragraph{Deliverables.}
Hand in
\begin{enumerate}
\item a report; follow the skeleton in {\tt doc/report.pdf}.
\item a text file {\tt results.txt} with all the results, as specified in the report.
\item the programs you have written to answer the questions, including any scripts you need to run these programs on the instances, and a README file that explains how to recreate {\tt results.txt} by running your programs.
\end{enumerate}
\newpage
\section{Appendix: Gallery of Graphs}
This gallery consists of descriptions and drawings of many of the graphs in the {\tt data} directory.
Ideally, these descriptions are useful for finding mistakes in your code.
In particular, for many of these graphs it is obvious what the correct answers are.
Some of the graphs are \emph{random} graphs---they have no structure and are pretty boring.
Others, such as the Word graphs, have a lot of structure.
\subsection{Individual graphs}
\begin{marginfigure}
\begin{tikzpicture}[scale=.4]
\node at (1,-2) {\tt P3-0};
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$, label = above:$s$] at (0,0) {};
\node (1) [label=below:$1$, fill=red, draw, inner sep =1.5pt] at (1,0) {};
\node (2) [label=below:$2$, label = above:$t$] at (2,0) {};
\draw (0) -- (1) -- (2);
\end{scope}
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=.4]
\node at (1,-2) {\tt P3-1};
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$, label = above:$s$] at (0,0) {};
\node (1) [label=below:$1$, label = above:$t$] at (1,0) {};
\node (2) [label=below:$2$, fill=red, draw, inner sep =1.5pt] at (2,0) {};
\draw (0) -- (1) -- (2);
\end{scope}
\end{tikzpicture}
\bigskip
\begin{tikzpicture}[scale=.4]
\node at (0,-2) {\tt K3-0};
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};
\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};
\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};
\draw (0) -- (1) -- (2) -- (0);
\end{scope}
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=.4]
\node at (0,-2) {\tt K3-1};
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};
\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};
\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};
\draw [->] (0) -- (1);
\draw [->] (1) -- (2);
\draw [->] (0) -- (2);
\end{scope}
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=.4]
\node at (0,-2) {\tt K3-2};
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};
\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};
\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};
\draw [->] (1) -- (0);
\draw [->] (1) -- (2);
\draw [->] (0) -- (2);
\end{scope}
\end{tikzpicture}
\caption{Paths and triangles with various choices of orientation and redness.}
\end{marginfigure}
The {\tt data} directory contains a small number of individual graphs, typically of very small size.
This includes $G_{\text{ex}}$, a number of small graphs of 3 vertices shown to the right, and an all-red dodecahedron.
It is a good idea to use these graphs initially to ensure that your parser works, your graph data structure makes sense, etc.
These graphs also serve as an invitation to create more toy examples by hand while you test your code, so ensure that everything works on very small graphs.
Keep some good, clear drawings of `your' graphs around; they help immensely when finding mistakes.
\subsection{Word graphs}
In the word graphs, each vertex represents a five-letter word of English.
For $k\in\{1,2\}$, an edge joins $u$ and $v$ if the corresponding words are anagrams, or if they differ in exactly $k$ positions.
For instance ``begin'' and ``binge'' are neighbours, and so are ``turns'' and ``terns'' for $k=1$.
The word graphs come in two flavours.
The \emph{rusty word} graphs are guaranteed to include ``begin,'' ``ender,'' and ``rusty.''
The vertex corresponding to ``rusty'' is coloured red, no other vertices are red.
The filenames are {\tt rusty-$k$-$n$}.
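A small Python sketch along the following lines (illustrative only; the function name is not part of any provided code) captures this adjacency rule and may help you sanity-check your parser.
\begin{verbatim}
def are_neighbours(u, v, k):
    """True if u and v are anagrams, or differ in exactly k positions."""
    if u == v:
        return False
    return sorted(u) == sorted(v) or sum(a != b for a, b in zip(u, v)) == k

# are_neighbours("begin", "binge", 1) -> True  (anagrams)
# are_neighbours("turns", "terns", 1) -> True  (differ in one position)
\end{verbatim}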
\begin{figure}
\[
\begin{tikzpicture}[
xscale = 1.5,
every node/.style={ draw,rectangle,
rounded corners, inner sep = 1.5pt, font =\sf\small }
]
\node (rungs) at (3,2) {RUNGS};
\node (sings) at (1,2) {SINGS};
\node (begin) [label = below:$s$] at (0,0) {BEGIN};
\node (rents) at (4,1) {RENTS};
\node (ender) [label = below:$t$] at (4,0) {ENDER};
\node (rests) at (5,1) {RESTS};
\node (binge) at (1,0) {BINGE};
\node (singe) at (1,1) {SINGE};
\node (rings) at (2,2) {RINGS};
\node (rusty) [fill = red] at (5,3) {RUSTY};
\node (rente) at (3,1) {RENTE};
\node (runts) at (4,2) {RUNTS};
\node (enter) at (3,0) {ENTER};
\node (rusts) at (5,2) {RUSTS};
\node (tinge) at (2,0) {TINGE};
\node (tings) at (2,1) {TINGS};
\node (runty) at (4,3) {RUNTY};
\draw (rungs) -- (runts) ;
\draw (sings) -- (tings) ;
\draw (sings) -- (rings) ;
\draw (sings) -- (singe) ;
\draw (begin) -- (binge) ;
\draw (rents) -- (rests) ;
\draw (rents) -- (rente) ;
\draw (rents) -- (runts) ;
\draw (ender) -- (enter) ;
\draw (rests) -- (rusts) ;
\draw (binge) -- (tinge) ;
\draw (binge) -- (singe) ;
\draw (singe) -- (tinge) ;
\draw (rings) -- (tings) ;
\draw (rusty) -- (rusts) ;
\draw (rusty) -- (runty) ;
\draw (rente) -- (enter) ;
\draw (runts) -- (rusts) ;
\draw (runts) -- (runty) ;
\draw (tinge) -- (tings) ;
\draw (rungs) -- (rings) ;
\end{tikzpicture}
\]
\caption{{\tt rusty-1-17}}
\end{figure}
The \emph{common word} graphs use the same adjacency structure, and always include `start' and `ender.'
A word is red if it is uncommon (like `ender'); just under half the words are uncommon.
The filenames for these graphs are {\tt common-$n$}.
\subsection{Grids}
\begin{marginfigure}
\begin{tikzpicture}[scale = .5, every node/.style={circle, fill, inner sep =1.5pt}]
\foreach \x in {0,1,2,3,4} {
\foreach \y in {0,1,2,3,4} {
\node (\x_\y) at (\x,\y) {};
}
}
\node [label = left:$s$] at (0,0) {};
\node [label = right:$t$] at (4,4) {};
\draw (1_4) -- (2_4);
\draw (1_4) -- (0_4);
\draw (1_4) -- (1_3);
\draw (1_4) -- (0_3);
\draw (1_3) -- (2_3);
\draw (1_3) -- (2_4);
\draw (1_3) -- (1_2);
\draw (1_3) -- (0_2);
\draw (1_3) -- (0_3);
\draw (1_2) -- (2_2);
\draw (1_2) -- (2_3);
\draw (1_2) -- (1_1);
\draw (1_2) -- (0_2);
\draw (1_2) -- (0_1);
\draw (1_1) -- (2_1);
\draw (1_1) -- (2_2);
\draw (1_1) -- (1_0);
\draw (1_1) -- (0_0);
\draw (1_1) -- (0_1);
\draw (1_0) -- (2_0);
\draw (1_0) -- (2_1);
\draw (1_0) -- (0_0);
\draw (0_4) -- (0_3);
\draw (0_2) -- (0_3);
\draw (0_2) -- (0_1);
\draw (0_0) -- (0_1);
\draw (3_1) -- (3_0);
\draw (3_1) -- (3_2);
\draw (3_1) -- (2_0);
\draw (3_1) -- (2_1);
\draw (3_1) -- (4_2);
\draw (3_1) -- (4_1);
\draw (3_0) -- (2_0);
\draw (3_0) -- (4_0);
\draw (3_0) -- (4_1);
\draw (3_3) -- (3_2);
\draw (3_3) -- (3_4);
\draw (3_3) -- (2_2);
\draw (3_3) -- (2_3);
\draw (3_3) -- (4_3);
\draw (3_3) -- (4_4);
\draw (3_2) -- (2_1);
\draw (3_2) -- (2_2);
\draw (3_2) -- (4_2);
\draw (3_2) -- (4_3);
\draw (3_4) -- (2_3);
\draw (3_4) -- (2_4);
\draw (3_4) -- (4_4);
\draw (2_0) -- (2_1);
\draw (2_1) -- (2_2);
\draw (2_2) -- (2_3);
\draw (2_3) -- (2_4);
\draw (4_2) -- (4_1);
\draw (4_2) -- (4_3);
\draw (4_3) -- (4_4);
\draw (4_0) -- (4_1);
\node [fill=red,draw,inner sep = 1.5pt] at (1,0) {};
\node [fill=red,draw,inner sep = 1.5pt] at (1,1) {};
\node [fill=red,draw,inner sep = 1.5pt] at (1,2) {};
\node [fill=red,draw,inner sep = 1.5pt] at (1,3) {};
\node [fill=red,draw,inner sep = 1.5pt] at (3,1) {};
\node [fill=red,draw,inner sep = 1.5pt] at (3,2) {};
\node [fill=red,draw,inner sep = 1.5pt] at (3,3) {};
\node [fill=red,draw,inner sep = 1.5pt] at (3,4) {};
\end{tikzpicture}
\caption{The grid for $N=5$, represented by {\tt grid-5-0}.}
\end{marginfigure}
The Grid graphs consist of $N^2$ vertices that represent integer coordinates $(x,y)$ for $x,y\in\{0,\ldots,N-1\}$.
Each vertex $(x,y)$ is connected to $(x-1,y)$, $(x,y-1)$, and $(x-1,y-1)$, provided that these vertices exist.
The red vertices form a maze-like structure in the graph:
Every second row is red, except for the top- or bottommost vertex, alternately.
There is a unique $s,t$-path avoiding all red vertices, and a shortest alternating path following the diagonal.
Grid graphs of various sizes are represented by {\tt grid-$N$-0}.
Each of these graphs comes with two variants.
In {\tt grid-$N$-1}, some random red vertices have turned non-red (so there are `holes' in the hedges).
In {\tt grid-$N$-2}, some random non-red vertices have turned red (so some passages are blocked).
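A sketch of a generator for these graphs is given below (illustrative Python; the exact placement of the red `hedges' is inferred from the $N=5$ figure and may differ in detail from the instances in the {\tt data} directory).
\begin{verbatim}
def grid_graph(N):
    """Vertices are points (x, y) with 0 <= x, y < N; s = (0, 0), t = (N-1, N-1).
    Each vertex connects to (x-1, y), (x, y-1) and (x-1, y-1) when those exist."""
    vertices = [(x, y) for x in range(N) for y in range(N)]
    edges = []
    for x in range(N):
        for y in range(N):
            for dx, dy in [(-1, 0), (0, -1), (-1, -1)]:
                u, v = x + dx, y + dy
                if u >= 0 and v >= 0:
                    edges.append(((x, y), (u, v)))
    # Red maze, guessed from the N = 5 figure: every second vertical line is red,
    # alternately leaving out its topmost or bottommost vertex.
    red = set()
    for x in range(1, N, 2):
        skip_top = ((x - 1) // 2) % 2 == 0
        for y in range(N):
            if (skip_top and y == N - 1) or (not skip_top and y == 0):
                continue
            red.add((x, y))
    return vertices, edges, red
\end{verbatim}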
\subsection{Walls}
\begin{marginfigure}
\begin{tikzpicture}[scale=.4]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$, label=left:$s$] at (0,0) {};
\node (1) [label=below:$1$] at (1,0) {};
\node (2) [label=below:$2$] at (2,0) {};
\node (3) [label=below:$3$,fill=red,draw,inner sep = 1.5pt] at (3,0) {};
\node (4) [label=above:$4$] at (3,1) {};
\node (5) [label=above:$5$] at (2,1) {};
\node (6) [label=above:$6$] at (1,1) {};
\node (7) [label=above:$7$, label=left:$t$] at (0,1) {};
\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
\end{scope}
\end{tikzpicture}
\caption{The single-brick wall, {\tt wall-p-1}.}
\end{marginfigure}
Bricks are arranged like a wall of height $2$.
Here are three bricks with overlap $1$:
\begin{marginfigure}
\begin{tikzpicture}[scale=.4]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$] at (0,0) {};
\node (1) [label=below:$1$] at (1,0) {};
\node (2) [label=below:$$] at (2,0) {};
\node (3) [label=below:$$] at (3,0) {};
\node (4) [label=above:$4$] at (3,1) {};
\node (5) [label=above:$5$] at (2,1) {};
\node (6) [label=above:$6$] at (1,1) {};
\node (7) [label=above:$7$] at (0,1) {};
\node (8) [label=below:$$] at (4,0) {};
\node (9) [label=below:$$] at (5,0) {};
\node (10) [label=below:\small$10$] at (5,-1) {};
\node (11) [label=below:\small$11$] at (4,-1) {};
\node (12) [label=below:\small$12$] at (3,-1) {};
\node (13) [label=below:\small$13$] at (2,-1) {};
\node (14) [label=below:\small$14$] at (6,0) {};
\node (15) [label=below:\small$15$,fill=red,draw,inner sep = 1.5pt] at (7,0) {};
\node (16) [label=above:\small$16$] at (7,1) {};
\node (17) [label=above:\small$17$] at (6,1) {};
\node (18) [label=above:\small$18$] at (5,1) {};
\node (19) [label=above:\small$19$] at (4,1) {};
\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
\draw (3)--(8)--(9)--(10)--(11)--(12)--(13)--(2);
\draw (9)--(14)--(15)--(16)--(17)--(18)--(19)--(8);
\end{scope}
\end{tikzpicture}
\caption{Three bricks with overlap 1, {\tt wall-p-3}.}
\end{marginfigure}
\begin{marginfigure}
\begin{tikzpicture}[scale=.4]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$, label=left:$s$] at (0,0) {};
\node (1) [label=below:$1$] at (1,0) {};
\node (2) [label=below:$2$] at (2,0) {};
\node (3) [label=below:$$] at (3,0) {};
\node (4) [label=above:$4$] at (3,1) {};
\node (5) [label=above:$5$] at (2,1) {};
\node (6) [label=above:$6$] at (1,1) {};
\node (7) [label=above:$7$, label=left:$t$] at (0,1) {};
\node (8) [label=above:$8$] at (4,0) {};
\node (9) [label=above:$9$] at (5,0) {};
\node (10) [label=below:\small$$] at (6,0) {};
\node (11) [label=below:\small$11$] at (6,-1) {};
\node (12) [label=below:\small$12$] at (5,-1) {};
\node (13) [label=below:\small$13$] at (4,-1) {};
\node (14) [label=below:\small$14$] at (3,-1) {};
\node (15) [label=below:\small$15$] at (7,0) {};
\node (16) [label=below:\small$16$] at (8,0) {};
\node (17) [label=below:\small$17$,fill=red,draw,inner sep = 1.5pt] at (9,0) {};
\node (18) [label=above:\small$18$] at (9,1) {};
\node (19) [label=above:\small$19$] at (8,1) {};
\node (20) [label=above:\small$20$] at (7,1) {};
\node (21) [label=above:\small$21$] at (6,1) {};
\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
\draw (3)--(8)--(9)--(10)--(11)--(12)--(13)--(14)--(3);
\draw (10)--(15)--(16)--(17)--(18)--(19)--(20)--(21)--(10);
\end{scope}
\end{tikzpicture}
\caption{Three bricks with overlap 0, {\tt wall-z-3}.}
\end{marginfigure}
\begin{marginfigure}
\begin{tikzpicture}[scale=.4]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label=below:$0$] at (0,0) {};
\node (1) [label=below:$1$] at (1,0) {};
\node (2) [label=below:$2$] at (2,0) {};
\node (3) [label=below:$3$] at (3,0) {};
\node (4) [label=above:$4$] at (3,1) {};
\node (5) [label=above:$5$] at (2,1) {};
\node (6) [label=above:$6$] at (1,1) {};
\node (7) [label=above:$7$] at (0,1) {};
\node (8) [label=below:$8$] at (4,0) {};
\node (9) [label=below:$9$] at (5,0) {};
\node (10) [label=below:$10$] at (6,0) {};
\node (11) [label=below:$11$,fill=red,draw,inner sep = 1.5pt] at (7,0) {};
\node (12) [label=above:$12$] at (7,1) {};
\node (13) [label=above:$13$] at (6,1) {};
\node (14) [label=above:$14$] at (5,1) {};
\node (15) [label=above:$15$] at (4,1) {};
\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
\draw (3)--(8)--(9)--(10)--(11)--(12)--(13)--(14)--(15)--(8);
\end{scope}
\end{tikzpicture}
\caption{Two bricks with negative overlap, {\tt wall-n-2}.}
\end{marginfigure}
The Wall graphs are a family consisting of $N$ overlapping 8-cycles called \emph{bricks}.
The bricks are laid in a wall of height 2, with various intervals of overlap.
Each wall has a single red vertex $w$, the rightmost vertex at the same level as vertex $0$.
These graphs are interesting instances for finding paths from $s$ to $t$ through the red vertex.
These should help you avoid some obvious pitfalls when developing an algorithm for the problem \emph{Some}.
The Walls with overlap 1, called {\tt brick-1-$N$}, allow an $s,t$-path through $w$.
The Walls with overlap 0, called {\tt brick-0-$N$}, allow a walk from $s$ to $t$ through $w$, but this walk will use $N-2$ vertices twice.
In particular, such a walk is not a path, and your algorithm for Problem \emph{Some} should not be fooled by it.
The walls with negative overlap, called {\tt brick-n-$N$}, also allow a walk from $s$ to $t$ through $w$, but this walk would use $N-2$ edges twice.
Again, such a walk is not a path.
\section{Ski}
SkiFree\footnote{Read more at Wikipedia’s SkiFree page, the author’s site {\tt ski.ihoc.net}, or (amazingly) SkiFree Fan Fiction at {\tt www.fanfiction.net/game/SkiFree/}.} is an ancient computer game by Chris Pirih, part of the Microsoft Entertainment Pack in 1991, and going back to VT100 VAX/VMS terminals, ultimately inspired by Activision’s \emph{Skiing} for the Atari 2600 console.
Time to show you can handle yourself off-piste. Get from the start to the goal, avoiding the trees, dogs, rocks, etc.
Beware of the dreaded Yeti, who is said to lurk at the red nodes of the mountain and will certainly chase you down and eat you if you pass him.
In each level, the player moves down, and either one step left or right.
(Some moves are blocked by obstacles.)
\begin{figure*}[h]
\includegraphics[width=7cm]{ski.png}
\quad
\begin{tikzpicture}[scale = .5, every node/.style={circle, fill, inner sep =1.5pt}]
\node (0) [label = above:$0\,\,s$] at (0,0) {};
\node (1) [label = above:$1$] at (-1,-1) {};
\node (2) [label = above:$2$] at (1,-1) {};
\node (3) [label = above:$3$] at (-2,-2) {};
\node (4) at (0,-2) {};
\node (5) [label = above:$5$] at (2,-2) {};
\node (6) [label = above:$6$] at (-3,-3) {};
\node (7) at (-1,-3) {};
\node (8) at (1,-3) {};
\node (9) [label = above:$9$] at (3,-3) {};
\node (10) at (-4,-4) {};
\node (11) at (-2,-4) {};
\node (12) [label = left:$12$, fill = red, draw] at (0,-4) {};
\node (13) at (2,-4) {};
\node (14) at (4,-4) {};
\node (15) at (-5,-5) {};
\node (16) at (-3,-5) {};
\node (17) at (-1,-5) {};
\node (18) at (1,-5) {};
\node (19) at (3,-5) {};
\node (20) at (5,-5) {};
\node (21) at (-6,-6) {};
\node (22) at (-4,-6) {};
\node (23) at (-2,-6) {};
\node (24) at (0,-6) {};
\node (25) at (2,-6) {};
\node (26) at (4,-6) {};
\node (27) at (6,-6) {};
\node (29) at (-5,-7) {};
\node (30) at (-3,-7) {};
\node (31) at (-1,-7) {};
\node (32) at (1,-7) {};
\node (33) at (3,-7) {};
\node (34) at (5,-7) {};
\node (35) at (7,-7) {};
\node (36) [label= below:$36\,\,t$] at (0,-8) {};
\draw [->] (0) -- (1);
\draw [->] (0) -- (2);
\draw [->] (1) -- (3);
\draw [->] (2) -- (4);
\draw [->] (2) -- (5);
\draw [->] (3) -- (6);
\draw [->] (3) -- (7);
\draw [->] (4) -- (7);
\draw [->] (4) -- (8);
\draw [->] (5) -- (8);
\draw [->] (6) -- (11);
\draw [->] (7) -- (11);
\draw [->] (8) -- (12);
\draw [->] (9) -- (13);
\draw [->] (9) -- (14);
\draw [->] (10) -- (15);
\draw [->] (11) -- (16);
\draw [->] (12) -- (18);
\draw [->] (13) -- (18);
\draw [->] (14) -- (19);
\draw [->] (14) -- (20);
\draw [->] (15) -- (22);
\draw [->] (16) -- (22);
\draw [->] (16) -- (23);
\draw [->] (17) -- (23);
\draw [->] (17) -- (24);
\draw [->] (18) -- (24);
\draw [->] (18) -- (25);
\draw [->] (19) -- (25);
\draw [->] (19) -- (26);
\draw [->] (20) -- (26);
\draw [->] (20) -- (27);
\draw [->] (21) -- (29);
\draw [->] (22) -- (29);
\draw [->] (23) -- (30);
\draw [->] (23) -- (31);
\draw [->] (24) -- (31);
\draw [->] (24) -- (32);
\draw [->] (25) -- (33);
\draw [->] (26) -- (33);
\draw [->] (26) -- (34);
\draw [->] (27) -- (35);
\draw [->] (29) -- (36);
\draw [->] (30) -- (36);
\draw [->] (31) -- (36);
\draw [->] (32) -- (36);
\draw [->] (33) -- (36);
\draw [->] (34) -- (36);
\draw [->] (35) -- (36);
\end{tikzpicture}
\caption{A Ski level and the corresponding graph.
This graph is described in {\tt ski-illustration}.
Yeti is lurking at node 12.
Graphics from {\tt ski.ihoc.net}.}
\end{figure*}
\subsection{Increasing numbers}
Each \emph{Increase} graph is generated from a sequence $a_1,\ldots, a_n$ of unique integers with $0\leq a_i\leq 2n$.
(The random process is this: Pick a subset of size $n$ from $\{0,\ldots, 2n\}$ and arrange the elements in random order.)
We set $s=a_1$ and $t=a_n$.
Odd numbers are red.
There is an edge from $a_i$ to $a_j$ if $i<j$ and $a_i<a_j$.
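A sketch of this random process (illustrative Python only; the actual instances in the {\tt data} directory were generated separately) could look as follows.
\begin{verbatim}
import random

def increase_graph(n, seed=None):
    """Pick n distinct integers from {0, ..., 2n} in random order; s is the first,
    t the last; odd numbers are red; edge a_i -> a_j whenever i < j and a_i < a_j."""
    rng = random.Random(seed)
    a = rng.sample(range(2 * n + 1), n)
    s, t = a[0], a[-1]
    red = {v for v in a if v % 2 == 1}
    edges = [(a[i], a[j]) for i in range(n) for j in range(i + 1, n) if a[i] < a[j]]
    return a, s, t, red, edges
\end{verbatim}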
\begin{figure}
\begin{tikzpicture}[]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (2) [label=below:$2$, label = left:$s$] at (0,0) {};
\node (13) [label=below:$13$, fill=red, draw] at (1,0) {};
\node (1) [label=below:$1$, fill=red, draw] at (2,0) {};
\node (11) [label=below:$11$, fill=red, draw] at (3,0) {};
\node (15) [label=below:$15$, fill=red, draw] at (4,0) {};
\node (3) [label=below:$3$, fill=red, draw] at (5,0) {};
\node (4) [label=below:$4$] at (6,0) {};
\node (5) [label=below:$5$, label = right:$t$, fill=red, draw] at (7,0) {};
\draw [->] (2) edge[] (13);
\draw [->] (2) edge [bend left] (11);
\draw [->] (2) edge [bend left] (15);
\draw [->] (2) edge [bend left] (3);
\draw [->] (2) edge [bend left] (4);
\draw [->] (2) edge [bend left] (5);
\draw [->] (13) edge [bend left] (15);
\draw [->] (1) edge [] (11);
\draw [->] (1) edge [bend left] (15);
\draw [->] (1) edge [bend left] (3);
\draw [->] (1) edge [bend left] (4);
\draw [->] (1) edge [bend left] (5);
\draw [->] (11) edge [] (15);
\draw [->] (3) edge [] (4);
\draw [->] (3) edge [bend left] (5);
\draw [->] (4) edge [] (5);
\end{scope}
\end{tikzpicture}
\caption{\tt increase-n8-1.0}
\end{figure}
\begin{figure}
\begin{tikzpicture}[]
\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]
\node (11) [label=below:$11$, label = left:$s$] at (0,0) {};
\node (15) [label=below:$15$, fill=red, draw] at (1,0) {};
\node (1) [label=below:$1$, fill=red, draw] at (2,0) {};
\node (8) [label=below:$8$] at (3,0) {};
\node (13) [label=below:$13$, fill=red, draw] at (4,0) {};
\node (4) [label=below:$4$] at (5,0) {};
\node (6) [label=below:$6$] at (6,0) {};
\node (12) [label=below:$12$, label = right:$t$] at (7,0) {};
\draw [->] (11) edge[] (15);
\draw [->] (11) edge [bend left] (13);
\draw [->] (11) edge [bend left] (12);
\draw [->] (1) edge [] (8);
\draw [->] (1) edge [bend left] (13);
\draw [->] (1) edge [bend left] (4);
\draw [->] (1) edge [bend left] (6);
\draw [->] (1) edge [bend left] (12);
\draw [->] (8) edge [] (13);
\draw [->] (8) edge [bend left] (12);
\draw [->] (4) edge [] (6);
\draw [->] (4) edge [bend left] (12);
\end{scope}
\end{tikzpicture}
\caption{\tt increase-n8-2.0}
\end{figure}
\end{document}
%!TEX root = ../thesis.tex
\ifpdf
\graphicspath{{Chapter2/Figs/Vector/}{Chapter2/Figs/PDF/}{Chapter2/Figs/}}
\else
\graphicspath{{Chapter2/Figs/Raster/}{Chapter2/Figs/}}
\fi
\chapter{Theoretical Framework}
\label{chapter:theoretical-framework}
This chapter discusses Gaussian processes (GPs) and deep Gaussian processes (DGPs), the composite models obtained by stacking multiple GPs on top of each other. Both models are key building blocks throughout this report. We review how to perform approximate Bayesian inference in these models, with particular attention to Variational Inference (VI). We also cover the theory of positive definite kernels and the Reproducing Kernel Hilbert Spaces (RKHSs) they characterise, and finish with an alternative construction of GPs through RKHSs.
\section{Gaussian Processes}
\label{sec:chapter1:gp}
Gaussian processes (GPs) \citep{rasmussen2006} are non-parametric models that, like Bayesian neural networks (BNNs), specify distributions over functions. The core difference is that neural networks represent distributions over functions through distributions over weights, while a Gaussian process specifies a distribution on function values at a collection of input locations. This representation allows us to use an infinite number of basis functions while still enabling Bayesian inference \citep{neal1996bayesian}.
Following the Kolmogorov extension theorem, we can construct a real-valued stochastic process (i.e. random function) on a non-empty set $\mathcal{X}$, $f: \mathcal{X} \rightarrow \Reals$, if there exists a \emph{consistent} collection of finite-dimensional marginal distributions over $f(\{x_1, \ldots, x_N\})$ for all finite subsets $\{x_1, \ldots x_N\} \subset \mathcal{X}$ \citep{tao2011introduction}. For a Gaussian process, in particular, the marginal distribution over every finite-dimensional subset is given by a multivariate normal distribution. This implies that, in order to fully specify a Gaussian process, it suffices to define only the mean and covariance (kernel) function, because these are the sufficient statistics of every finite-dimensional marginal distribution. We can therefore denote the GP as
\begin{equation}
\label{eq:chapter1:def-gp}
f \sim \GP(\mu, k),
\end{equation}
where $\mu:\mathcal{X}\rightarrow\Reals$ is the mean function, which encodes the expected value of $f$ at every $x$, $\mu(x) = \Exp{f}{f(x)}$, and $k:\mathcal{X} \times \mathcal{X} \rightarrow \Reals$ is the covariance (kernel) function that describes the covariance between function values, $k(x, x') = \Cov\left(f(x), f(x')\right)$. The covariance function has to be a symmetric, positive-definite function. The Gaussianity, and the fact that we can manipulate function values at some finite points of interest without taking the behaviour at any other points into account (the marginalisation property), make GPs particularly convenient to manipulate and use as priors over functions in Bayesian models---as we will show next.
Throughout this report, we will consider $f$ to be the complete function, and intuitively manipulate it as an infinitely long vector. When necessary, we will append `$(\cdot)$' to $f$ to highlight that it is indeed a function rather than a finite vector. In contrast, $f(\vx) \in \Reals^N$ denotes the function evaluated at a finite set of points $\vx = \{x_1, \cdots, x_N\}$. The notation $f^{\setminus \vx}$ denotes another infinitely long vector, similar to $f$ but excluding $f(\vx)$. Intuitively, $f$ can be partitioned into two groups: $f(\vx)$ and $f^{\setminus \vx}$, with the following joint distribution
% Joint
\begin{equation}
\begin{pmatrix} f^{\setminus \vx}(\cdot)\\ f(\vx) \end{pmatrix} \sim \NormDist{
\begin{pmatrix} \mu(\cdot) \\ \vmu_{\vf} \end{pmatrix},
\begin{pmatrix} k(\cdot, \cdot) & \kfx\transpose \\ \kfx & \Kff \end{pmatrix},
}
\end{equation}
where $[\vmu_\vf]_{i} = \mu(x_i)$, $[\Kff]_{i,j} = k(x_i, x_j)$, $[\kfx]_i = k(x_i, \cdot)$, and $\mu(\cdot)$ and $k(\cdot, \cdot)$ are the mean and covariance function from \cref{eq:chapter1:def-gp}. Following the marginalisation property, the distribution over $f(\vx)$ is given by
\begin{equation}
p(f(\vx)) = \int p(f)\,\calcd{f^{\setminus \vx}} = \NormDist{\vmu_{\vf}, \Kff}.
\end{equation}
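To make the marginalisation property concrete, the following minimal NumPy sketch draws joint samples of $f(\vx)$ on a fine grid of inputs. It is purely illustrative: the zero mean, the squared-exponential kernel and all numerical settings are arbitrary choices and not part of the exposition above.
\begin{verbatim}
import numpy as np

def rbf_kernel(a, b, variance=1.0, lengthscale=0.5):
    # Squared-exponential kernel, chosen for illustration only.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                    # finite set of evaluation points
Kff = rbf_kernel(x, x)                        # [Kff]_{ij} = k(x_i, x_j)
L = np.linalg.cholesky(Kff + 1e-9 * np.eye(x.size))   # jitter for stability
samples = L @ rng.standard_normal((x.size, 3))        # three draws of f(x)
\end{verbatim}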
% Condition
We can now specify the GP at this finite set of points and use the rules of conditioning to obtain the GP elsewhere. Let $f(\vx) \shorteq \vf$, then the conditional distribution for $f^{\setminus \vx}$ is given by another Gaussian process
\begin{equation}
\label{eq:chapter1:conditional}
p(f^{\setminus \vx}(\cdot) \given f(\vx) \shorteq \vf) = \GP\Big(\mu(\cdot) + \kfx^\top \Kff^{-1} (\vf - \vmu_{\vf}),\ \ k(\cdot, \cdot) - \kfx^\top \Kff^{-1} \kfx\Big).
\end{equation}
The conditional distribution over the whole function $p(f(\cdot) \given f(\vx) = \vf)$ has the exact same form as in \cref{eq:chapter1:conditional}. This is mathematically slightly confusing because the random variable $f(\vx)$ appears on both the left- and the right-hand side of the conditioning, but the equation is technically correct, as discussed in \citet{matthews16,van2020framework,Leibfried2020Tutorial}.
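As a small illustration of \cref{eq:chapter1:conditional}, the sketch below conditions a zero-mean GP with a squared-exponential kernel on three noise-free observations and evaluates the conditional mean and variance on a grid. All names and settings are placeholders, and a small jitter is added for numerical stability.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = np.linspace(0, 1, 200)              # points at which we query the posterior
xo = np.array([0.1, 0.4, 0.8])          # observed inputs
fo = np.sin(2 * np.pi * xo)             # observed (noise-free) function values

Koo = k(xo, xo) + 1e-9 * np.eye(xo.size)
Kox = k(xo, x)                          # cross-covariances k(x_i, .)
mean = Kox.T @ np.linalg.solve(Koo, fo)             # kfx^T Kff^{-1} f
cov = k(x, x) - Kox.T @ np.linalg.solve(Koo, Kox)   # conditional covariance
var = np.diag(cov)
\end{verbatim}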
\subsection{The Beauty of Gaussian Process Regression: Exact Bayesian Inference}
\label{sec:gp-exact-inference}
A defining advantage of GPs is that we can perform exact Bayesian inference in the case of regression. % In this section we briefly discuss the general methodology of Bayesian modelling and how this is performed by GPs.
% \paragraph{Problem Defintion}
Assume a supervised learning setting where $x \in \mathcal{X}$ (typically, $\mathcal{X} = \Reals^d$) and $y \in \Reals$, and we are given a dataset $\data = \{(x_i, y_i)\}_{i=1}^N$ of input and corresponding output pairs. For convenience, we sometimes group the inputs $\vx = \{x_i\}_{i=1}^N$ into a single design matrix and the outputs $\vy = \{y_i\}_{i=1}^N$ into a vector. We further assume that the data is generated by an unknown function $f: \mathcal{X} \rightarrow \Reals$, and that the outputs are perturbed versions of the function evaluations at the corresponding inputs: $y_i = f(x_i) + \epsilon_i$. In the case of regression we assume a Gaussian noise model $\epsilon_i \sim \NormDist{0, \sigma^2}$. We are interested in learning the function $f$ that generated the data. % We denote the evaluation of the function at all inputs as $f(\vx) \in \Reals^N$ and at a single input as $f(x_i) \in \Reals$.
% The key idea in Bayesian modelling is to specify a prior distribution over the quantity of interest. The prior encodes what we know at that point in time about the quantity. In general term, this can be a lot or a little. We encode this information in the form of a distribution. Then, as more data becomes available, we use the rules of probability, an in particlar Bayes' rule, to update our prior beliefs and compute a posterior distribution (see \citet{mackay2003information, bisschop} for a thorough introduction).
Following the Bayesian approach, we specify a \emph{prior} over the quantity of interest, which in the case of GPs is the function itself. The prior is important because it characterises the search space over possible solutions for $f$. Through the prior, we can encode strong assumptions, such as linearity, differentiability, periodicity, etc. or any combination thereof. These strong inductive biases make it possible to generalise well from very limited data. Compared to Bayesian parametric models, it is much more convenient and intuitive to specify priors directly in \emph{function-space}, rather than on the weights of a parametric model \citep{rasmussen2006}.
In Gaussian process regression (GPR), we specify a GP prior over $f$, for which we assume without loss of generality a zero mean function:
\begin{equation}
f \sim \GP\big(0, k\big),
\end{equation}
The likelihood, describing the relation between the quantity of interest $f$ and the observed data, is given by
\begin{equation}
\label{eq:likelihood}
p(\vy \given f) = \prod_{i=1}^N p(y_i \given f) = \prod_{i=1}^N \NormDist{y_i \given f(x_i), \sigma^2},
\end{equation}
where we see that, conditioned on the GP, the likelihood factorises over datapoints. The posterior over the complete function $f$ is another GP, which can be obtained using Bayes' rule:
\begin{equation}
\label{eq:f-given-y}
p(f \given \vy)
= \GP\Big(\kfx^\top (\Kff + \sigma^2 \Eye)^{-1} \vy,\ \ k(\cdot, \cdot) - \kfx^\top (\Kff + \sigma^2 \Eye)^{-1} \kfx\Big).
\end{equation}
Similarly, the marginal likelihood (model evidence) is also available in closed-form and obtained by marginalising over the prior:
\begin{equation}
\label{eq:log-marginal-likelihood}
p(\vy) = \NormDist{\vy \given \vzero, \Kff + \sigma^2 \Eye}.
\end{equation}
The availability of the posterior and the marginal likelihood in analytic form makes GPs a unique tool for inferring unknown functions from data. In particular, the marginal likelihood allows for model comparison and expresses a natural preference for simpler explanations of the data---known as Occam's razor \citep{mackay2003information,Rasmussen2001Occam}.
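A minimal NumPy sketch of Gaussian process regression, computing the posterior of \cref{eq:f-given-y} and the log marginal likelihood of \cref{eq:log-marginal-likelihood} via a Cholesky factorisation, is given below. The kernel, the synthetic data and all settings are illustrative placeholders rather than recommended choices.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)                               # training inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy targets
xs = np.linspace(0, 1, 200)                             # test inputs
sigma2 = 0.1 ** 2

Kff = k(x, x) + sigma2 * np.eye(x.size)
L = np.linalg.cholesky(Kff)                             # the O(N^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Ksf = k(xs, x)
mean = Ksf @ alpha                                      # posterior mean
v = np.linalg.solve(L, Ksf.T)
var = np.diag(k(xs, xs)) - np.sum(v ** 2, axis=0)       # posterior variance

# log marginal likelihood: log N(y | 0, Kff + sigma2 * I)
lml = (-0.5 * y @ alpha
       - np.sum(np.log(np.diag(L)))
       - 0.5 * x.size * np.log(2 * np.pi))
\end{verbatim}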
Despite the convenience of the analytic expressions, computing this exact posterior requires inverting the $N \times N$ matrix $\Kff$, which has a $\BigO(N^3)$ computational complexity and a $\BigO(N^2)$ memory footprint. Interestingly, the computational intractability in GPR models originates from the prior and \emph{not} from the complexity of marginalisation, as is often the case for other Bayesian methods \citep[e.g.,][]{blei2003latent}. Indeed, these methods resort to MCMC or EM approaches to avoid computing intractable normalising constants, which is a problem that GPR does not encounter. Another shortcoming of GPR models is that there is no known analytical expression for the posterior distribution when the likelihood (\cref{eq:likelihood}) is not Gaussian, as encountered in classification, for instance. In the next section we discuss solutions to both problems.
\section{Approximate Inference with Sparse Gaussian Processes}
\label{sec:svgp}
Sparse Gaussian processes combined with Variational Inference (VI) provide an elegant way to address these two shortcomings \citep{titsias2009, hensman2013, hensman2015scalable}. VI consists of introducing a distribution $q(f)$ specified by a set of parameters, and finding the values of these parameters such that $q(f)$ gives the best possible approximation of the exact posterior $p(f \given \vy )$. It is worth noting that there exist other approaches in the literature that address the same issues. In particular, Expectation Propagation (EP) \citep{minka2001expectation,bui2017unifying}, like VI, formulates an approximation to the posterior but uses a different objective to optimise the approximation. Tangentially, other sparse GP methods (e.g., FITC) \citep{Snelson05,quinonero2005unifying} approximate the model rather than the posterior. A downside of this is that the posteriors lose their non-parametric nature, which can be detrimental for the uncertainty estimates \citep{bauer2016understanding}. In what follows we will focus on sparse Gaussian processes with variational inference because of their general applicability and overall robust performance.
We first discuss the objective used in general variational inference before specifying our particular choice for $q(f)$. Let us therefore define $q(f)$ as the approximation to the posterior; the idea is then to optimise $q(f)$ so that the Kullback–Leibler (KL) divergence from $q(f)$ to $p(f \given \vy)$ is minimised. Rewriting $p(f \given \vy)$ using Bayes' rule, we obtain:
\begin{equation}
\KL{q(f)}{p(f \given \vy)} = -{\Exp{q(f)}{\log \frac{p(\vy \given f) p(f)}{q(f)}}} + \log p(\vy).
\end{equation}
Rearranging these terms yields:
\begin{equation}
\label{eq:elbo}
\log p(\vy) - \KL{q(f)}{p(f \given \vy)} =
\underbrace{\Exp{q(f)}{\log p(\vy \given f)} - \KL{q(f)}{p(f)}}_{\triangleq\,\textrm{ELBO}}
\end{equation}
The r.h.s. of \cref{eq:elbo} is known as the Evidence Lower BOund (ELBO). It is a lower bound to the log marginal likelihood ($\log p(\vy)$) because the KL divergence between $q(f)$ and $p(f \given \vy)$ is always non-negative. Further, since $p(\vy)$ does not depend on the variational approximation, maximising the ELBO w.r.t. $q(f)$ is equivalent to minimising $\KL{q(f)}{p(f \given \vy)}$. The maximum is reached when $q(f) \shorteq p(f \given \vy)$, in which case the KL is zero and the ELBO equals the log evidence.
Intuitively, VI casts the problem of Bayesian inference, namely marginalisation, as an optimisation problem. The objective for the optimisation problem is the ELBO, which depends on the variational approximation, the prior and the data but, importantly, not on the true posterior. This approach has several advantages. Firstly, optimisation, as opposed to marginalisation, is guaranteed to converge---albeit possibly to a local optimum. Secondly, VI provides a framework in which one can trade computational cost for accuracy: by expanding the family of approximating distributions we can only tighten the bound. An interesting example of this is importance weighting, where in the limit of infinite compute the true posterior is part of the approximating family \citep{Domke2018IWVI}. Finally, the bound and its derivatives can be computed with stochastic estimation using the reparametrisation trick or score-function estimators, which enables the use of large datasets.
So far we have discussed the bound used in VI, but have left $q(f)$ unspecified; we now turn our attention to this. In general, we want the family of variational distributions to be flexible enough that some setting of the parameters can approximate the true posterior well, while also being computationally efficient and mathematically convenient to manipulate. % Mark van der Wilk discusses in his thesis \citep{vdw2019thesis} also the importance of maintaining the non-parametric uncertainty in the approximation.
We follow the approach proposed by \citet{titsias2009}, which parameterises the approximation using a set of $M$ pseudo inputs $\vz = \{z_1, \ldots, z_M\} \in \Reals^{M \times d}$ corresponding to $M$ inducing variables $\vu = f(\vz) \in \Reals^M$.
We choose to factorise the variational approximation as $q(f, \vu) = q(\vu) p(f\given f(\vz) \shorteq \vu)$, where
\begin{equation}
p(f \given f(\vz) \shorteq \vu) = \GP\Big(\kux^\top \Kuu^{-1}\vu,\ \ k(\cdot, \cdot) - \kux^\top \Kuu^{-1} \kux\Big),
\end{equation}
with $[\Kuu]_{i,j} = k(z_i, z_j)$, $[\kux]_i = k(z_i, \cdot)$. We specify a Gaussian distribution for the variational posterior over the inducing variables with a freely parameterised mean and covariance
\begin{equation}
q(\vu) = \NormDist{\vm, \MS}
\end{equation}
such that the overall approximate posterior is given by another GP
\begin{equation}
\label{eq:qf}
q(f) = \int_{\vu} q(f, \vu)\,\calcd{\vu} = \GP\Big(\kux^\top \Kuu^{-1}\vm,\ \ k(\cdot, \cdot) - \kux^\top \Kuu^{-1} (\Kuu - \MS)\Kuu^{-1} \kux\Big),
\end{equation}
as a result of the conjugacy of the variational posterior $q(\vu)$ and the conditional.
The approximation admits several interesting properties. Firstly, in the case of a Gaussian likelihood and a number of inducing points equal to the number of data points, $q(f)$ contains the true posterior. That is, if $\vz \shorteq \vx$, it is possible to set $\vm$ and $\MS$ such that $q(f) = p(f \given \vy)$ \citep{titsias2009}. Secondly, \citet{burt2019} showed that for $N \rightarrow \infty$, the approximate posterior can be made arbitrarily close to the true posterior with $M = \BigO(\log N)$ inducing points. Finally, we notice that the approximate mean exhibits the same structure as a parametric model with basis functions $\kux$ and weights $\Kuu\inv \vm$. The variance, however, remains non-parametric, which means that predictions are made with an infinite number of basis functions---exactly like in the true posterior. For non-degenerate kernels, this leads to error bars that are unconstrained by the data. We will come back to the parametric nature of the approximate mean in \cref{chapter:dnn-as-point-estimate-for-dgps}.
We optimise the variational parameters $\{\vm, \MS\}$ by maximising the ELBO \cref{eq:elbo}. Assuming a general likelihood of the form $p(\vy \given f) = \prod_i p(y_i \given f(x_i))$ the objective can be written as
\begin{equation}
\label{eq:elbo-gp}
\textrm{ELBO} = \sum_{i=1}^N \Exp{q(f(x_i))}{\log p(y_i \given f(x_i))} - \KL{q(\vu)}{p(\vu)} \,.
\end{equation}
Crucially, the original KL between the two infinite-dimensional processes $\KL{q(f)}{p(f)}$ is mathematically equivalent to the finite-dimensional KL between the variational posterior and the prior over the inducing variables. We refer the interested reader to \citet{matthews16} for a theoretical analysis of the equivalence and to \citet[Section 4.1]{Leibfried2020Tutorial} for a more intuitive explanation. The objective in \cref{eq:elbo-gp}, introduced in \citet{hensman2013,hensman2015scalable}, reduces the computational complexity to $\BigO(M^2 N + M^3)$. It also allows for stochastic estimation through minibatching \citep{hensman2013} and for non-Gaussian likelihoods through Gauss-Hermite quadrature of the one-dimensional expectation over $q(f(x_i))$ \citep{hensman2015scalable}.
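The following sketch evaluates the approximate posterior of \cref{eq:qf} and the ELBO of \cref{eq:elbo-gp} for a Gaussian likelihood in plain NumPy. It is only meant to make the expressions concrete: the variational parameters are not optimised, and the kernel, inducing-point locations and data are arbitrary placeholders.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)                       # N training inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
z = np.linspace(0, 1, 15)                        # M inducing inputs
m = rng.standard_normal(z.size)                  # variational mean (untrained)
Ls = 0.1 * np.eye(z.size)                        # Cholesky factor of S
S = Ls @ Ls.T
sigma2 = 0.1 ** 2

Kuu = k(z, z) + 1e-8 * np.eye(z.size)
Kuf = k(z, x)
A = np.linalg.solve(Kuu, Kuf)                    # Kuu^{-1} k_u(x)

# Marginals of q(f) at the training inputs
qf_mean = A.T @ m
qf_var = np.diag(k(x, x)) - np.sum(A * (Kuf - S @ A), axis=0)

# ELBO = sum_i E_{q(f(x_i))}[log N(y_i | f(x_i), sigma2)] - KL(q(u) || p(u))
exp_loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma2)
                    - 0.5 * ((y - qf_mean) ** 2 + qf_var) / sigma2)
Lp = np.linalg.cholesky(Kuu)
kl = 0.5 * (np.trace(np.linalg.solve(Kuu, S))
            + m @ np.linalg.solve(Kuu, m)
            - z.size
            + 2 * np.sum(np.log(np.diag(Lp)))
            - 2 * np.sum(np.log(np.diag(Ls))))
elbo = exp_loglik - kl
\end{verbatim}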
\subsection{Interdomain Inducing Variables}
\label{section:interdomain-inducing-variables}
Interdomain Gaussian processes use alternative forms of inducing variables such that the resulting sparse GP models consist of a different set of features. %\cref{chapter:vish} and \cref{chapter:dnn-as-point-estimate-for-dgps} heavily rely on this paradigm.
Classically, inducing variables are defined as pseudo function evaluations: $u_m = f(z_m)$, but in interdomain GPs the inducing variables are obtained using a linear transformation of the GP: $u_m = \mathcal{L}_m f$. This redefinition of $\vu$ implies that the expressions for $\Kuu$ and $\kux$ change, but the inference scheme of interdomain GPs and the mathematical expressions for the posterior mean and variance are exactly the same as for classic sparse GPs. This is thanks to the linearity of the operator and the preservation of (joint) Gaussian behaviour under linear transformations. A common choice of linear operator is the integral operator with an inducing function $g_m$ \citep{lazaro2009inter}:
\begin{equation*}
\mathcal{L}_m\,f = \int f(x)\,g_m(x)\,\calcd{x}\,.
\end{equation*}
In this case, $\Kuu$ and $\kux$ take the form
\begin{equation}
[\Kuu]_{m,m'} = \Cov(\mathcal{L}_m f, \mathcal{L}_{m'} f) = \int\!\int k(x, x') g_{m}(x) g_{m'}(x') \calcd{x}\,\calcd{x'}
\end{equation}
and
\begin{equation}
[\kux]_{m} = \Cov(\mathcal{L}_m f, f) = \int k(\cdot, x') g_{m}(x') \calcd{x'}.
\end{equation}
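For inducing functions without closed-form integrals, the covariances above can be approximated with simple numerical quadrature. The sketch below does so for a squared-exponential kernel and Gaussian-window inducing functions, both chosen purely for illustration.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def g(centre, grid, width=0.1):
    # Illustrative inducing function: an (unnormalised) Gaussian window.
    return np.exp(-0.5 * ((grid - centre) / width) ** 2)

grid = np.linspace(-1, 1, 400)                # quadrature grid over the domain
dx = grid[1] - grid[0]
centres = np.array([-0.5, 0.0, 0.5])          # one inducing function per centre
G = np.stack([g(c, grid) for c in centres])   # shape (M, n_grid)
Kgrid = k(grid, grid)

Kuu = G @ Kgrid @ G.T * dx**2   # double integral over x and x'
kux = G @ Kgrid * dx            # [kux]_{m,j} ~ integral of k(grid_j, x') g_m(x')
\end{verbatim}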
Most current interdomain methods are designed to improve computational properties \citep{hensman2017variational,Dutordoir2020spherical,burt2020variational}. For example, Variational Fourier Features (VFF)~\citep{hensman2017variational} is an interdomain method where the inducing variables are given by a Mat\'ern RKHS inner product between the GP and elements of the Fourier basis:
\begin{equation*}
u_m = \langle f, \psi_m \rangle_\rkhs,
\end{equation*}
where $\psi_0 = 1$, $\psi_{2m}=\cos(m x)$ and $\psi_{2m+1}=\sin(m x)$ if the input space is $[0, 2 \pi]$. In this case,
\begin{equation*}
\Kuu = \left[\langle \psi_i , \psi_{j} \rangle_\rkhs^{} \right]_{i, j = 0}^{M-1}\qquad\text{and}\qquad\kux = \left[ \psi_i(x)\right]_{i = 0}^{M-1}\, .
\end{equation*}
This results in several advantages. First, the features $\kux$ are exactly the elements of the Fourier basis, which are independent of the kernel parameters and can be precomputed. Second, the matrix $\Kuu$ is the sum of a diagonal matrix and low-rank matrices. This structure can be used to drastically reduce the computational complexity, and the experiments showed speed-ups of one or two orders of magnitude compared to classic sparse GPs. Another reason to use interdomain inducing variables is the ability it gives to control $\kux$---as shown in the next example.
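The computational benefit of such structure can be illustrated generically with the Woodbury identity: a linear system with a diagonal-plus-low-rank matrix can be solved by inverting only a small matrix. The sketch below is not the actual VFF computation; the matrices are random stand-ins used only to show the idea.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
M, r = 500, 8
d = rng.uniform(1.0, 2.0, M)            # diagonal part
U = rng.standard_normal((M, r))         # low-rank part; A = diag(d) + U U^T
b = rng.standard_normal(M)

# Woodbury identity: A^{-1} b in O(M r^2) instead of O(M^3)
Dinv_b = b / d
Dinv_U = U / d[:, None]
small = np.eye(r) + U.T @ Dinv_U        # only an r x r system is solved
x = Dinv_b - Dinv_U @ np.linalg.solve(small, U.T @ Dinv_b)

assert np.allclose(x, np.linalg.solve(np.diag(d) + U @ U.T, b))
\end{verbatim}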
\subsubsection*{Example: Heavyside Inducing Variable}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{heaviside2}
\caption{Samples}
\label{fig:heaviside1}
\end{subfigure}\hfil % <-- added
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{heaviside1}
\caption{$u_m = f(\theta_m)$}
\label{fig:heaviside2}
\end{subfigure}\hfil % <-- added
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{heaviside3}
\caption{$u_m = \mathcal{L}_m f$}
\label{fig:heaviside3}
\end{subfigure}
\caption{Example of interdomain inducing variables for the zeroth-order Arc Cosine kernel.}
\label{fig:images}
\end{figure}
Using interdomain inducing variables in Sparse Variational Gaussian processes makes it possible to construct approximate GPs in which the basis functions exhibit interesting behaviour. As an illustration, in this example we design an inducing variable, through a linear transformation of the GP, for which the basis functions $\kux$ are given by the Heaviside function. To this end, assume $f \sim \GP(0, k)$ defined on $\sphere^1$ (the unit circle), $f: [-\pi, \pi] \rightarrow \Reals, \theta \mapsto f(\theta)$, with $k$ the zeroth-order Arc Cosine kernel \citep{cho2009kernel}, defined as:
\begin{equation}
k(\theta, \theta') = 1 - \frac{\varphi}{\pi},
\end{equation}
where $\varphi \in [0, \pi]$ is the shortest angle between the two inputs. In \cref{fig:heaviside1} we show samples from this GP and in \cref{fig:heaviside2} we plot $k(\theta_m, \theta)$ for three different values of $\theta_m$.
Let us now define the $m$-th linear operator as the sum of the derivative at $\theta_m$ and the integral over the domain:
\begin{equation}
\mathcal{L}_m = \frac{\calcd{}}{\calcd{\theta}}\Bigr|_{\theta = \theta_m} + \frac{\pi}{2} \int_{-\pi}^{\pi} \calcd{\theta},
\end{equation}
and $u_m = \mathcal{L}_m f$, then the covariance between $f$ and $u_m$ is given by
\begin{align}
[\kux]_m = \Cov(u_m, f(\cdot)) &= \Exp{f}{\mathcal{L}_m f\,f(\cdot)} && \text{Definition of covariance} \\
&= \mathcal{L}_m k(\theta, \cdot) && \text{Swap order $\mathcal{L}_m$ and $\mathbb{E}_f$} \\
&= \frac{\calcd{}}{\calcd{\theta}}\Bigr|_{\theta = \theta_m} k(\theta, \cdot) + \frac{\pi}{2} \int_{-\pi}^\pi k(\theta, \cdot) \calcd{\theta} && \text{Definition $\mathcal{L}_m$} \\
&= \mathbb{H}(\cdot - \theta_m) && \text{Computation}.
\end{align}
The last step follows from the fact that the derivative of $k$ w.r.t. $\theta$ is $-1$ for $\theta < \theta_m$ and $+1$ otherwise. The integral part of $\mathcal{L}_m$ is constant and simply shifts the covariance by $+1$, such that $\kux$ equals the Heaviside function. In \cref{fig:heaviside3}, we show $[\kux]_m = \Cov(u_m, f(\theta))$, computed empirically using $1000$ samples from the GP, for three different values of $\theta_m$. Using this construction and noting the parametric form of the mean of $q(f)$ (see~\cref{eq:qf}), we created an approximate posterior GP that is equivalent to a linear combination of Heaviside functions. In \cref{chapter:dnn-as-point-estimate-for-dgps} we will employ a similar approach to construct a GP whose basis functions are ReLU functions.
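The empirical estimate shown in \cref{fig:heaviside3} can be reproduced along the following lines. This is a rough sketch: the operator $\mathcal{L}_m$ is applied with a finite difference and a Riemann sum, and the grid size, jitter and seed are arbitrary choices.
\begin{verbatim}
import numpy as np

def k0(a, b):
    # Zeroth-order Arc Cosine kernel on the circle: k = 1 - phi / pi,
    # with phi the shortest angle between the two inputs.
    phi = np.abs(a[:, None] - b[None, :])
    phi = np.minimum(phi, 2 * np.pi - phi)
    return 1.0 - phi / np.pi

rng = np.random.default_rng(3)
theta = np.linspace(-np.pi, np.pi, 400, endpoint=False)
dt = theta[1] - theta[0]
K = k0(theta, theta) + 1e-6 * np.eye(theta.size)
f = np.linalg.cholesky(K) @ rng.standard_normal((theta.size, 1000))  # samples

i = theta.size // 3                      # grid index of theta_m
# u_m = L_m f: finite-difference derivative at theta_m plus the scaled integral
u = (f[i + 1] - f[i - 1]) / (2 * dt) + (np.pi / 2) * f.sum(axis=0) * dt

# Empirical covariance between u_m and f(theta), averaged over the samples
cov = (f - f.mean(axis=1, keepdims=True)) @ (u - u.mean()) / (u.size - 1)
\end{verbatim}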
\section{A Kernel Perspective on Gaussian Processes}
Kernels have a rich history in machine learning and functional analysis, and have enabled the development of versatile algorithms to analyse and process data. Kernels became widely noticed by the ML community through Support Vector Machines (SVMs), which, prior to the deep learning revolution, dominated many machine learning benchmarks. %, where they were used as a replacement of the simple we used a kernel function $k(x,x')$ instead of $x\transpose x'$. This means that rather than using $x\top x'$, one uses a function $k(x, x')$, which corresonponds to an inner-product in some space $\phi(x)\top \phi(x')$. This concept is also referred to as the ``kernel trick'' in the litarture. This made SVMs a compelling model for non-linear modelling problems.
% Prior to the deep learning revolution, kernel SVMs sat atop the leaderboards on many machine learning problems.
For instance, the winner of the 2011 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was an SVM that used a Fisher kernel \citep{perronnin2010large}. The year after, however, the same competition was---by a large margin---won by a deep convolutional neural network approach \citep[AlexNet]{alexnet}. Nonetheless, kernels have always remained relevant and have surfaced in many other methods in the literature, such as splines, component/factor analysis and Gaussian processes. Today, kernels are seen as an important tool to understand and analyse the behaviour of (deep) neural networks \citep[e.g.,][]{jacot2018neural}. % They always operate through pairwise comparisons on datapoints and are versitale because they do not impose strong assumptions regarding the type of data they operatr on, such as vectors, graphs or strings.
In what follows, we are going to focus on the theoretical properties of Mercer kernels with the application of Gaussian processes in mind. Some of the theorems and definitions in this section are quoted from the excellent course ``Machine learning with kernel methods''\footnote{\url{https://members.cbio.mines-paristech.fr/~jvert/svn/kernelcourse/course/2021mva}} by Julien Mairal and Jean-Philippe Vert. Another excellent resource for this topic is \citet{kanagawa2018gaussian}.
\begin{definition}
A positive definite (p.d.) kernel (or Mercer kernel) on a set $\mathcal{X}$ is a function $k: \mathcal{X} \times \mathcal{X} \rightarrow \Reals$ that is
\begin{enumerate}
\item symmetric $k(x, x') = k(x', x)$, and
\item satisfies for all $x_i \in \mathcal{X}$ and $c_i \in \Reals$: $\sum_{i}\sum_j c_i c_j k(x_i, x_j)\ge 0$.
\end{enumerate}
\end{definition}
One of the simplest examples of a valid p.d. kernel is the linear kernel $k(x, x') = xx'$ with $\mathcal{X} = \Reals$. In the next theorem we will see that kernels induce a space of functions. For the linear kernel, for example, this space will consist of linear functions of the form $x \mapsto a x$. For more complicated kernels, however, it will be harder to explicitly formulate the space of functions they induce.
% While being simple, as we will see through the next few theorems, the associated space of functions induced by this kernel is often too restricte
% In the next theorems we will see how kernels characterise a space over functions. Some of these spaces will be very explicit. For example, we will show how the linear kernel leads to a space of functions that are also linear, i.e. of the form $x \mapsto a x + b$. Other spaces belonging to kernel will be defined in a more abstract way.
\begin{theorem}[\citet{aronszajn1950theory}]
\label{theorem:k-and-H}
The kernel $k$ is a p.d. kernel on $\mathcal{X}$ if and only if there exists a Hilbert space $\rkhs$ and a mapping $\phi: \mathcal{X} \rightarrow \rkhs$ such that $\forall x,x' \in \mathcal{X}$:
\begin{equation}
k(x, x') = \langle \phi(x), \phi(x') \rangle_{\rkhs}
\end{equation}
\end{theorem}
Intuitively, a kernel corresponds to the inner product after embedding the data in some Hilbert space. We remind the reader that a Hilbert space $\rkhs$ is a (possibly infinite-dimensional) vector space with an inner product $\langle f, g \rangle_{\rkhs}$ and norm $\norm{f} = \langle f, f \rangle_{\rkhs}^{\frac{1}{2}}$, in which any Cauchy sequence converges. A Cauchy sequence is a sequence whose elements become arbitrarily close as the sequence progresses. A Hilbert space can thus be seen as a generalisation of Euclidean space to possibly infinite dimensions. In the machine learning literature, $\rkhs$ is also commonly referred to as the feature space and $\phi$ as the feature map.
A Reproducing Kernel Hilbert Space (RKHS) is the special type of Hilbert space mentioned in \cref{theorem:k-and-H}. It is a space of functions from $\mathcal{X}$ to $\Reals$ in which certain elements are ``parameterised'' by an element of $\mathcal{X}$. In other words, a datapoint $x \in \mathcal{X}$ is mapped to a function in the RKHS: $\phi(x) \in \rkhs$. To make this more explicit we write $\phi(x) = k_x(\cdot)$ to highlight that $k_x(\cdot)$ is another function in $\rkhs$.
\begin{definition}
\label{def:rkhs}
Let $\mathcal{X}$ be a set and $\rkhs$ a Hilbert space with inner-product $\langle \cdot, \cdot \rangle_{\rkhs}$. The function $k: \mathcal{X} \times \mathcal{X} \rightarrow \Reals$ is called a reproducing kernel of $\rkhs$ if
\begin{enumerate}
\item $\rkhs$ contains all functions of the form
\begin{equation}
\forall x \in \mathcal{X},\quad k_x: \mathcal{X} \rightarrow \Reals, x' \mapsto k(x, x')
\end{equation}
\item For every $x \in \mathcal{X}$ and $f \in \rkhs$ the reproducing property holds:
\begin{equation}
f(x) = \langle f, k(x, \cdot) \rangle_{\rkhs}
\end{equation}
\end{enumerate}
If a reproducing kernel exists, then $\rkhs$ is called a reproducing kernel Hilbert space (RKHS).
\end{definition}
As a result of the reproducing property, remark that $k$ can be written as an inner product in $\rkhs$:
\begin{equation}
k(x, x') = \langle k(x, \cdot), k(x', \cdot) \rangle_\rkhs,\quad\text{for}\,x,x'\in\mathcal{X},
\end{equation}
and therefore $k(x, \cdot)$ is sometimes referred to as the canonical feature map of $x$.
% \subsection{Connection GPs and RKHSs}
By the Moore-Aronszajn theorem \citep{aronszajn1950theory}, it can be shown that the reproducing kernel belonging to an RKHS is unique and, conversely, that a function can only be the reproducing kernel of a single RKHS. Furthermore, a function $k:\mathcal{X} \times \mathcal{X} \rightarrow \Reals$ is a positive definite kernel if and only if it is a reproducing kernel. In other words, there exists a one-to-one mapping between p.d. kernels and \emph{their} RKHSs.
We can define the RKHS, associated to a p.d. kernel $k$, as follows:
\begin{equation}
\rkhs = \left\{f = \sum_{i=1}^\infty c_i k(x_i, \cdot), \forall c_i \in \Reals, x_i \in \mathcal{X}:\quad \norm{f}_{\rkhs}^2 = \sum_{i,j} c_i c_j k(x_i, x_j) < \infty \right\}
\end{equation}
where the inner product between $f = \sum_i a_i k(x_i, \cdot)$ and $g = \sum_j b_j k(x_j, \cdot)$ is given by $\langle f,g \rangle_\rkhs = \sum_{i,j} a_i b_j k(x_i, x_j)$ and the norm is induced by the inner product $\norm{f}^2_{\rkhs} = \langle f, f \rangle_{\rkhs}$.
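For finite expansions these quantities can be evaluated directly, as in the short sketch below. A squared-exponential kernel and arbitrary coefficients are used purely for illustration, and the two expansions are allowed to use different expansion points.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.4):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

xa = np.array([-1.0, 0.2, 0.7])
a = np.array([0.5, -1.2, 2.0])   # f = sum_i a_i k(xa_i, .)
xb = np.array([0.0, 1.0])
b = np.array([1.0, -0.3])        # g = sum_j b_j k(xb_j, .)

inner_fg = a @ k(xa, xb) @ b     # <f, g>_H = sum_{ij} a_i b_j k(xa_i, xb_j)
norm_f_sq = a @ k(xa, xa) @ a    # ||f||_H^2

x0 = np.array([0.3])
f_x0 = (a @ k(xa, x0)).item()    # f(x0) = <f, k(x0, .)>_H (reproducing property)
\end{verbatim}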
\subsection{Spectral Formulation}
\label{section:theory:spectral-formulation}
\begin{definition}[Kernel Operator]
Associated to a kernel $k$ we can define the kernel operator
\begin{equation}
\mathcal{K} f = \int k(\cdot, x) f(x) \calcd{\nu(x)},
\end{equation}
where $\nu$ denotes a measure. In our case this will typically be the Lebesgue measure or depend on the input distribution: $\nu(x) = p(x)\calcd{x}$.
\end{definition}
It can be shown that for p.d. kernels the associated operator $\mathcal{K}$, viewed as an operator on $L_2(\nu)$, is compact, positive (i.e. $\langle f, \mathcal{K} f\rangle_{L_2(\nu)} \ge 0$) and self-adjoint (i.e. $\langle f, \mathcal{K} g\rangle_{L_2(\nu)} = \langle \mathcal{K} f, g\rangle_{L_2(\nu)}$). Then, according to the spectral theorem \citep[e.g.,][Chapter 17]{lang1993}, the operator can be diagonalised: there exists a complete orthonormal set $\{\phi_1, \phi_2, \ldots \}$ of eigenfunctions in $L_2(\nu)$ with real and non-negative eigenvalues $\lambda_1 \ge \lambda_2 \ge \ldots \ge 0$. This corresponds to
\begin{equation}
\mathcal{K} \phi_n = \lambda_n \phi_n\quad\text{and}\quad \langle \phi_n, \phi_m \rangle_{L_2(\nu)} = \delta_{nm},
\end{equation}
where $\delta_{nm} = 1$ if $n=m$ and $0$ otherwise. The number of eigenfunctions and associated eigenvalues can be countably infinite ($\Naturals$) or finite ($\{1, \ldots, N\}$). This result is analogous to the fact that positive semi-definite matrices can be decomposed into a finite set of eigenvectors with non-negative eigenvalues, where the eigenvectors form a basis for the space.
Mercer's theorem describes the kernel $k$ in terms of $\{(\lambda_n, \phi_n)\}$, and as such opens interesting connections to GPs.
\begin{theorem}[Mercer]
Let $\mathcal{X}$ be a compact metric space, $\nu$ a positive measure on $\mathcal{X}$ and $\{(\lambda_n, \phi_n)\}$ the eigensystem of a p.d. kernel as described above, then
\begin{equation}
\label{eq:mercer}
k(x,x') = \sum_n \lambda_n \phi_n(x) \phi_n(x'),\quad\text{for}\ x,x'\in \mathcal{X}
\end{equation}
where the convergence is absolute and uniform.
\end{theorem}
It is important to note that $\{(\lambda_n, \phi_n)\}$ will depend on the measure $\nu$, which implicitly also depends on the domain $\mathcal{X}$. Furthermore, while Mercer's theorem is a very powerful result, there are only a few cases in which it is possible to compute the eigensystem associated with a kernel in closed form. In \cref{chapter:vish} we discuss such a case for rotationally invariant kernels defined on the hypersphere.
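When no analytic eigensystem is available, it can still be approximated numerically, for instance by discretising the kernel operator with samples from $\nu$ (the Nystr\"om method). The following sketch is a rough illustration, with an arbitrary squared-exponential kernel and a standard normal input measure.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)               # samples from the measure nu
K = k(x, x)

# Eigenvalues of K / n approximate the operator eigenvalues lambda_n, and the
# rescaled eigenvectors approximate phi_n at the sample locations.
evals, evecs = np.linalg.eigh(K / n)
lam = evals[::-1]                        # decreasing order
phi_at_x = np.sqrt(n) * evecs[:, ::-1]   # approximately orthonormal in L2(nu)
\end{verbatim}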
We can use the eigendecomposition of the kernel operator to represent the RKHS of a kernel.
\begin{theorem}[Mercer Representation of an RKHS]
Let $\mathcal{X}$ be a compact metric space, $k$ a p.d. kernel and $\{(\lambda_n, \phi_n)\}$ the eigensystem of the kernel operator for positive measure $\nu$, then the RKHS of $k$ is given by
\begin{equation}
\label{eq:mercer-rkhs}
\rkhs = \left\{
f = \sum_n a_n \phi_{n}:
\norm{f}_{\rkhs}^2 = \sum_n \frac{a_n^2}{\lambda_n} < \infty
\right\},
\end{equation}
with an inner product between $f,g \in \rkhs$ given by
\begin{equation}
\langle f, g \rangle_{\rkhs} = \sum_n \frac{a_n b_n}{\lambda_n},\quad\text{where}\quad f = \sum_n a_n \phi_n\ \text{and}\ g = \sum_n b_n \phi_n.
\end{equation}
\end{theorem}
The reproducing property of the Mercer representation of an RKHS is straightforward to show. Let $f \in \rkhs$ have coefficients $\{a_n\}$ and let $k$ be the kernel associated with $\rkhs$, whose Mercer representation we assume to be known; then
\begin{equation}
\langle f, k(x, \cdot) \rangle_\rkhs = \sum_n \frac{a_n \lambda_n \phi_n(x)}{\lambda_n} = \sum_n a_n \phi_n(x) = f(x).
\end{equation}
Through the Mercer decomposition of a kernel we can also define a Gaussian process, via what is known as the Karhunen-Lo\`eve expansion. We define $f \sim \GP(0, k)$ as
\begin{equation}
\label{eq:karhunen-loeve}
f = \sum_n \sqrt{\lambda_n} \xi_n \phi_n\quad\text{with}\quad\xi_n \sim \NormDist{0, 1}
\end{equation}
as indeed the covariance between two points is given by
\begin{align}
\Cov(f(x), f(x'))
&= \sum_{n,m} \sqrt{\lambda_m} \sqrt{\lambda_n} \ExpSymb[\xi_n \xi_m] \phi_n(x) \phi_m(x') && \cref{eq:karhunen-loeve}\\
&= \sum_n \lambda_n \phi_n(x) \phi_n(x') && \ExpSymb[\xi_n \xi_m] = \delta_{n\,m}\\
&= k(x, x') && \cref{eq:mercer},
\end{align}
which matches the kernel of the GP.
Interestingly, using the Karhunen-Lo\`eve expansion to compute the RKHS norm of a GP sample $f$ yields $\norm{f}^2_\rkhs = \sum_{n} \xi_{n}^2$, which is a divergent series. This shows that GP samples do not belong to the RKHS of their defining kernel, which may seem surprising. A straightforward solution to this problem is to truncate the Karhunen-Lo\`eve expansion and to only use the first $\tilde{N}$ eigenvalues and eigenfunctions: $f = \sum_{n=1}^{\tilde{N}} \sqrt{\lambda_n} \xi_n \phi_n$, which approximates the kernel and the corresponding GP.
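A truncated Karhunen-Lo\`eve sample can be generated numerically once an (approximate) eigensystem is available. The sketch below discretises the operator on a grid with the Lebesgue measure, keeps the leading eigenpairs and draws one sample; the kernel and all settings are illustrative only.
\begin{verbatim}
import numpy as np

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 300)
w = x[1] - x[0]                              # quadrature weight (Lebesgue measure)
evals, evecs = np.linalg.eigh(k(x, x) * w)   # discretised kernel operator
order = np.argsort(evals)[::-1]
lam = np.clip(evals[order], 0.0, None)
phi = evecs[:, order] / np.sqrt(w)           # approximately orthonormal in L2

n_trunc = 20                                 # keep the leading eigenpairs only
xi = rng.standard_normal(n_trunc)
f = phi[:, :n_trunc] @ (np.sqrt(lam[:n_trunc]) * xi)   # truncated KL sample

# The truncated sum  sum_n lam_n phi_n(x) phi_n(x')  approximates k(x, x')
K_approx = (phi[:, :n_trunc] * lam[:n_trunc]) @ phi[:, :n_trunc].T
\end{verbatim}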
\section{Conclusion}
\Cref{eq:karhunen-loeve} gives an alternative definition of a GP in terms of its basis functions $\{\phi_n\}$ and weights $\{\sqrt{\lambda_n} \xi_n\}$. This function-space view allows, among other things, the definition of an interdomain inducing variable (cf. \cref{section:interdomain-inducing-variables}) that leads to linear-time sparse GP approximations. Unfortunately, for many kernels there is no explicit formulation of their decomposition, which renders these methods purely theoretical. In \cref{chapter:vish}, however, we study a concrete class of kernels (i.e. rotationally invariant kernels) defined on a specific space (i.e. hyperspheres) for which the eigendecomposition can be obtained.
% TODO:
% \subsubsection{Stationary kernels}
% - kernel operator becomes convolution
% For a stationary process $f(t)$, the Wiener-Khintchine theorem states (as a special case of Bochner's theorem, see \citet{rasmussen2006}) that the covariance function $k(r)$, is the inverse Fourier transform of the spectral density $S(\omega)$
% \begin{equation}
% k(r) = \int_{\Reals^d} S(\omega) e^{i 2 \pi \omega r} \calcd{\omega} \quad\text{and}\quad S(\omega) = \int_{\Reals^d} S(\omega) e^{-i 2 \pi \omega r} \calcd{r}.
% \end{equation}
\documentclass[main.tex]{subfiles}
\begin{document}
\section{Discussion}
In this thesis we set out to create a suite of programs to visualise key concepts in the field of condensed matter physics. These concepts included crystal structures, families of lattice planes, a simulation of neutron scattering, and band structures of two dimensional materials.
The end result is a code package written in Python which accomplishes just that. It includes 4 main functions: \texttt{Lattice}, \texttt{Reciprocal}, \texttt{Scattering} and \texttt{Band\_structure}. These 4 functions take in a wealth of arguments, supplied by the user, and produce some sort of figure. Examples include the crystal structure of a hexagonal lattice with a one-atom basis (figure \ref{fig:lattice_demo_2}), the (001) family of lattice planes for a bcc lattice (figure \ref{fig:lattice_planes}), neutron scattering on an fcc lattice (figures \ref{fig:scattering_no_systemic} and \ref{fig:scattering_systemic}) and the band structure of a monovalent two dimensional material with a high-strength potential (figure \ref{fig:band_structure_strong}).
More can be done with these programs though, and they are by no means a comprehensive resource. For the lattice plotting program, there is currently no perfect algorithm to detect the type of lattice that the user has input.
The algorithm employed to detect lattices relies on the user inputting primitive lattice vectors with the specifications from appendix \ref{app:lattice}. These lattices can be rotated and scaled to one's heart's content, and the program will still detect them. But the algorithm does not take into account the non-uniqueness of primitive lattice vectors.
The "easy" way to take this into account would be to use the method specified in section \ref{sec:lattice_theory}, calculating the matrix $ M $ for the different lattices and checking whether or not it meets the necessary criteria. However, this method cannot work if the user also rotates the lattice. Different magnitudes of input primitive lattice vectors will also complicate things, as the program would have to check a range of different sets of primitive lattice vectors for each lattice. For example, say the user inputs an fcc lattice with $ a_1 = a \D(1,0,0) $, $ a_2 = a\D(0, 1/2, 1/2) $ and $ a_3 = a\D(1/2, 0, 1/2) $. There is currently no way for the program to "know" which magnitude to use for the primitive lattice vectors used for comparison, and the only recourse is to brute-force check all combinations.
So currently no catch-all solution of classification is implemented. A thing to note is that the current algorithms only depend on the primitive lattice vectors. A different approach may be to construct the full crystal and look for a specific "type" of unit cell, corresponding to one of the Bravais lattices. However, this will necessarily involve a more complicated algorithm, which unfortunately there was not time for.
Furthermore, the program only calculates the relevant quantities for one specific set of values per function call. As such, if the user wants to see how systemic absences appear in scattering experiments, or how the Fermi surface distorts for stronger potentials, they will need to manually call the relevant function multiple times with different form factors or potential strength. This gets tedious after a while.
A solution to this is to create some sort of GUI which will display input boxes or sliders for the relevant quantities, automatically call the functions and display the results. This can be done, for example, with the Python package \href{http://flask.pocoo.org/}{Flask}, which allows the creation of web applications. These can be run locally on the user's machine and allow for JavaScript integration. This is especially useful (necessary, even!) for accomplishing the goal of added interactivity, as Matplotlib figures can be rendered as JavaScript objects.
With regards to distortion of the Fermi surface: The next (small) step would be to maybe include different two dimensional lattices. A rectangular lattice would be as simple as altering a value or two in the code, for example. But perhaps a better change would be to bump up the dimensions of the lattices to three. Doing this, however, means we run into problems with the number of available dimensions. It seems we live in a universe with only three spatial dimensions, which means we will use up all of those just specifying the geometry of the lattice, leaving no dimension for any other quantities of interest - like the dispersion relation of the particles in the crystal.
This limits us to only looking at isosurfaces of energy - like the Fermi surface, where we in two dimensions could view the entire Fermi sea (and the rest of the dispersion relation). While this would be enough to view the distortion of the Fermi surface (especially if combined with a web-app as specified above), there is the added issue of the computational time required. Currently, a two dimensional lattice with $ n_k $ values of $ \V{k} $ in each direction and $ n_G $ allowed values of $ \V{G} $ (again in each direction) necessitates finding the eigenvalues of $ n_k^2 $ matrices of size $ n_G^2 \times n_G^2 $. By default these values of $ n_k $ and $ n_G $ are around 100 and 7 respectively, meaning the program diagonalises $ 10^4 $ $ 49\times 49 $ matrices. Increasing the dimensionality would increase both the number of matrices to be diagonalised, and the size of these. We would increase the number of matrices by a factor of $ n_k $, and the size of each of these would go from $ n_G^2 \times n_G^2$ to $ n_G^3 \times n_G^3 $. With the standard arguments this would necessitate the diagonalisation of $ 10^6 $ matrices of size $ 343 \times 343 $. A very considerable increase.
As it currently stands the program does take a while to compute the band structure (around a second or so, for a decent resolution), and to get the same resolution in three dimensions would take at least a factor $ n_k \approx 100 $ longer (assuming it takes the same amount of time to diagonalise a $ 49 \times 49 $ matrix as it does a $ 343 \times 343 $ matrix, which it most certainly does not). So if three dimensions are to be considered, the program would need to be thoroughly optimised. One way to do this would be to employ an algorithm to find just the lowest eigenvalue of these matrices, as we are only concerned with the lowest band - i.e. the Fermi surface.
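As a rough illustration of that last point, the snippet below compares a full diagonalisation with \texttt{scipy.sparse.linalg.eigsh} asked for only the smallest eigenvalue. A random symmetric matrix stands in for the actual Hamiltonian matrix, so the numbers and any speed-up are indicative at best.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n_G = 7
size = n_G**3                         # 343 x 343 in the three-dimensional case
A = rng.standard_normal((size, size))
H = (A + A.T) / 2                     # stand-in for one Hamiltonian matrix

w_all = np.linalg.eigvalsh(H)         # full spectrum (what is done currently)
w_low = eigsh(H, k=1, which='SA')[0]  # only the smallest (algebraic) eigenvalue
assert np.isclose(w_all[0], w_low[0])
\end{verbatim}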
A line had to be drawn somewhere though, and a two dimensional lattice seemed like a reasonable compromise between added dimensionality and computational complexity.
One last issue is in regards to the programming language and packages chosen. Python is an interpreted language, which means that it necessarily trades computational speed for added ease of development. There are alternatives to this, like \href{https://nim-lang.org/}{Nim} or \href{https://julialang.org/}{Julia}, which are properly compiled or just-in-time compiled respectively. They do not, however, have as large a community as Python does, and therefore not as big a support for third-party packages like Matplotlib.
Further, Matplotlib has one glaring issue in that it does not support a fully fledged 3D graphics engine. This means that all the three dimensional figures in this thesis and the programs are actually just 2D projections of underlying 3D data (of course, computer screens are 2D, so the projection has to happen at some point, but the way Matplotlib does it has significant drawbacks). This creates artefacts: for example, there is no proper support for the intersection of surface plots. This can be seen in the band structure program, where the orange horizontal plane, indicating the Fermi energy, does not properly intersect the dispersion relation, seen in blue.
There are other packages, like \href{https://docs.enthought.com/mayavi/mayavi/}{Mayavi} which do support proper 3D plotting. However, these problems (and their potential solution in this package) were discovered too late in the process of writing, so could not be solved in time.
In conclusion: 4 programs have been created which illustrate concepts in the field of condensed matter physics. While they are not without their flaws or potential for improvement, they do still hold merit, and could be a valuable tool if used in combination with traditional book-based learning.
\end{document}
\chapter{Possessive constructions}
\label{bkm:Ref155077914}\section{General background}
The topic of this chapter is possessive noun phrases, that is, noun phrases that involve a possessive modifier, the latter being roughly everything that is functionally equivalent to genitives. Following what is now established terminology, I shall speak of “possessor” and “possessee” for the two entities involved in the possessive relation. It should be noted from the start that possessive constructions may express a diversity of relations that sometimes have very little to do with “ownership”, which has traditionally been seen as their basic meaning.
When it comes to the expression of possessive relations in noun phrases, Scandinavian languages display a bewildering array of constructions. Quite often, we find a number of competing possibilities within one and the same variety. In this chapter, my main concern will be with lexical possessive NPs – constructions where the possessor is a full NP rather than a pronoun. This includes possessor NPs with different kinds of heads – most notably, the head may be either (i) a proper name or an articleless kin term such as ‘father’, or (ii) a common noun, usually in the definite form. \citet{Delsing2003a} treats these two types under separate headings, which is motivated by the fact that some constructions show up with the first type only. However, as he himself notes, there are no constructions which categorically exclude this type.
A caveat here about the available material: noun phrases with full NP possessors are less frequent in spoken and informal written language than one would like as a linguist studying this construction, making it difficult to collect enough data to formulate safe generalizations about usage.
\section{\textit{S-}genitive: old and new}
The traditional device for marking possessive constructions in Indo-European is the genitive case. In older Germanic, like in its sister branches, the genitive also had various other functions – thus, both verbs and prepositions could govern the genitive. This situation is still preserved in some of the modern Germanic languages, such as (Standard) German and Icelandic. In most Germanic varieties, however, the genitive case has either been transformed or has disappeared altogether. Thus, in languages such as Dutch and West Frisian, like in many spoken Scandinavian varieties, we obtain what \citet{KoptjevskajaTamm2003} calls “deformed genitives”. In these, what is kept from the traditional genitive case is primarily the suffixal marking, usually a generalized suffix such as \textit{-s}. Other common characteristics of deformed genitives are that the possessor phrase is preposed relative to the head noun and that there are restrictions on what kinds of NPs can occur as possessors – in the strictest cases, only proper names and name-like kinship terms. Syntactically, deformed genitives tend to behave more like “determiners” than like “modifiers”, which, among other things, means that they do not co-occur with definite articles. Even in Standard German, where the old genitive is in principle fairly well preserved, there is arguably an alternative “deformed” construction of this kind (e.g. \textstyleLinguisticExample{Peters Buch }‘Peter’s book’\textit{).} If we look at Central Scandinavian, we find a possessive construction which resembles the “deformed genitives” in several ways, but which also differs significantly from it. What is rather curious is that a construction with almost exactly the same properties is found in English – the so-called \textstyleLinguisticExample{s}{}-genitive. The English and Scandinavian constructions share with each other and with the garden-variety deformed genitive at least three properties: (i) the preposed position in the noun phrase; (ii) the generalized \textstyleLinguisticExample{s-}suffix; (iii) the lack of definite marking on the possessee NP or its head noun. They differ from other deformed genitives in not being restricted to proper names and kinship terms and in being possible with basically any noun phrase, regardless of syntactic complexity. The marker\textit{ {}-s} is always on the last element of the noun phrase, which may entail “group genitives” such as Swedish \textstyleLinguisticExample{far mins bok} ‘my father’s book’ or English \textstyleLinguisticExample{Katz and Fodor’s theory}, where the\textit{ {}-s}\textstyleLinguisticExample{ }is not suffixed to the head noun but rather to a postposed modifier or to the last element of a conjoined NP.
\textit{S}{}-genitives, so characterized, are not found generally in Scandinavian, but are in fact essentially restricted to “Central Scandinavian”, that is, standard Danish and Swedish, with a somewhat reluctant extension to some forms of standard Norwegian and the spoken varieties of southern Scandinavia (south of the \textstyleLinguisticExample{limes norrlandicus}). Even in parts of southern Sweden, however, deviant systems are found. Thus, in central parts of the province of Västergötland, according to the description in \citet{Landtmanson1952}, the ending\textit{ {}-a} is commonly found with proper names and kinship terms. This is also in accordance with the usage in the single Cat Corpus text from that province, the title of which is \textstyleLinguisticExample{Mormora Misse} ‘Granny’s cat’ (likewise, in the same text: \textstyleLinguisticExample{Allfrea kâring} ‘Alfred’s wife’). The ending\textit{ {}-s} is found with a few types of proper names and also with common nouns, “to the extent they can be used in the genitive at all” (\citealt[68]{Landtmanson1952}). Genitive forms in\textit{ {}-a} are also found in some Upplandic dialects. In Written Medieval Swedish, Wessén (1968: I:142) notes that the original\textit{ {}-a}\textstyleLinguisticExample{r} ending of \textstyleLinguisticExample{i-} and \textstyleLinguisticExample{u-}stems, often reduced to\textit{ }\textit{{}-}\textit{a}, survived for quite a long time with proper names, “especially in foreign ones”. “In Västergötland and Småland it even still survives: \textstyleLinguisticExample{Davida} ‘David’s’ etc.” The\textit{ {}-a}\textstyleLinguisticExample{r} ending, in non-reduced form, is also found in Orsa (Os): \textstyleLinguisticExample{Alfredar keling} ‘Alfred’s wife’ (but \textstyleLinguisticExample{Momos Måssä} ‘Granny’s cat’, with an \textstyleLinguisticExample{s-}ending).
More elaborate genitive forms are sometimes found. Thus, in the Cat Corpus text from Träslövsläge (Ha) we find the ending\textit{ {}-s}\textstyleLinguisticExample{a}, as in \textstyleLinguisticExample{Mormosa katt }‘Granny’s cat’ and \textstyleLinguisticExample{Alfredsa käring }‘Alfred’s wife’. The ending\textit{ {}-s}\textstyleLinguisticExample{a} is apparently a combination of the two endings\textit{ {}-s} and\textit{ {}-a}\textstyleLinguisticExample{. }It is also found in Faroese possessives, and in the Alunda vernacular (Up) as described in \citet{Bergman1893}. In the text from Sotenäs in Bohuslän the ending is\textit{ {}-s}\textstyleLinguisticExample{es}, apparently a doubling of\textit{ {}-s}: \textstyleLinguisticExample{Mormorses pissekatt} ‘Granny’s pussy cat’ and \textstyleLinguisticExample{Alfreses kjäreng} ‘Alfred’s wife’ (see \citet{Janzén1936} for a discussion of\textit{ {}-s}\textstyleLinguisticExample{es} forms in Bohuslän vernaculars). Compare also similar examples from Hälsingland with definite forms of the possessee under \sectref{sec:5.3}.
In the vernaculars of the Peripheral Swedish area, like in most of Norway, the \textstyleLinguisticExample{s}{}-genitive, at least in its canonical form as described above, is generally absent or weakly represented in a way that suggests late influence from acrolectal varieties. \citet[41]{Delsing2003a} says that the \textstyleLinguisticExample{s}{}-genitive is totally absent in the “old dative vernaculars” of Norrbotten and coastal northern Västerbotten, as well as in Jämtland and Härjedalen, and in the Dalecarlian area. In the rest of northern and middle Norrland there are only a few attestations, he says, and they seem to be a “young phenomenon”. On the whole, the weak support for the \textstyleLinguisticExample{s}{}-genitive in the vernaculars of peninsular Scandinavia, with the exception of the Southern Swedish/East Danish dialect area, is striking. In fact, it appears to me that the development of the \textstyleLinguisticExample{s}{}-genitive, as described e.g. by \citet{Norde1997},\footnote{ Norde describes the development of the \textit{s}{}-genitive as an essentially internal phenomenon in Swedish and does not treat deviant developments in vernaculars or draw parallels to Danish.} may be essentially restricted to Danish, Scanian and prestige or standard varieties of Swedish, and possibly some parts of Götaland.
\section{Definite in \textit{s-}genitives}
\label{sec:5.3}
A construction which is fairly analogous to the standard \textstyleLinguisticExample{s}{}-genitive – differing from it primarily in that the head noun takes the definite form – is found in a relatively large part of the Peripheral Swedish area on both sides of the Baltic (\citealt[27]{Delsing2003a}).
In Mainland Sweden, the strongest area seems to be Hälsingland. In the Cat Corpus, it is found in all three texts from this province, although alternating with the regular \textit{s}{}-genitive construction. Thus, for ‘Alfred’s wife’ we find \textstyleLinguisticExample{Alfreds käringa} from Järvsö (Hä) and \textstyleLinguisticExample{Alfreses tjeringa} from Färila (Hä), with a doubled ending\textit{ {}-ses}, and \textstyleLinguisticExample{frua Alfreds}, with the order possessee-possessor, from Forsa (Hä). Further north, it is less common, but does occur. Bergholm et al. found cases such as \textstyleLinguisticExample{Pers bole }‘Per.{\gen} table.{\deff}’ and \textstyleLinguisticExample{mine brorn} ‘my brother.{\deff}’ in Burträsk (NVb). \citet[27]{Delsing2003a} enumerates quite a few examples from the literature and from written texts, covering all the coastal provinces in Norrland except Norrbotten, and also the Laplandic parts of the Westrobothnian area.
The construction is also found in Gotland, as in the Cat Corpus examples \textstyleLinguisticExample{Mormors sänge} ‘Granny’s bed’ from Fårö (Go) and \textstyleLinguisticExample{Mårmårs sänggi }‘Granny’s bed’ from Lau (Go). In Gotland, definite forms can also be used with pronominal possessors, as in \REF{247}.
\ea%\label{}
\langinfo{\label{bkm:Ref155247297}Lau (Go)}{}{}\\
\gll De jär \textbf{min} \textbf{kattn}.\\
it be.{\prs} \textbf{my} \textbf{cat.{\deff}}\\
\glt ‘It is my cat.’ (Cat Corpus)
\z
The construction seems to be general in the whole Trans-Baltic area. In most cases, the possessor takes the affix\textit{ {}-s}, but in Ostrobothnian\textit{ {}-as }is also quite common – I shall return to this in \sectref{sec:5.4.2}. In Ostrobothnian, \citet{ErikssonEtAl1999} also found considerable variation between definite and indefinite possessees – roughly 50 per cent of each.
From older times, \citet[523]{Hesselman1908} quotes examples from the 17\textsuperscript{th} century lexicographer Ericus Schroderus such as
\ea%\label{}
\langinfo{Upplandic (17\textsuperscript{th} century)}{}{}\\
\gll Lijffzens Träet\\
life.{\gen}.{\deff} tree.{\deff}\\
\glt ‘the tree of life’
\z
and from Bureus, another 17\textsuperscript{th} century writer:
\ea%\label{}
\langinfo{Upplandic (17\textsuperscript{th} century)}{}{}\\
\gll hos Anders Burmans i Rödbäck systren\\
at Anders Burman.{\gen} in Rödbäck sister.{\deff}\\
\glt ‘at the sister of Anders Burman in Rödbäck’
\z
and says “in the same way as modern Upplandic: \textit{Geijers dalen }[Geijer’s Valley], \textit{bokhandlarens pojken} ‘the bookseller’s boy’ etc.” This is the only place in the literature known to me where definites with \textit{s}{}-genitives are said to be found in Upplandic. (The first example is clearly a compound in the modern language, spelled \textit{Geijersdalen}.)
As for the alternative construction with the possessee-possessor word order, found in the example from Forsa (Hä) above, Delsing quotes a number of examples, some of them, as he says, “from unexpected places” such as Värmland and Västergötland. The word order possessee-possessor was normal in Old Nordic and is still used in Icelandic (although without definite marking on the possessee). It is thus possible that it is an archaism at least in some places – although hardly for the Laplandic vernaculars mentioned by Delsing.
\section{Constructions with the dative}
\label{sec:5.4}
\subsection{The plain dative possessive}
\label{sec:5.4.1}
In many Peripheral Swedish vernaculars, a common possessive construction involves a dative-marked possessor. In most cases, the word order is possessee-possessor, but preposed possessors also occur. The possessee NP is normally morphologically definite only when it precedes the possessor. I shall call this construction \textbf{the plain dative possessive}. The following two phrases exemplify the postposed and preposed variants of this construction:
\renewcommand{\eachwordone}{\scshape}
\renewcommand{\eachwordtwo}{\itshape}
\ea%\label{}
\langinfo{\textit{Skelletmål} (NVb)}{}{}\\
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}} \\
skoN paitjåm \\
shoe.{\sg}.{\deff} boy.{\dat}.{\sg}.{\deff} \\
\glt ‘the boy’s shoe’ (\citealt[22]{Marklund1976})
\z
\ea%\label{}
\langinfo{\label{bkm:Ref95906318}Nederkalix (Kx)}{}{}\\
\glll \textbf{\textsc{possessor}} \textbf{\textsc{possessee}} \\
Mårmorn kjaatt\\
Granny.{\deff}.{\dat} cat\\
\glt ‘Granny’s cat’ (Cat Corpus, title of translation)
\z
Even in those vernaculars where the dative is preserved, cases of zero-marking are common. Thus, many examples of this construction look like plain juxtaposition of two NPs:
\ea%\label{}
\langinfo{\label{bkm:Ref134419452}Älvdalen (Os)}{}{}\\
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}} \\
kalln Smis-Margit \\
man.{\deff}.{\sg} Smis-Margit\\
\glt ‘Smis-Margit’s husband’ (\citealt[97]{Levander1909})
\z
In examples such as \REF{252}, the possessor NPs can be regarded as being in the dative – the lack of overt marking is in accordance with the grammar of the vernacular. However, there are also examples where an expected overt marking is lacking. For instance, in the Cat Corpus, we find in addition to the dative-marked \REF{251} an example such as \REF{253}, with the nominative \textstyleLinguisticExample{mårmora} ‘Granny’:
\renewcommand{\eachwordone}{\itshape}
\renewcommand{\eachwordtwo}{\upshape}
\ea%\label{}
\langinfo{\label{bkm:Ref110681866}Nederkalix (Kx)}{}{}\\
\gll Hån Murre sprant åopp\\
{\pda}.{\m} Murre jump.{\pst} up\\
\gll å laar ’se opa \textbf{måan} \textbf{mårmora}.\\
and lay {\refl} on \textbf{belly.{\deff}} \textbf{Granny.{\deff}}\\
\glt ‘Murre jumped up and lay down on Granny’s belly.’ (Cat Corpus)
\z
\citet[161--163]{Källskog1992} treats the possessive dative in the Överkalix vernacular (Kx) in some detail and says that it is “perhaps the most common way of expressing the genitive concept”.\footnote{ “Det kanske vanligaste sättet att uttrycka genitivbegreppet i överkalixmålet är att använda en omskrivning med dativ.” It is not clear why Källskog uses the term \textit{omskrivning} ‘periphrasis’ here – it would seem that the dative construction is not more periphrastic than the \textit{s-}genitive.} She enumerates five possibilities:
\begin{enumerate}
\item[1] Definite possessee + definite possessor in the dative
\end{enumerate}
\renewcommand{\eachwordone}{\scshape}
\renewcommand{\eachwordtwo}{\itshape}
\ea%\label{}
\langinfo{Överkalix (Kx)}{}{}\\
\ea {
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}}\\
stjella fa:ren iert\\
bell sheep.{\dat} your.{\n}\\
\glt ‘the bell of your sheep’
}
\ex {
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}}\\
möylhn stäjntn hina\\
ball.{\deff} girl.{\dat} this.{\f}\\
\glt ‘this girl’s ball’
}
\z
\z
\begin{enumerate}
\item[2] Definite possessor in the dative + indefinite possessee
\end{enumerate}
\ea%\label{}
\langinfo{Överkalix (Kx)}{}{}\\
\ea {
\glll \textbf{\textsc{possessor}} \textbf{\textsc{possessee}}\\
färssfe:ro djey$\lambda $\\
paternal\_grandfather.{\dat} field\\
\glt ‘Grandfather’s field’
}
\ex {
\glll \textbf{\textsc{possessor}} \textbf{\textsc{possessee}}\\
kwäjen ka$\lambda $v\\
heifer.{\dat} calf\\
\glt ‘The heifer’s calf has black and red spots.’
}
\z
\z
\begin{enumerate}
\item[3] Indefinite possessee + indefinite possessor in the dative
\end{enumerate}
\ea
\langinfo{Överkalix (Kx)}{}{}\\
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}}\\
in hesst ino åokonna kär\\
{\indf} horse {\indf}.{\dat}.{\m} unknown man\\
\glt ‘an unknown man’s horse’
\z
\begin{enumerate}
\item[4] Indefinite possessee + definite possessor in the dative
\end{enumerate}
\ea%\label{}
\langinfo{Överkalix (Kx)}{}{}\\
\glll \textbf{\textsc{possessee}} \textbf{\textsc{possessor}}\\
in så:n sistern\\
{\indf} son sister.{\deff}.{\dat}\\
\glt ‘a son of my sister’
\z
\begin{enumerate}
\item[5] Possessor without case-marking + indefinite possessee
\end{enumerate}
\ea%\label{}
\langinfo{Överkalix (Kx)}{}{}\\
\glll \textbf{\textsc{possessor}} \textbf{\textsc{possessee}}\\
mäjn ba:n laigseker\\
my child.{\pl} toy.{\pl}\\
\glt ‘my children’s toys’
\z
The first, third and fourth possibilities clearly represent the postposed variant of the plain dative possessive, and the second possibility the preposed variant. In the fifth case, the dative has been replaced by the nominative.
\citet{Rutberg1924}, in her description of \textit{Nederkalixmål}, presents paradigms where the genitive and the dative are identical throughout. Both \citet[161]{Källskog1992} and \citet[42]{Delsing2003a} take this as an indication that dative-marked possessors are possible. Indeed, the Cat Corpus text from Nederkalix contains at least three clear examples – \REF{251} above and also the following:
\renewcommand{\eachwordone}{\itshape}
\renewcommand{\eachwordtwo}{\upshape}
\ea%\label{}
\langinfo{Nederkalix (Kx)}{}{}\\
\gll Utimila var ‘e för varmt baki \textbf{röyggen} \textbf{mårmorn}.\\
sometimes be.{\pst} it too hot behind \textbf{back.{\deff}} \textbf{Granny.{\deff}.{\dat}}\\
\glt ‘[The cat thought:] Sometimes it was too hot behind Granny’s back.’ (Cat Corpus)
\z
\ea%\label{}
\langinfo{Nederkalix (Kx)}{}{}\\
\gll mårmorn vé\\
Granny.{\deff}.{\dat} firewood\\
\glt ‘Granny’s firewood’ (Cat Corpus)
\z
For \textit{Lulemål}, \citet{Nordström1925} says that the genitive, like the dative, takes the ending\textit{ {}-o}. He gives the example \textstyleLinguisticExample{färo mööss} ‘father’s cap’. In the Cat Corpus, there are examples from \textit{Lulemål} such as \textstyleLinguisticExample{Mormoro lillveg} ‘Granny’s little road’. \citet[163]{Källskog1992} quotes two proverbs from transcriptions done by E. Brännström, interpreting the \textstyleLinguisticExample{o}{}-ending as a dative marker:
\ea%\label{}
\langinfo{Nederluleå (Ll)}{}{}\\
\ea {
\gll Fisk o bröd jer \textbf{bånndo} \textbf{fööd}.\\
fish and bread be.{\prs} \textbf{farmer.{\deff}.{\dat}} \textbf{food}\\
\glt ‘Fish and bread are the farmer’s food.’
}
\ex {
\gll He jer ållt \textbf{bånndo} \textbf{arrbäjt}.\\
it be.{\prs} clearly \textbf{farmer.{\deff}.{\dat}} \textbf{work}\\
\glt ‘It is clearly the farmer’s work.’
}
\z
\z
She also mentions an expression \textstyleLinguisticExample{måora pappen} ‘father’s mother’, said to be obsolete by a speaker born in 1898.
From Böle in Råneå parish (Ll), \citet[113]{Wikberg2004} quotes examples such as \textstyleLinguisticExample{gråsshändlaro daoter} ‘the wholesale trader’s daughter’ and \textstyleLinguisticExample{maoro klening} ‘Mother’s dress’ together with juxtapositional cases such as \textstyleLinguisticExample{pappen råck} ‘Father’s coat’ and \textstyleLinguisticExample{mammen tjaol} ‘Mother’s skirt’.
For \textit{Pitemål}, \citet[11]{Brännström1993} mentions the postposed construction as “obsolete” (ålderdomligt) and gives the example \textstyleLinguisticExample{påtjen fàrom} ‘Father’s boy’.
Moving south to northern Westrobothnian, we have already seen one case of the possessive dative from \textit{Skelletmål} as described by \citet[22]{Marklund1976}, who also gives the following examples: \textstyleLinguisticExample{löNa pi{\textasciigrave}gen} ‘the maid’s pay’, \textstyleLinguisticExample{rissla græ{\textasciigrave}nnåm} ‘the neighbour’s sleigh’, \textstyleLinguisticExample{kæppa n’Greta} ‘Greta’s coat’, \textstyleLinguisticExample{löngNeN n’Lova} ‘Lova’s lies’, \textstyleLinguisticExample{hästn åm Jâni }‘Johan’s horse’.
In his discussion of the Lövånger (NVb) vernacular, \citet[208]{Holm1942} says that “there are a great number of other possibilities” than the \textit{s}{}-genitive of the standard language (which he says is not possible in the vernacular), and gives as an example juxtaposition with the order possessee–possessor, as in \textstyleLinguisticExample{rävapälsen pastor Holm} ‘the Reverend Holm’s fox fur coat’.
\citet[125]{Larsson1929} reports postposed possessives both with and without dative marking from Westrobothnian, without indicating any specific geographical locations. About the juxtapositional construction, he says that it is “very common” and gives examples such as the following:
\begin{table}
\begin{tabular}{ll}
\lsptoprule Westrobothnian & English \\
\midrule
\textit{skon pötjen} & ‘the boy’s shoes’\\
\textit{nesdutjen stinta} & ‘the girl’s handkerchief’\\
\textit{lönja piga} & ‘the maid’s pay’\\
\textit{tjettn fara} & ‘the sheep’s pen’\\
\textit{legden Jonson} & ‘Jonsson’s former fields’\\
\textit{bökjsen n Nikkje} & ‘Nicke’s trousers’\\
\textit{strompen a Greta} & ‘Greta’s stockings’\\
\lspbottomrule
\end{tabular}
\caption{Juxtapositional possessive constructions in Westrobothnian according to \citet[125]{Larsson1929}.}
\label{tab:5.1}
\end{table}
For the last two examples, he gives the alternatives \textstyleLinguisticExample{bökjsen hanjs Nikkje} and \textstyleLinguisticExample{strompen hanasj Greta}, both of which should more properly be treated as \textit{h}{}-genitives (see \sectref{sec:5.5}).
For the dative-marked construction, he gives the following examples: \textstyleLinguisticExample{boka prestum} ‘the clergyman’s book’, \textstyleLinguisticExample{lönja pigen} ‘the maid’s pay’,\textstyleLinguisticExample{ löngnen n kesa} ‘Kajsa’s lies’.
With the reservation that Larsson does not specify the location of his examples, it appears that no attestations of the dative construction are found in southern Westrobothnian, which is perhaps not so astonishing, given that the dative has more or less disappeared there. In order to find further examples of the plain dative construction, we have to move about 700 kilometers south to the Ovansiljan area, where \citet[97]{Levander1909} gives this construction as the normal way of expressing nominal possession in Elfdalian:\footnote{ “Genitivbegreppet uttrycks vanligen genom postponerad dativ”}
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\gll fjosbuðę sturmasum\\
stable-shed.{\deff} Stormas.{\deff}-{\pl}\\
\glt ‘the shed of the Stormas people’
\z
As a modern example, we may cite the following:
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\gll Ulov add taið \textbf{pennskrineð} \textbf{kullun}.\\
Ulov have.PRET take.{\supp} \textbf{pen\_box.{\deff}} \textbf{girl:{\dat}.{\sg}.{\deff}}\\
\glt ‘Ulov had taken the girl’s pen case.’ (\citealt[120]{Åkerberg2012})
\z
According to \citet[112]{Levander1928}, the plain dative possessive construction is (or was) found in many places in the Dalecarlian area. Outside Älvdalen, he quotes the following examples:
\ea%\label{}
\langinfo{Boda (Ns)}{}{}\\
\gll sk\k{u}ônną Ierrka\\
shoe.{\pl} Erik.{\dat}\\
\glt ‘Erik’s shoes’
\z
\ea%\label{}
\langinfo{Sollerön (Os)}{}{}\\
\gll g\={ä}rdi S\={å}rim\\
work.{\pl} Zorn\\
\glt ‘[the painter Anders] Zorn’s works’
\z
\ea%\label{}
\langinfo{Transtrand (Vd)}{}{}\\
\gll hätta dränndjan\\
cap.{\deff} farm-hand.{\dat}\\
\glt ‘the farm-hand’s cap’
\z
For Sollerön, \citet[357]{AnderssonEtAl1999} mention the plain dative possessive as “a nice old locution”,\footnote{ “en gammal och fin ordvändning”} with examples such as the following:
\ea%\label{}
\langinfo{Sollerön (Os)}{}{}\\
\gll katto Margit\\
cat.{\deff} Margit\\
\glt ‘Margit’s cat’
\z
\ea%\label{}
\langinfo{Sollerön (Os)}{}{}\\
\gll biln prässtim\\
car.{\deff} priest.{\deff}.{\dat}\\
\glt “the priest’s car”
\z
In the Cat Corpus, we find the following examples without overt case-marking:
\ea
\ea
\langinfo{Mora (Os)}{}{}\\
\gl \textit{sendjen Mårmår}\\
\glt ‘Granny’s bed’
\ex
\langinfo{Mora (Os)}{}{}\\
\gl \textit{kelindje Alfred}\\
\glt ‘Alfred’s wife’
\ex
\langinfo{Sollerön (Os)}{}{}\\
\gl \textit{kelindji Alfred}\\
\glt ‘Alfred’s wife’
\z
\z
From these data, it appears that the plain dative construction is or has been possible over the whole dative-marking part of the Dalecarlian area.
Summing up the geographical distribution, we find two areas where dative marking of possessors is employed: Norrbotten and northern Västerbotten, and the Dalecarlian area. A possible difference is that the examples from the northern area tend to involve common nouns whereas proper names also show up fairly frequently in the Dalecarlian examples.
It may seem a little unexpected to find the dative as a marker of adnominal possession, but there is a relatively plausible diachronic source for it, namely what has been called “external possession” or “possessor raising constructions”. This is a very widespread but by no means universal type of construction in which the possessor of a referent of a noun phrase in a sentence is expressed by a separate noun phrase, marked by an oblique case or a preposition. (English is an example of a language that has no external possessor construction, where adnominal possessors have to be used instead.) The prototypical cases of external possessor constructions involve relational nouns, above all body-part nouns (which are sometimes incorporated into the verb).
In many Indo-European languages, the possessor NP is dative-marked, as in \REF{271}(a), which is more or less synonymous to \REF{271}(b), where the possessor is expressed by an adnominal genitive:
\ea%\label{}
\langinfo{\label{bkm:Ref95294441}German}{}{}\\
\ea
\gll Peter wusch \textbf{seinem} \textbf{Sohn} \textbf{die} \textbf{Füße}.\\
Peter wash.{\pst} \textbf{his.{\dat}.{\m}.{\sg}} \textbf{son} \textbf{{\deff}.{\acc}.{\pl}} \textbf{foot.{\acc}.{\pl}}\\
\ex
\gll Peter wusch \textbf{die} \textbf{Füße} \textbf{seines} \textbf{Sohns}.\\
Peter wash.{\pst} \textbf{{\deff}.{\acc}.{\pl}} \textbf{foot.{\acc}.{\pl}} \textbf{his.{\gen}.{\m}.{\sg}} \textbf{son}\\
\glt ‘Peter washed his son’s feet.’
\z
\z
In the older stages of Scandinavian, dative-marked external possessors were also possible. The following example is quoted from the Västgöta provincial law (\citealt[15]{Wessén1956}, \citealt[212]{Norde1997}):
\ea%\label{}
\langinfo{Early Written Medieval Swedish}{}{}\\
\gll Skiær tungu ör \textbf{höfþi} \textbf{manni…}\\
cut.{\prs} tongue.{\acc} out\_of \textbf{head.{\dat}} \textbf{man.{\dat}}\\
\glt ‘If one cuts the tongue out of a man’s head…’ [S2]
\z
In many Scandinavian varieties, the dative-marked external possessor construction disappeared together with the dative case in general. As a replacement, a periphrastic construction, where the external possessor phrase is marked by the preposition \textstyleLinguisticExample{på} ‘on’, is used in Central Scandinavian including many vernaculars, as in the following example from the Cat Corpus:
\ea%\label{}
\langinfo{Grytnäs (Be) }{}{}\\
\gll Sen huppa han åpp i \textbf{knäna} \textbf{på} \textbf{na}.\\
then jump.{\pst} he up in \textbf{knee.{\deff}.{\pl}} \textbf{on} \textbf{she.{\obl}}\\
\glt ‘Then he jumped onto her lap.’ (Cat Corpus)
\z
As we shall see later, however, in the Peripheral Swedish area, it is more common for another preposition – a cognate of Swedish \textstyleLinguisticExample{åt} and English \textstyleLinguisticExample{at} – to be used in this way.
There are a few examples from early Scandinavian which seem more like adnominal possessors. Thus:
\ea%\label{}
\ea
\langinfo{Runic Swedish}{}{}\\
\gll stuþ trikila i \textbf{stafn} \textbf{skibi}\\
stand.{\pst} manly in \textbf{stem.{\dat}} \textbf{ship.{\dat}}\\
\glt ‘He stood manly at the stem of the ship.’ [S35]
\ex
\langinfo{Early Written Medieval Swedish}{}{}\\
\gll Dræpær maþer man, varþær han siþen dræpin a \textbf{fotum} \textbf{hanum}\\
kill.{\prs} man.{\nom} man.{\acc} become.{\prs} he then kill.{\pp} at \textbf{foot.{\dat}.{\pl}} \textbf{he.{\dat}}\\
\glt ‘If a man kills a man, and is then killed at his [that man’s] feet.’ [S2]
\z
\z
(\citealt[15]{Wessén1956}, \citealt[211]{Norde1997})
\citet[212]{Norde1997} cites \textstyleLinguisticExample{hanum }in (\ref{274}b) as a clear example of an adnominal possessor. Her criterion is the role of the referent of the dative phrase: “the dead man at whose feet the man who murdered him is killed himself, can hardly be seen as beneficiary of this killing; in this example the dative \textstyleLinguisticExample{hanum} strictly belongs to \textstyleLinguisticExample{fotum}, not to the whole clause”. I do not find this argument wholly convincing, but given their borderline character, examples like (b) could act as a basis for the reinterpretation of external possessor NPs as adnominal possessors. There is little evidence that the process really got off the ground in Written Medieval Swedish.
For Medieval Norwegian, \citet{Larsen1895} claims that the dative tended to be confused with the genitive (which was at the time disappearing) and quotes examples such as \textstyleLinguisticExample{Kiæxstadom vældi} ‘the property of the Kekstad manor’. It is difficult to say how common this phenomenon was, and standard histories of Norwegian such as \citet{SaltveitEtAl1971} do not mention it. To me, it looks more like occasional confusion than a systematic usage – the examples cited by Larsen often seem to have occurred in contexts which would tend to induce the dative (such as following a preposition governing the dative). In any case, there seem to be no traces of the plain possessive dative in Modern Norwegian varieties. On the other hand, it is far from excluded that confusion of this kind may have contributed to the rise of the dative possessive constructions also in Swedish vernaculars. (Some of Larsen’s examples look more like the complex dative possessive, see below.)
\begin{figure}[h]
\includegraphics[height=.5\textheight]{figures/22_Attestationsoftheplain}
\caption{Attestations of the plain dative possessive construction.}
\label{map:18}
\end{figure}
\begin{figure}[h]
\includegraphics[height=.5\textheight]{figures/23_Attestationsofthecomplex}
\caption{Attestations of the complex dative possessive construction.}
\label{map:19}
\end{figure}
\subsection{The complex dative possessive}
\label{sec:5.4.2}
The dative-marking constructions that we have spoken of so far involve a straightforward combination of a possessee noun with a dative-marked possessor. Another possibility, which I shall refer to as \textbf{the complex dative possessive,} is productive only in Dalecarlian, notably in Elfdalian. The construction I am referring to is superficially quite similar to the Swedish \textstyleLinguisticExample{s}{}-genitive, and is also treated as a kind of genitive construction by \citet{Levander1909}. Let us thus look at his treatment of the genitive in Elfdalian.
According to Levander, all four traditional Germanic cases (nominative, genitive, dative and accusative) are found in Elfdalian. However, Levander himself notes that the genitive is fairly rare, especially in the indefinite, where it is basically restricted to two kinds of lexicalized expressions, viz.
\begin{itemize}
\item after the preposition \textit{et} ‘to’, in expressions such as \textit{et bys} ‘to the village’, \textit{et messer} ‘to the mass’, \textit{et buðer} ‘to the shielings’
\item after the preposition \textit{i} ‘in’, in expressions of time such as \textit{i wittres} ‘last winter’, \textit{i kwelds }‘yesterday evening’
\end{itemize}
In these uses, the genitive preserves the original endings (\textit{\nobreakdash-s }in masculine and neuter singular; \nobreakdash-\textit{er} in feminine singular and generally in the plural). This is not the case for the definite forms. Consider the following example (\citealt[96]{Levander1909}):
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\gll Ita jar ir \textbf{kullum-es} \textbf{saing}.\\
this here be.{\prs} \textbf{girl.{\deff}.{\pl}.{\dat}-{\poss}} \textbf{bed}\\
\glt ‘This is the girls’ bed.’
\z
We would expect to find here something like \textstyleLinguisticExample{*kuller }but instead\textstyleLinguisticExample{ }we have something that looks like the dative plural form \textstyleLinguisticExample{kullum} followed by an ending \nobreakdash-\textstyleLinguisticExample{es}. This kind of formation is in fact perfectly general. Thus, we get examples such as \textstyleLinguisticExample{smiðimes} ‘the black-smith’s’, where \nobreakdash-\textstyleLinguisticExample{es }is added to the dative singular definite form \textstyleLinguisticExample{smiðim} of \textstyleLinguisticExample{smið} ‘black-smith’. Further examples:
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\ea
\gll An-dar skuägen ir \textbf{bym-es}.\\
that forest.{\deff} be.{\prs} \textbf{village.{\deff}.{\dat}-{\poss}}\\
\glt ‘This forest belongs to the village.’
\ex
\gll Isn-jär byggnan ir \textbf{sån-es}.\\
this building.{\deff} be.{\prs} \textbf{saw-mill.{\deff}.{\dat}-{\poss}}\\
\glt ‘This building belongs to the saw-mill.’
\z
\z
Moreover, as Levander notes, the \nobreakdash-\textstyleLinguisticExample{es} ending may be added to the last word in a complex noun phrase, in which case the possessor noun will still be in the dative:
\ea
\langinfo{Älvdalen (Os)}{}{}\\
\ea {
\gll Ann\footnotemark{} upp i budum-es etta \\
Anna.{\dat} up in shieling.{\dat}.{\pl}-{\poss} hood\\
\glt ‘Anna-at-the-shieling’s hood’
}
\ex {
\gll An bar \textbf{pridikantem} \textbf{jär} \textbf{upp-es} an. \\
he carry.PRET \textbf{preacher.{\deff}.{\dat}.{\sg}} \textbf{here} \textbf{up-{\poss}} he\\
\glt ‘He carried the preacher up here’s [stuff], he did.’
}
\z
\z
\footnotetext{The dative form of \textit{Anna} is given by Levander as \textit{Anno} but the final vowel is elided here due to the morphophonological process known as apocope (see further in the main text).}
(In (b), the possessive noun phrase is headless, i.e. the possessee is implicit.)
Indeed, if the possessor is expressed by a noun phrase determined by a possessive pronoun,\textit{ {}-}\textstyleLinguisticExample{es} is added directly to that noun phrase, with the possessive pronoun in the dative case:
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\ea {
\gll Is\k{u} jär lodǫ ar stendeð ǫ \textbf{mainum} \textbf{faðer-es } \textbf{garde.} \\
this.{\f}.{\sg}.{\nom} here barn.{\deff}.{\nom}.{\sg} have.{\prs}.{\sg} stand.{\supp} on \textbf{my.{\m}.{\sg}.{\dat}} \textbf{father-{\poss}} \textbf{farm.{\dat}.{\sg}} \\
\glt ‘This barn has stood on my father’s farm.’
}
\ex {
\gll Eð war \textbf{uorum} \textbf{fafar-es} \textbf{fafar} so byggd dǫ dar tjyälbuðę.\footnotemark{}\\
it be.PRET.{\sg} \textbf{our.{\m}.{\dat}.{\sg}} \textbf{father’s\_father-{\poss}} \textbf{father’s\_father} who build.PRET that.{\f}.{\sg}.{\acc} there shelter.{\deff}.{\acc}.{\sg}\\
\glt ‘It was our great-great-grandfather who built that shelter.’
}
\ex {
\gll Eð war \textbf{dainum} \textbf{kall-es} \textbf{mumun}.\\
it be.PRET.{\sg} \textbf{your.{\m}.{\dat}.{\sg}} \textbf{husband-{\poss}} \textbf{mother’s\_mother}\\
\glt ‘It was your husband’s maternal grandmother.’
}
\z
\z
\footnotetext{This word, which translates into regional Swedish as\textit{ (myr)slogbod}, denotes a structure somewhat similar to a bus stop shelter used during activities in remote places such as hunting, fishing and hay-harvesting.}
The marker\textit{ {}-}\textstyleLinguisticExample{es} can also be added to headless adjectives with a definite suffix (see \sectref{sec:4.7}) and some pronouns:
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\ea {
\gll Oðrą ir \textbf{ljuätam-es}.\\
other.{\deff}.{\f}.{\sg} be.{\prs} \textbf{evil.{\deff}.{\m}.{\dat}-{\poss}}\\
\glt ‘The other one belongs to the Evil One.’
}
\ex {
\gll Ermkläd ir \textbf{dumbun-es}.\\
scarf.{\deff} be.{\prs} \textbf{dumb.{\deff}.{\f}.{\dat}-{\poss}}\\
\glt ‘The scarf belongs to the deaf-and-dumb woman.’\footnotemark{}
}
\ex {
\gll Eð ir \textbf{ingumdier-es} \textbf{stjäl} min. \\
it be.{\prs} \textbf{neither.{\dat}.{\m}.{\sg}-{\poss}} \textbf{reason} with \\
\glt ‘There is no reason for either one.’
}
\z
\z
\footnotetext{The feminine ending of the adjective indicates that the referent is a woman.}
It seems that there is a recent increase in the frequency of the \nobreakdash-\textstyleLinguisticExample{es} construction in modern Elfdalian, which is most probably due to it being seen as the closest equivalent of the Swedish \textstyleLinguisticExample{s}{}-genitive. An interesting phenomenon in this connection is the tendency for native speakers to make \textstyleLinguisticExample{es} a separate word in written Elfdalian (or sometimes hyphenated, as in \textstyleLinguisticExample{bil-es stor} ‘uncle’s walking-stick’). Perhaps most strikingly, \textstyleLinguisticExample{es} is even used after a preceding vowel, although, due to extensive apocope, hiatus is not a common phenomenon in Elfdalian. Consider a proper name such as \textstyleLinguisticExample{Anna}, for which Levander gives the dative form \textstyleLinguisticExample{Anno} and the “genitive” \textstyleLinguisticExample{Annes}, the latter being the logical outcome of apocopating the dative form before {}-\textstyleLinguisticExample{es}. In modern Elfdalian, however, proper names in\textit{ {}-a}\textstyleLinguisticExample{ }are normally treated as undeclinable and are shielded against apocope. Thus ‘Anna’s book’ comes out as \textstyleLinguisticExample{Anna es buäk}.
The tendencies mentioned in the previous paragraph come out very clearly in one of the few longer texts written in Elfdalian, [S21], where the complex dative construction is the most frequent way of expressing nominal possession, and \textstyleLinguisticExample{es} is fairly consistently written separately. There are several examples where the preceding noun ends in a vowel, such as \textstyleLinguisticExample{Kung Gösta es dågå} ‘King Gösta’s days’ and \textstyleLinguisticExample{Sparre es klauter} ‘Sparre’s clothes’. Whereas proper names are generally not case-marked, most definite possessor nouns are in the dative, but there are also examples of a nominative possessor preceding \textstyleLinguisticExample{es}. ([S21] is on the whole heavily influenced by Swedish – there are also a fair number of literal transfers of \textstyleLinguisticExample{s}{}-genitives, such as \textstyleLinguisticExample{Luthers katitsies} ‘Luther’s catechism’.) Compare \REF{280}, where the nominative form \textstyleLinguisticExample{prestsaida} ‘the clergy side’ is used rather than the dative \textstyleLinguisticExample{prestsaidun}:
\ea%\label{}
\langinfo{\label{bkm:Ref135470135}Älvdalen (Os)}{}{}\\
\gll Nu war ed \textbf{prestsaida} \textbf{es} \textbf{tur} at tytts at muotstonderer språked um nǫd eller eld ed dier uld tag stellning ad ǫ stemmun.\\
now be.{\pst} it \textbf{clergy-side.{\deff}} \textbf{{\poss}} \textbf{turn} {\infm} think.{\inf} that adversary.{\deff}.{\pl} speak.{\pst} about something other than it they shall.{\pst} take.{\inf} position to on meeting.{\deff}.{\dat}\\
\glt ‘Now it was the turn of the clergy side to think that the adversaries were talking about something other than what should be decided at the meeting.’
\z
The construction \textstyleLinguisticExample{eð ir NP es tur at V-inf} ‘it is NP’s turn to V’ is calqued quite directly on the corresponding Swedish construction \textstyleLinguisticExample{det är NPs tur att V-inf}, but seems to have been firmly entrenched in Elfdalian for quite some time. Compare the following example from a speaker born in the 1850’s:
\ea%\label{}
\langinfo{\label{bkm:Ref135470154}Älvdalen (Os)}{}{}\\
\gll …å se vart ed \textbf{bumuą̈r} \textbf{es} \textbf{tur}\\
and then become.{\pst} it \textbf{shieling hostess} \textbf{{\poss}} \textbf{turn}\\
\gll tä tag riäd o mjotsin\\
to take.{\inf} care about milk.{\deff}.{\dat}\\
\gll da gesslkallär ad fer ad raisą̈.\\
when herder\_boy.{\pl} have.{\pst} go.{\supp} to forest.{\deff}.{\dat}\\
\glt ‘…and then it was the shieling hostess’s turn to take care of the milk when the herder boys had gone to the woods.’ [S16]
\z
Here, however, the noun form \textstyleLinguisticExample{bumuą̈r} ‘shieling hostess’ is ambiguous between nominative and dative. Notice also that whereas the infinitive marker in \REF{280} is the Swedish-inspired \textstyleLinguisticExample{at}, \REF{281} has the more genuine Elfdalian \textstyleLinguisticExample{tä} (see \sectref{sec:6.4.1}).
Much of what has been said about the Elfdalian construction carries over to other Ovansiljan varieties. According to \citet[170]{Levander1928}, “definite genitive forms” formed by adding a suffix to the definite dative singular are found in most Dalecarlian varieties where the dative is preserved. In Ovansiljan (except Orsa) and Nedansiljan, the suffix is\textit{ {}-s} preceded by some vowel whose quality varies between \textstyleLinguisticExample{e, å, ä, a,} and \textstyleLinguisticExample{ô}. In Västerdalarna and Orsa, the suffix is simply\textstyleLinguisticExample{ {}-s}, except in Äppelbo, where it is\textstyleLinguisticExample{ {}-säs}. Examples can also be found in modern texts. Consider the following example from Mora:
\ea%\label{}
\langinfo{Östnor, Mora (Os)}{}{}\\
\gll Welsignarn e an\\
blessed be.{\prs} he\\
\gll så kum i \textbf{Ärram-ås} \textbf{nammen!}\\
who come.{\prs} in \textbf{Lord.{\deff}.{\dat}-{\poss}} \textbf{name}\\
\glt ‘Blessed is he who comes in the name of the \href{http://www.godrules.net/library/topics/topic1192.htm}{Lord}!’ (Matthew 21:9) [S20]
\z
In \textstyleLinguisticExample{Ärram-ås,} the suffix\textit{ {}-å}\textstyleLinguisticExample{s} has been added to the definite dative form \textstyleLinguisticExample{Ärram. }In texts from other villages, however,\textit{ {}-å}\textstyleLinguisticExample{s} is also sometimes added to the nominative:
\ea%\label{}
\langinfo{Utmeland, Mora (Os)}{}{}\\
\gll Då stod \textbf{Ärran-ås} \textbf{angel} framåmin dem…\\
then stand.{\pst} \textbf{Lord.{\deff}-{\poss}} \textbf{angel} in\_front\_of they.{\obl}\\
\glt ‘And then the angel of the Lord stood before them…’ (Luke 2:9) [S20]
\z
In the following example, the suffix is added to a postposed possessive pronoun:
\ea%\label{}
\langinfo{\textit{Önamål} (Hökberg, Mora, Os)}{}{}\\
\gll Wennfe si du twårpär i \textbf{bror} \textbf{denås} \textbf{öga…}\\
why see.{\prs} you speck.{\pl} in \textbf{brother} \textbf{your-{\poss}} \textbf{eye}\\
\glt ‘And why do you see specks in your brother’s eye…’ (Matt. 7:3) [S20]
\z
In other village varieties in Mora, the possessive pronoun is preposed and we get \textstyleLinguisticExample{den brorås.}
In Sollerön, according to \citet[357]{AnderssonEtAl1999}, the suffix\textit{ {}-a}\textstyleLinguisticExample{s} is added to the dative, or in modern varieties of the vernacular, to the nominative: \textstyleLinguisticExample{donda kallimas kelingg} or \textstyleLinguisticExample{donda kallnas kelingg} ‘that man’s wife’. Proper names in\textit{ {}-a} such as \textstyleLinguisticExample{Anna} have genitive forms such as \textstyleLinguisticExample{Annonas }(but in a questionnaire from Sollerön \textstyleLinguisticExample{Annaas} is given as an alternative).
There is also some sporadic evidence of similar constructions outside of Dalecarlian. Thus, \citet[124]{Larsson1929} quotes an unpublished description of the vernacular of Byske (NVb), Lundberg (n.y.), as mentioning “a genitive with an \textstyleLinguisticExample{s} added to the dative form, in the same way as in Dalecarlian”, e.g. \textstyleLinguisticExample{pajkoms} ‘the boy’s, the boys’’, \textstyleLinguisticExample{sanoms} ‘the son’s’, \textstyleLinguisticExample{sönjoms} ‘the sons’’, \textstyleLinguisticExample{kooms} ‘the cow’s, the cows’’, but claims that no such form has been attested by later researchers (including himself). However, Larsson adds that, when questioned directly, informants confirm that \textstyleLinguisticExample{s} can be added to the dative of masculines “in independent position”, e.g. \textstyleLinguisticExample{he jer gobboms} ‘it is the old man’s’.
\citet[126]{Hellbom1961} quotes Larsson and says that “similar constructions seem to have existed also in Medelpad, above all when a preposition precedes the genitive”.\footnote{ “Likartade bildningar ser ut att ha förekommit även i Medelpad, främst då när en prep. föregått genitiven.”} Medelpad is otherwise an area where the dative had already virtually disappeared at the end of the 19\textsuperscript{th} century. Hellbom’s first example is from Njurunda, his own native parish. The text, however, was already written down in 1874:
\ea%\label{}
\langinfo{\label{bkm:Ref126396484}Njurunda (Md)}{}{}\\
\gll Hæ var en tå \textbf{ryssôm-s} \textbf{vaktknekter}\\
it be.{\pst} one of Russian.{\pl}.{\dat}-{\poss} sentinel.{\pl}\\
\gll sôm hadde sômne åv å låg å snarke.\\
who have.{\pst} fall\_asleep.{\supp} off and lie.{\pst} and snore.{\pst}\\
\glt ‘It was one of the Russians’ sentinels who had fallen asleep and lay snoring.’ [S38]
\z
Here, there is indeed a dative-governing preposition before the possessive construction. If this were an isolated example, we would probably interpret the form \textstyleLinguisticExample{ryssôms} as resulting from a confusion of two syntactic structures. (\citet[38]{Delsing2003a} mentions \REF{285} as an example of a “group genitive”, which, however, presupposes the less likely interpretation ‘a sentinel of one of the Russians’ rather than ‘one of the Russians’ sentinels’.)
Hellbom (ibid.) quotes an unpublished note by Karl-Hampus Dahlstedt to the effect that some people in the parish of Indal in the province of Medelpad used the form \textstyleLinguisticExample{bånôms} in the genitive plural of \textit{bån} ‘child’. He also enumerates a few examples of forms where the genitive\textstyleLinguisticExample{ {}-s} is added to what looks like an oblique form of a weak noun, which at older stages of the language was ambiguous between genitive, dative, and accusative: \textstyleLinguisticExample{fårsjinnpälsa gubbas} ‘the old man’s sheep fur coat’; \textstyleLinguisticExample{gu{\textasciigrave}bbass bökksan} ‘the old man’s trousers’; \textstyleLinguisticExample{ti gu{\textasciigrave}bbass kammarn} ‘to the old man’s chamber’. His final example, however, is somewhat more spectacular,\footnote{ “Slutligen ett mera tillspetsat belägg från Stöde 1877”} in that it appears to exemplify the addition of the genitive\textstyleLinguisticExample{ {}-s} as a phrasal clitic to an NP in the dative.
\ea%\label{}
\langinfo{Stöde (Md)}{}{}\\
\gll in par jänter … som fôǠDǠDe mä\\
{\indf} couple girl.{\pl}.{\dat} {} who follow.{\pst} with\\
\gll på \textbf{fara} \textbf{senne-}\textbf{\textit{ }}\textbf{\textit{\nobreakdash-s}} \textbf{joLsättning} \\
on \textbf{father.{\dat}} \textbf{their.{\refl}.{\dat}} \textbf{{\poss}} \textbf{funeral} \\
\glt ‘…a couple of girls … who took part in their father’s funeral.’ [S15]
\z
Genitive forms where the\textstyleLinguisticExample{ {}-s} is added to an oblique form of a weak noun are quite common in Medieval Scandinavian. (Recall that weak masculine nouns had a single form for genitive, dative and accusative in the singular.) A form such as \textstyleLinguisticExample{bondans} ‘the farmer’s’ is actually fairly straightforwardly derivable from something like \textstyleLinguisticExample{bonda hins}. In other forms, we have to assume an extension by analogy of this formation, as in \textstyleLinguisticExample{kirkionnes} ‘of the church’ instead of the older \textstyleLinguisticExample{kirkionnar} (Wessén (1968: I:143)). The Medelpadian \textstyleLinguisticExample{gubbas} could be interpreted in the same way, although it might perhaps also be derivable from an older \textstyleLinguisticExample{gubbans}. A genitive ending\textit{ {}-a}\textstyleLinguisticExample{s} is in fact found in various vernaculars. In Vätö (Uppland), as described by \citet{Schagerström1882}, weak stem proper names take the endings\textit{ {}-a}\textstyleLinguisticExample{s} (masc.) and \nobreakdash-\textstyleLinguisticExample{ôs} (fem.). In Ostrobothnian,\textit{ {}-a}\textstyleLinguisticExample{s} as a genitive ending can be added to the definite form of masculine common nouns, such as \textstyleLinguisticExample{rävinas} ‘of the fox’ and \textstyleLinguisticExample{varjinas} ‘of the wolf’. This is a more radical extension than what we find in Vätö, since in these forms there is no historical motivation for the \textstyleLinguisticExample{a} vowel. In these cases, on the other hand, there is no connection to the dative case, which has been wholly lost in Ostrobothnian. However, there is an intriguing parallel to the Dalecarlian construction. \citet[43]{ErikssonEtAl1999} found a variation among their Ostrobothnian informants between\textstyleLinguisticExample{ {}-s} and\textit{ {}-a}\textstyleLinguisticExample{s} as a genitive or possessive marker, with a possible concentration of\textit{ {}-a}\textstyleLinguisticExample{s} in the southern part of the province. The general pattern was for the\textit{ {}-a}\textstyleLinguisticExample{s} marker: possessor noun +\textit{ {}-a}\textstyleLinguisticExample{s} + possessee + definite suffix. In two of the examples in the questionnaire, the possessor noun was the proper name \textstyleLinguisticExample{Anna. }Here, “the informants felt forced to mark an orthographic boundary”, yielding spellings such as \textstyleLinguisticExample{Anna’as haanden} ‘Anna’s hand’ and \textstyleLinguisticExample{Anna as gamlest systren }‘Anna’s eldest sister’, which closely parallel the Elfdalian forms quoted above (except for the definite form of the head noun).
In his discussion of the confusion between the dative and the genitive in Medieval Norwegian, \citet{Larsen1895} mentions a few examples which look like complex dative possessives, for instance in this document from Rendalen in 1546:
\ea%\label{}
\langinfo{Rendalen (Hedmark, Norway, 16\textsuperscript{th} century)}{}{}\\
\gll …med … theyriss \textbf{bondomss} Karls Jonsszon\\
…with {} their \textbf{husband.{\dat}.{\pl}-{\poss}} Karl.{\gen} Jonsszon\\
\gll Engilbrictz Asmarsszon oc Trondz Eyriksszon\\
Engilbrict.{\gen} Asmarsszon and Trond.{\gen} Eyriksszon\\
\gll oc theyriss \textbf{barnomss} godom vilie…\\
and their \textbf{child.{\dat}.{\pl}-{\poss}} good.{\dat}.{\sg}.{\m} will…\\
\glt ‘…with the good will of their husbands Karl Jonsson, Engelbrikt Asmarsson and Trond Eyriksson and their children…’
\z
In addition, he mentions that in the Norwegian Solør vernaculars where the dative is still preserved, the construction \textstyleLinguisticExample{for NP’s skull }‘for NP’s sake’ commonly employs genitives formed from the dative, as in \textstyleLinguisticExample{for gutas (jintns, bånis, ongoms) skull} ‘for the boy’s (girl’s, child’s, kids’) sake’.
Returning to the complex dative possessive in Elfdalian, we can see that it has a number of specific properties: (i) there is a general syllabic marker \textstyleLinguisticExample{(-)es}; (ii) the marker is combined with a dative form of the possessor; (iii) the marker has the character of a clitic added to a full noun phrase rather than an affix added to a noun. The last point is supported by the following facts: (a) modifiers of the possessor NP are in the dative (at least in more conservative forms of the language); (b) the vowel of the marker is not elided after nouns ending in vowels; (c) the marker is placed on the last word of an NP rather than on the head noun; and (d) native speakers tend to write the marker as a separate word. In the Peripheral Swedish area outside Dalecarlian, we find sporadic examples of possessive constructions that share some of these properties but hardly any that have all of them. In fact, with respect to (iii) there are also parallels with the \textstyleLinguisticExample{s}{}-genitive of Central Scandinavian and English.
What can we say about the possible evolution of the complex dative construction?
The geographically quite dispersed although sporadic and rather heterogeneous manifestations suggest that the construction was more widespread earlier. It is likely that the general demise of the dative has made it either disappear or be transformed. We may note that the examples from modern Elfdalian suggest that \textstyleLinguisticExample{(-)es }now tends to be added to a noun phrase that has no case-marking, and that is also the case for the Ostrobothnian examples. It is also possible that the tendency to treat \textstyleLinguisticExample{es} as a clitic with no influence on the form of the previous word is a relatively recent phenomenon in Elfdalian.
The most natural approach to the genesis of the complex dative construction would \textit{prima facie} be to try and explain it as a result of a development similar to that described for the \textstyleLinguisticExample{s}{}-genitive by e.g. \citet{Norde1997}, that is, by a “degrammaticalization” of the genitive \textstyleLinguisticExample{s-}ending of early Scandinavian. After the introduction of suffixed articles, the \textstyleLinguisticExample{s-}ending was found in indefinite masculine and neuter singular strong nouns and in all definite masculine and neuter singulars. Later on, it spread to other paradigms, and was then typically grafted on to the old genitive forms. If these were non-distinct or similar to the dative forms, it is possible that they were reanalyzed as such, which could have triggered a generalization of the pattern dative + \textstyleLinguisticExample{s-}ending. Such a hypothesis is not unproblematic, however. If we suppose that the source of the Elfdalian \textstyleLinguisticExample{uksa-m-es} ‘of the ox’ is a medieval Scandinavian form such as \textstyleLinguisticExample{oksa-ns} ‘ox.{\deff}.{\gen}.{\sg}’, we have to assume that the apparent dative form of the stem would trigger the choice of a dative definite suffix, and we also have to explain where the vowel in the suffix comes from.
One peculiar circumstance around the complex dative possessive is that its functional load was apparently rather small in the pre-modern vernaculars where it existed. We have seen that there are only very sporadic examples from the Norrlandic dialects, and even in Elfdalian around 1900 it was, according to \citet[98--99]{Levander1909}, “rare”,\footnote{ “Bestämd genitiv är likaledes sällsynt…”} the simple dative possessive being the preferred alternative. (On the other hand, this claim is in a way contradicted by the fact that Levander himself provides no fewer than 17 examples of the complex dative construction in his grammar.)
Why was it, then, kept in the language at all? One possible explanation is that the complex dative possessive had a specialized function. Something that speaks in favour of this is that a surprisingly large number of the examples quoted in the literature from older stages of the vernaculars display the possessive NP in predicate position. This goes for the only example that Larsson quotes as still acceptable to his informants from Byske in Västerbotten, and out of Levander’s 17 examples, 12 directly follow a copula. It is also striking that ten of these are headless – which parallels Larsson’s claim that the complex dative construction is allowable “in independent position”. We might thus hypothesize that the complex dative possessive developed as an alternative to the simple dative possessive primarily in predicate position and/or when used without a head noun.
If we look around in the Germanic world, the constructions discussed in this section are not without their parallels. Consider the following examples:
\ea%\label{}
\langinfo{\label{bkm:Ref126571184}Middle English (13\textsuperscript{th} century)}{}{}\\
\gll of Seth ðe was \textbf{Adam} \textbf{is} \textbf{sune}\\
of Seth who be.{\pst} \textbf{Adam} \textbf{{\poss}} \textbf{son}\\
\glt ‘of Seth, who was Adam’s son’ [S3]
\z
\ea%\label{}
\langinfo{\label{bkm:Ref151373829}Middle Dutch}{}{}\\
\gll Grote Kaerle sijn soon\\
Great.{\dat} Charles.{\dat} his son\\
\glt ‘great Charles’ son’
\z
\ea%\label{}
\langinfo{Dutch}{}{}\\
\gll Jan z’n boek\\
Jan {\poss} book\\
\glt ‘Jan’s book’
\z
\ea%\label{}
\langinfo{\label{bkm:Ref151373831}Afrikaans }{}{}\\
\gll Marie se boek\\
Mary {\poss} book\\
\glt ‘Marie’s book’ (\REF{289}{}-\REF{291} quoted from \citealt[56]{Norde1997})
\z
Following \citet{KoptjevskajaTamm2003}, we can call these constructions \textbf{linking pronoun possessives}. In the most elaborated type, exemplified here by Middle Dutch, they contain a possessive pronoun between the case-marked possessor noun phrase and the head noun. In the Middle Dutch example \REF{289}, the possessor noun is in the dative case,\footnote{ This analysis is questioned in \citet{Allen2008}.} as it is in the following Modern German title of a best-selling book on German grammar (\citealt{Sick2004}):
\ea%\label{}
\langinfo{\label{bkm:Ref126571195}German}{}{}\\
\gll Der Dativ ist \textbf{dem} \textbf{Genitiv} \textbf{sein} \textbf{Tod}\\
{\deff}.{\m}.{\sg}.{\nom} dative be.{\prs} \textbf{{\deff}.{\m}.{\sg}.{\dat}} \textbf{genitive} \textbf{{\poss}.{\m}.{\sg}.{\nom}} \textbf{death}\\
\glt ‘The dative is the death of the genitive.’
\z
However, genitive possessor nouns are also attested in Middle Dutch/Low German:
\ea%\label{}
\langinfo{Middle Dutch}{}{}\\
\gll alle des konincks sijn landen\\
all {\deff}.{\m}.{\gen} king.{\gen} his land.{\pl}\\
\glt ‘all the king’s lands’ (\citealt[58]{Norde1997})
\z
In Germanic varieties where the dative case is no longer alive, e.g. Middle English, Modern Dutch and Afrikaans, the possessor NP in linking pronoun possessives has no case marking (cf. \REF{288}{}-\REF{292}). In Afrikaans, we can also see that the linking morpheme \textstyleLinguisticExample{se} has been differentiated in form from the masculine possessive \textstyleLinguisticExample{sy} and has been generalized also to feminines (and plurals).
In Scandinavian languages, there are at least two types of linking pronoun constructions. One involves non-reflexive possessive pronouns and was apparently quite common in written Danish from the Late Middle Ages on (\citealt[61]{Knudsen1941}): \textstyleLinguisticExample{Graamand hans vrede }‘Graamand’s wrath’; \textstyleLinguisticExample{en enkkæ hennes søn} ‘a widow’s son’. This construction is now only marginally possible in bureaucratic style (Modern Norwegian: \textstyleLinguisticExample{Oasen Grillbar Dets Konkursbo} ‘the insolvent estate of the grill bar \textstyleLinguisticExample{Oasen}’ (Internet)). The construction was often used in cases where a group genitive might be expected in modern Central Scandinavian, such as \textstyleLinguisticExample{prestens i Midian hans queg} ‘the priest in Midian’s cattle’ (16\textsuperscript{th} century Bible translation). As the last example illustrates, the head noun of the possessor phrase could also be genitive-marked (according to Knudsen this was relatively uncommon, however). The construction still exists in Jutland, “in particular northern Jutish” (\citealt[62]{Knudsen1941}): \textstyleLinguisticExample{æ skrædder hans hus} ‘the tailor’s house’.
The second Scandinavian linking pronoun construction is found in Norwegian (at least originally predominantly in western and northern varieties) and involves reflexive linking pronouns:
\ea%\label{}
\langinfo{Norwegian}{}{}\\
\gll mannen sin hatt\\
man.{\deff} {\poss}.{\refl}.3{\sg} hat\\
\glt ‘the man’s hat’
\z
This construction is generally assumed to have arisen under German influence and is therefore traditionally called “garpegenitiv”, \textstyleLinguisticExample{garp} being a derogatory term for ‘German’.
Typological parallels to the Germanic linking pronoun possessives are found, for example, in Ossetian (Iranian; \citealt[669]{KoptjevskajaTamm2003}). One could also see them as the analytic analogue to possessive constructions in which a possessive affix on the head noun agrees with the possessor noun phrase, as in Hungarian:
\ea%\label{}
\langinfo{Hungarian}{}{}\\
\gll a szomszéd kert-je\\
{\deff} neighbour garden-3{\sg}.{\poss}\\
\glt ‘the neighbour’s garden’ \citep[648]{KoptjevskajaTamm2003}
\z
The Germanic linking pronoun possessive constructions are controversial, both with respect to their origin and their possible role in the history of the \textstyleLinguisticExample{s}{}-genitive. They could have originated, as claimed by some scholars, from a reanalysis of an indirect object construction (\citealt[638]{Behaghel1923}), such as:
\ea%\label{}
\langinfo{German}{}{}\\
\gll Er hat \textbf{meinem} \textbf{Vater} \textit{seinen} \textit{Hut} genommen.\\
he have.{\prs} \textbf{my.{\dat}.{\m}.{\sg}} \textbf{father} \textit{his.{\acc}.{\m}.{\sg}} \textit{hat} take.{\pp}\\
\glt ‘He has taken from my father his hat’, i.e. ‘he has taken my father’s hat’.
\z
Some Dutch scholars, quoted by \citet[58]{Norde1997}, have suggested that the linking pronoun is a “pleonastic addition”, added for clarity. For Middle English, a common view is that the linking pronoun \textstyleLinguisticExample{(h)is} is actually a reanalysis of the old genitive ending.
Independently of what the origin of the linker is, it may or may not have played a role in the development of the English \textstyleLinguisticExample{s}{}-genitive (\citealt{Janda1980}). The reanalysis of the \textstyleLinguisticExample{s-}ending as a pronoun could have facilitated the rise of group genitives, where the possessive marker was placed at the end of the noun phrase. \citet[91]{Norde1997} comments on this hypothesis as follows: “Even though this may seem to be a plausible scenario for English, it should be borne in mind that the emergence of the Swedish \textstyleLinguisticExample{s}{}-genitive was not mediated by RPP’s [linking pronoun constructions].” Her argument for this is that (i) there was no homonymy between\textit{ {}-s} and the possessive pronouns in Swedish, and (ii) “there are no indications that RPP-constructions were ever relevant in Swedish”. Although the latter claim is true of Standard Swedish, it is, as we have seen, not true of Scandinavian as a whole. In particular, it is not true of Danish, which has probably provided the model for the Swedish \textstyleLinguisticExample{s}{}-genitive. Nor is it necessarily true of the Peripheral Swedish varieties, where homonymy between a possessive and a genitive ending is far from excluded. In Elfdalian, there are two forms of the 3\textsuperscript{rd} person masculine singular possessive pronoun: \textstyleLinguisticExample{onumes} and \textstyleLinguisticExample{os}. The former is analogous to what we find with lexical possessors in the complex dative construction: it consists of the dative pronoun \textstyleLinguisticExample{onum} and the possessive marker \nobreakdash-\textstyleLinguisticExample{es}. The latter – \textstyleLinguisticExample{os} – has developed out of the old genitive form \textstyleLinguisticExample{hans} ‘his’. In other Ovansiljan varieties, the shorter forms of the possessive pronouns seem to have been replaced by the longer ones. However, as has already been mentioned, the quality of the vowel in the possessive marker is highly variable and at times must have been identical to what was found in the short possessive pronoun (when it still existed). This would give the Dalecarlian complex dative possessives the same make-up as the linking pronoun constructions in German and Middle Dutch.
As we have seen, the origin of the linking pronoun constructions in the West Germanic languages is disputed. Still, the documentation of the medieval stages of these languages is much better than that of the corresponding period of Dalecarlian and other Peripheral Swedish varieties. This fact makes it rather doubtful whether we shall ever be able to find out the details of the early history of the complex dative possessive in Scandinavian. It is not unlikely, however, that its origin involves more than one source – probably both re-interpreted oblique forms of nouns and linking pronoun constructions have played a role.
\section{“\textit{H}{}-genitive”}
\label{sec:5.5}
Following \citet{Delsing2003b}, I shall use the label ‘\textstyleLinguisticExample{h-}genitive’ for the pronominal periphrasis construction \textstyleLinguisticExample{huset hans Per} ‘(lit.) the-house his Per’. This construction is superficially somewhat similar to the linking pronoun constructions discussed in \sectref{sec:5.4.2}, and it may not be out of place to point to the major difference between them: although both involve pronouns in the middle, the order of the lexical parts is the opposite: the \textstyleLinguisticExample{h-}genitive has the structure possessee – pronoun – possessor, the linking pronoun constructions possessor – pronoun – possessee.
An account of the geographical distribution of the \textstyleLinguisticExample{h-}genitive in Norway, Sweden and Iceland is given in \citet[34]{Delsing2003a}. The Scandinavian \textstyleLinguisticExample{h-}genitive area can be conveniently divided into four zones, in which the construction has somewhat different properties:
\begin{enumerate}
\item Iceland
\item Norway (excluding a few areas in the south)
\item an inland zone in Sweden comprising parts of Jämtland and Medelpad, Härjedalen, Västerdalarna and probably also the formerly Norwegian parishes Särna and Idre, and parts of Värmland
\item a coastal zone in Sweden comprising the provinces of Västerbotten and Norrbotten (but excluding Lapland).
\end{enumerate}
It may be noted that the two Swedish zones are non-contiguous: there seem to be no examples of the construction in the intermediate area: eastern Jämtland, Ångermanland and southern Lapland.
The pronoun that precedes the possessor noun in \textstyleLinguisticExample{h-}genitive looks like a preproprial article, and the geographical distributions of these two phenomena are also very similar. However, as \citet[67]{Delsing2003b} notes, there are discrepancies: preproprial articles are used\textstyleLinguisticExample{ }in the area between the inland and the coastal \textit{h}{}-genitive zones, and there are certain parts of Norway (the inner parts of Agder and Western Telemark) where \textstyleLinguisticExample{h-}genitives occur without there being any preproprial articles. Furthermore, in the Northern Västerbotten dialect area, the \textstyleLinguisticExample{h-}genitive is also possible with common nouns such as \textstyleLinguisticExample{saitjen hansj hannlaråm} ‘the shop-owner’s sack’ (\textit{Skelletmål}, \citealt[23]{Marklund1976}). Here, the possessor noun is in the dative, a fact that I shall return to below. In addition, as noted in \citet{HolmbergEtAl2003}, there are also attested examples from the same area where a possessive pronoun and a preproprial article are combined. Thus, in the Cat Corpus we find the following:
\ea%\label{}
\langinfo{\textit{Skelletmål} (NVb)}{}{}\\
\gll Kattkalln begrifft\\
tomcat.{\deff} understand.PRET\\
\gll att händäna var \textbf{kelinga} \textbf{håns} \textbf{n} \textbf{Alfre}.\\
that that be.{\pst} \textbf{wife.{\deff}} \textbf{his} \textbf{{\pda}.{\m}} \textbf{Alfred}\\
\glt ‘Cat understood that that was Alfred’s wife.’ (Cat Corpus)
\z
%\begin{styleTextkrper}
\citet[51]{Dahlstedt1971} quotes several examples from Sara Lidman’s novel \textit{Tjärdalen}\textit{:}\footnote{ In Dahlstedt’s opinion, however, these examples represent “an unequivocal hyperdialectism without support in the spoken vernacular” (“en otvetydig dialektism utan stöd i det talade folkmålet”). This conclusion, which he bases on a term paper by a native speaker of Northern Westrobothnian, seems somewhat rash, given the quite numerous attestations of the construction in question. Also, “hyperdialectisms” do not seem to be characteristic of Sara Lidman’s work.\par }
%\end{styleTextkrper}
\ea
\ea {
\gll golvet hans n’ Jonas\\
floor his {\pda}.{\m} Jonas\\
\glt ‘Jonas’ floor’ [S23]
}
\ex {
\gll bokhyllan hans n’ Petrus\\
bookshelf.{\deff} his {\pda}.{\m} Petrus\\
\glt ‘Petrus’ bookshelf’ [S23]
}
\ex {
\gll tjärdalen hans n’ Nisj\\
tar\_pile his {\pda}.{\m} Nisj\\
\glt ‘Nils’s tar pile’ [S23]
}
\z
\z
Similar cases are also found in Norrbothnian and Southern Westrobothnian. Thus, for \textit{Pitemål}, \citet{Brännström1993} gives examples like the following as the major way of expressing possessive constructions:
\ea%\label{}
\langinfo{\textit{Pitemål }(Pm)}{}{}\\
\ea {
\gll båoka haNs en Erik \\
book.{\deff} his {\pda}.{\m} Erik \\
\glt ‘Erik’s book’
}
\ex {
\gll lärjunga haNs en Jesus\\
disciple.{\deff}.{\pl} his {\pda}.{\m} Jesus\\
\glt ‘the disciples of Jesus’
}
\z
\z
%\begin{styleTextkrper}
The Cat Corpus provides us with an example also from Southern Westrobothnian:
%\end{styleTextkrper}
\ea%\label{}
\langinfo{Sävar (SVb)}{}{}\\
\gll Kattgöbben försto, att hanna va \textbf{kälinga} \textbf{hansch} \textbf{‘n} \textbf{Allfre}.\\
tomcat.{\deff} understand.{\pst} that {\dem} be.{\pst} \textbf{wife.{\deff}} \textbf{his} \textbf{{\pda}.{\m}} \textbf{Alfred}\\
\glt ‘Cat understood that this was Alfred’s wife.’ (Cat Corpus)
\z
Apparently, in these varieties, the possessive pronoun can be combined with a complete noun phrase rather than with a bare proper name. It may be concluded that the analysis of the \textstyleLinguisticExample{h-}genitive as consisting of a head noun followed by a proper name with a preproprial article is not correct for Västerbotten. Delsing draws the conclusion that the preproprial article analysis of the \textstyleLinguisticExample{h-}genitive is generally inadequate and proposes that it instead involves an “ordinary possessive pronoun”, amenable to a unified analysis for all \textstyleLinguisticExample{h-}genitives within generative syntax. \citet{KoptjevskajaTamm2003} also questions the applicability of the preproprial article analysis, at least for some Norwegian and Swedish dialects where the pronouns showing up in the \textstyleLinguisticExample{h-}genitives “have become analytic construction markers”.
It seems relevant here that one of the competitors of the \textstyleLinguisticExample{h-}genitive in the coastal zone is the dative possessive construction (see \sectref{sec:5.4}). In many cases the two constructions will differ only in the form of the pronoun: cf. \textit{Skelletmål} examples in \citet{Marklund1976}: \textstyleLinguisticExample{kæppa n’Greta} ‘Greta’s coat’ (dative possessive) vs. \textstyleLinguisticExample{kLänninga hännasj} \textstyleLinguisticExample{Lina} ‘Lina’s dress’ (\textstyleLinguisticExample{h-}genitive), or Överkalix \textstyleLinguisticExample{sjåongma:Le henars/n/en Anna} ‘Anna’s voice’ (\citealt[153]{Källskog1992}). Also in this connection, notice examples like the following from \citet[125]{Larsson1929}, \textstyleLinguisticExample{bökjsen n Nikkje }‘Nicke’s trousers’ and \textstyleLinguisticExample{strompen a Greta} ‘Greta’s stockings’, where the pronouns are in the nominative, and where Larsson also gives the alternatives \textstyleLinguisticExample{bökjsen hansj Nikkje} and \textstyleLinguisticExample{strompen hannasj Greta}.
It would not be too surprising if the two constructions tended to be confused, especially in a situation where the vernacular in general becomes unstable. Such a confusion is arguably found in the \textit{Skelletmål} example \textstyleLinguisticExample{saitjen hansj hannlaråm }‘the shop-owner’s sack’, quoted above, which differs from the “normal” \textstyleLinguisticExample{h-}genitive in at least two ways: the possessor is not a proper name but a common noun, and in addition this noun is in the dative case. \citet[23]{Marklund1976} says that dative marking on the noun is “usually” present in this construction, other examples being
\ea%\label{}
\langinfo{\label{bkm:Ref136428236}\textit{Skelletmål} (NVb)}{}{}\\
\ea {
\gll vävsjea hännasj mo’rrmon\\
reed her granny.{\dat}\\
\glt ‘Granny’s reed’
}
\ex {
\gll bökkreN däres skolbâNåm\\
book.{\deff}.{\pl} their school-child.{\deff}.{\pl}.{\dat}\\
\glt ‘the books of the school-children’
}
\z
\z
Similar examples are \textstyleLinguisticExample{i galar } ‘on Grandfather’s farm’, in a text from Burträsk quoted in \citet[104]{Wessén1966}, and \textstyleLinguisticExample{hemme hannasj mormorn} ‘Granny’s home’, quoted as Westrobothnian without specification of the location by \citet[131]{Larsson1929}. We may see the rise of the mixed construction as a special case of the more general process (hinted at in the quotation from \citet{KoptjevskajaTamm2003}) by which the pronoun becomes gradually detached from the possessor NP and is reinterpreted as a marker of the possessive construction. The arguments for treating the pronoun in the \textstyleLinguisticExample{h-}genitive as a preproprial article appear to be strongest for Icelandic, where the pronoun and the following noun both take the genitive case: \textstyleLinguisticExample{húsið hans Péturs }‘Peter’s house’, and the possessor noun phrase can also be interpreted as an associative plural, if the pronoun is in the plural: \textstyleLinguisticExample{húsið} \textstyleLinguisticExample{þeirra Jóns }‘Jon and his family’s house’ (\citealt[69]{Delsing2003b}). This (as \citealt[632]{KoptjevskajaTamm2003} suggests) can be seen as indicating that Icelandic represents an early stage in the development of the construction, and that the first step towards the dissociation of the pronoun from the possessor NP comes when the genitive marking is lost, as has happened in all mainland Scandinavian dialects. The coastal zone vernaculars would then represent a further developmental stage, which, however, seems rather unstable. Thus, the dative marking is disappearing with the general deterioration of that case. The following example from the Cat Corpus is from the same vernacular as \REF{301}(a), and the grammatical construction is identical, except for the form of the possessor noun (here a definite form unmarked for case):
\ea%\label{}
\langinfo{\textit{Skelletmål } (NVb)}{}{}\\
\gll leill-vegän hännärs Mormora\\
little\_road.{\deff} her Granny.{\deff}\\
\glt ‘Granny’s little road’
\z
The final stage in the transition from preproprial article to possessive construction marker is possibly seen in the following Cat Corpus example from the South Westrobothnian Sävar vernacular: \textstyleLinguisticExample{lill-vegen hansch Mormora }‘Granny’s little road’, where a masculine pronoun is combined with a female kin term. A parallel to this is found in Romanian \citep[632]{KoptjevskajaTamm2003}, where the masculine pronoun \textstyleLinguisticExample{lui} is also used with feminine nouns, as in the example \textstyleLinguisticExample{casa lui Mary} ‘Mary’s house’.
What has been said so far applies to the coastal \textstyleLinguisticExample{h-}genitive zone (Westrobothnian and Norrbothnian). The inland vernaculars where the \textstyleLinguisticExample{h-}genitive is found, on the other hand, have chosen a rather different route. Here, we do not find double pronouns or an extension to common nouns. Instead, there has been a differentiation between the pronoun used in the \textstyleLinguisticExample{h-}genitive and 3\textsuperscript{rd} person genitive pronouns used independently. In most Scandinavian vernaculars, the feminine possessive pronoun has taken on the \textit{{}-s} ending originally characteristic only of the masculine \textstyleLinguisticExample{hans}. We thus find forms such as \textstyleLinguisticExample{hännärs}, which was quoted above from \textit{Skelletmål}. This has also happened in the inland vernaculars, but only when the pronoun is used by itself, not in the \textstyleLinguisticExample{h-}genitive construction. We thus get different forms in pairs of sentences such as the following from the Cat Corpus (Västhärjedalen):\footnote{ The same holds for Tännäs (Hd) (\citealt[22]{Olofsson1999}).}
\ea%\label{}
\langinfo{Ljusnedal (Hd)}{}{}\\
\ea {
\gll … ô kahtta hadde håhppâ ohppi \textbf{knea} \textbf{hinnjis.} \\
and cat.{\deff} have.{\pst} jump.{\supp} up in \textbf{knee.{\deff}} \textbf{her} \\
\glt ‘…and the cat had jumped up on her lap.’ (Cat Corpus)
}
\ex {
\gll Ho håhppâ ohpp i \textbf{knea} \textbf{hinnji} \textbf{mor}.\\
she jump.{\pst} up in \textbf{knee.{\deff}} \textbf{{\pda}.{\f}.{\gen}} \textbf{mother}\\
\glt ‘She [the cat] jumped up on Granny’s lap.’ (Cat Corpus)
}
\z
\z
%\begin{styleTextkrper}
For Malung (Vd), \citet[2:211]{Levander1925} gives the form \textit{hännäsäs} for ‘her’ – in the \textit{h-}genitive construction, however, the form is \textit{in}:
%\end{styleTextkrper}
\ea%\label{}
\langinfo{Malung (Vd)}{}{}\\
\gll O hôpp ôpp ô sätt sä’ \\
she jump.{\pst} up and set.{\pst} {\refl} \\
\gll ti \textbf{knenon} \textbf{in} \textbf{Mormor}.\\
in \textbf{knee.{\deff}.{\pl}} \textbf{{\pda}.{\f}.{\gen}} \textbf{Granny}\\
\glt ‘She jumped up and sat on Granny’s lap.’ (Cat Corpus)
\z
For Hammerdal (Jm), \citet{Reinhammar2005} gives \textstyleLinguisticExample{en} as the form used in the \textit{h}{}-genitive construction, and in Lit (Jm), we get \textstyleLinguisticExample{pyne sängâ n Momma} ‘under Granny’s bed’ (Cat Corpus). This means that in the inland area, the pronoun used with feminine names in the \textstyleLinguisticExample{h-}genitive is identical to the preproprial dative pronoun, rather than to the independent genitive pronoun. (However, in the older text [S11] from Kall (Jm), we find the form \textstyleLinguisticExample{henn} in \textstyleLinguisticExample{rättuheita henn mor} ‘Mother’s rights’ as opposed to both the independent possessive pronoun \textstyleLinguisticExample{hennes}, as in \textstyleLinguisticExample{bröran hennes} ‘her brothers’, and the preproprial dative ‘\textstyleLinguisticExample{n}, as in \textstyleLinguisticExample{i la ma ‘n mor} ‘together with mother’.) The masculine pronoun in the \textstyleLinguisticExample{h-}genitive construction, on the other hand, is unmistakably genitive, although it may also differ from the independent genitive. Thus, in Malung (Vd), we get \textstyleLinguisticExample{as} in the \textstyleLinguisticExample{h-}genitive – a straightforward development of the original \textstyleLinguisticExample{hans} – whereas the independent pronoun is \textstyleLinguisticExample{honômäs} – an expansion of the original dative form. In other places, the forms are identical (e.g. \textstyleLinguisticExample{hans} in Lit (Jm), \textstyleLinguisticExample{hâns} in Hammerdal (Jm)).
We thus find that the arguments for rejecting the preproprial article analysis of the \textstyleLinguisticExample{h-}genitive do not work very well for the inland zone. It may still be the case that a unified analysis of the \textstyleLinguisticExample{h-}genitive is possible, as Delsing proposes. On the other hand, there is much to suggest that preproprial articles are the diachronic source of the \textstyleLinguisticExample{h-}genitive, and it is not clear if the idea of a gradual movement away from that source is compatible with a unified synchronic analysis.
\section{Prepositional constructions}
\label{sec:5.6}
Adnominal possession is frequently expressed by adpositional constructions – English \textstyleLinguisticExample{of }is a well-known example. Our interest here will be focused on those constructions which have grammaticalized far enough to be able to function more generally as possessives rather than being restricted to a certain class of head nouns. As noted by \citet[43]{Delsing2003a}, Standard Danish and Swedish lack prepositional constructions that can be used with non-relational nouns (“alienable possession”) to say things like ‘John’s car’ – here, the \textstyleLinguisticExample{s}{}-genitive is the only option. In many other Scandinavian varieties such prepositional constructions exist. In Standard Bokmål Norwegian, \textstyleLinguisticExample{til} is the most common preposition used: \textstyleLinguisticExample{boka til Per} ‘Per’s book’. In Nynorsk Norwegian and various Norwegian dialects, an alternative is \textstyleLinguisticExample{åt }(\citealtt[263]{FaarlundEtAl1997}, \citealt[43]{Delsing2003a}), which is a cognate of the English \textstyleLinguisticExample{at} – this preposition is also used in parts of the Peripheral Swedish area to form a periphrastic adnominal possessive construction, as in the title of the Cat story in the Lit (Jm) vernacular: \textstyleLinguisticExample{Fresn at a Momma }‘Granny’s cat’. More generally in the Peripheral Swedish area, however, the same preposition is found in what is arguably an external possessor construction, plausibly representing an earlier stage in the evolution of the construction. I shall therefore discuss the external possessor construction first, but before doing so, I shall say a few words about the preposition \textstyleLinguisticExample{at }as such, since it has a rather interesting history of its own.
In Written Medieval Swedish, as well as in other earlier forms of Scandinavian, the preposition \textstyleLinguisticExample{at} could be used similarly to its English cognate, e.g. \textstyleLinguisticExample{aat kirkio} ‘at church’, but it also had several other uses (\citealt{Söderwall1884}). Frequently, it indicated ‘direction’, as in
\ea%\label{}
\langinfo{Written Medieval Swedish}{}{}\\
\gll …for han stragx \textbf{ath} \textbf{danmark} {j gen}\\
…go.{\pst} he at\_once \textbf{to} \textbf{Denmark} again\\
\glt ‘…he went at once to Denmark again.’ [S13]
\z
%\begin{styleTextkrper}
It could also signal ‘beneficiary’ or ‘path’:
%\end{styleTextkrper}
\ea%\label{}
\langinfo{Written Medieval Swedish}{}{}\\
\ea {
\gll …göra brullöp \textbf{aat} \textbf{sinom} \textbf{son} \textbf{iohanni…}\\
make.{\inf} wedding \textbf{for} \textbf{{\poss}.3{\sg}.{\refl}.{\dat}.{\m}.{\sg}} \textbf{son} \textbf{Johan.{\dat}}\\
\glt ‘…arrange a wedding for his son Johan.’ [S13]
}
\ex {
\gll Þe þär fram foro \textbf{at} \textbf{väghenom}.\\
they there forth go.{\pst}.{\pl} \textbf{along} \textbf{road.{\deff}.{\dat}}\\
\glt ‘They went along the road.’ [S8]
}
\z
\z
In the modern Central Scandinavian languages, the prepositions descending from \textstyleLinguisticExample{at} in general have much narrower ranges of meaning. In Danish, \textstyleLinguisticExample{ad }mainly seems to be used in the ‘path’ meaning and as part of verb collocations such as \textstyleLinguisticExample{le ad} ‘laugh at’. In Norwegian, \textstyleLinguisticExample{åt} is fairly marginal – some Bokmål dictionaries do not even list it. In Nynorsk, it appears to have more or less the same range as in Swedish, although it is rather infrequent. In Swedish, both the locational and the directional uses have more or less disappeared; instead the beneficiary use has expanded and \textstyleLinguisticExample{åt }is now commonly used as the head of an analytic counterpart to indirect objects with verbs of giving. This goes also for most vernaculars, although the directional use is preserved in at least parts of Ovansiljan and in Nyland and Åboland.
The form of the descendants of Old Nordic \textstyleLinguisticExample{at} also shows variation, with a somewhat unexpected geographical pattern. The vowel was originally a short \textstyleLinguisticExample{a}, which should not have changed in the standard languages, under normal circumstances. However, already in the medieval period, a “secondary prolongation” (\citealt[1204]{Hellquist1922}) took place in Swedish and at least some forms of Norwegian. The long \textstyleLinguisticExample{a} then developed into \textstyleLinguisticExample{å}, in the Scandinavian Vowel Shift. What is peculiar here is that some Swedish varieties which otherwise took part in the \textstyleLinguisticExample{\=a }{\textgreater} \textstyleLinguisticExample{å} shift seem to have missed out on the prolongation, and thus preserve the original short \textstyleLinguisticExample{a }in\textstyleLinguisticExample{ at.} Such forms, to judge from the Cat Corpus, are predominant in the Dalecarlian area and in Jämtland and Hälsingland. (It may be noted that Jämtland does not follow the neighbouring Trøndelag here.) A hybrid form \textstyleLinguisticExample{ått} is found in Sävar (SVb) and Åsele (Åm).
As a regular counterpart of Swedish \textstyleLinguisticExample{s}{}-genitive, the \textstyleLinguisticExample{at} construction is\textit{ }found most systematically in three of the Cat Corpus texts, viz. Åsele (Åm), Lit (Jm), and Junsele (Åm). The Lit text is the only one where the \textstyleLinguisticExample{at} construction shows up in the title of the Cat story, although \textstyleLinguisticExample{Fresn at a Momma }‘Granny’s cat’, quoted above, does not display the traditional dative form \textstyleLinguisticExample{n} of the preproprial article exemplified in the following example from the same text:
\ea%\label{}
\langinfo{Lit (Jm)}{}{}\\
\gll …han skull sväng ta på lillvein \textbf{at} \textbf{n} \textbf{Momma}.\\
…he shall.{\pst} turn.{\inf} off on little\_road.{\deff} \textbf{{\poss}} \textbf{{\pda}.{\f}.{\dat}} \textbf{Granny}\\
\glt ‘…he was going to turn into Granny’s little road.’
\z
In this corpus sentence, all three vernaculars mentioned use the \textstyleLinguisticExample{at} construction, as they also do in the following sentence:
\ea%\label{}
\langinfo{\label{bkm:Ref137369876}Junsele (Åm)}{}{}\\
\gll Katta begrep att ä dänne va käringa \textbf{åt’n} \textbf{Alfred}.\\
Cat.{\deff} understand.{\pst} that it there be.{\pst} wife.{\deff} \textbf{{\poss}-{\pda}.{\m}} \textbf{Alfred}\\
\glt ‘Cat understood that this was Alfred’s wife.’
\z
%\begin{styleTextkrper}
Here, the construction is also found in the text from Luleå:
%\end{styleTextkrper}
\ea%\label{}
\langinfo{\textit{Lulemål} (Ll)}{}{}\\
\gll Kätta förstöo att hein vär freo \textbf{att} \textbf{n’} \textbf{Alfri}.\\
Cat.{\deff} understand.{\pst} that this be.{\pst} wife.{\deff} \textbf{{\poss}} \textbf{{\pda}.{\m}} \textbf{Alfred}\\
\glt ‘Cat understood that this was Alfred’s wife.’
\z
In transcribed texts from Hössjö village in Umeå parish, we find several examples, thus:
\ea%\label{}
\langinfo{Hössjö (SVb)}{}{}\\
\ea {
\gll Hä va n’ syster \textbf{åt} \textbf{mamma} \textbf{min}\\
it be.{\pst} {\pda}.{\f} sister \textbf{{\poss}} \textbf{mother} \textbf{my}\\
\gll som var här i Hössjö å\\
who be.{\pst} here in Hössjö and\\
\gll\bfseries
en syster åt n’Ol Orssa å n’Anners Orssa.\\
\bfseries {\indf} sister {\poss} {\pda}.Ol Orssa and {\pda}.Anners Orssa\\
\glt ‘It was the sister of my mother who was here in Hössjö and a sister of Ol Orssa and Anners Orssa.’ [S44]
}
\ex {
\gll Hä va ju \textbf{n’} \textbf{doter} \textbf{åt} \textbf{mormora}.\\
it be.{\pst} {\prag} \textbf{{\pda}.{\f}} \textbf{daughter} \textbf{{\poss}} \textbf{Granny.{\deff}}\\
\glt ‘It was Granny’s daughter.’ [S44]
}
\z
\z
We thus have examples from Jämtland, Norrbothnian, the Angermannian dialect area, and Southern Westrobothnian. \citet[44]{Delsing2003a} quotes examples from earlier descriptions of vernaculars from Västerbotten, Jämtland, Medelpad, and Värmland and text examples from Västerbotten, Medelpad, Jämtland, Hälsingland, and Värmland, but refers to the text examples as “sporadic”.\footnote{ “I dialekttexterna har jag funnit enstaka belägg från norra Sverige.” } This probably gives too bleak a picture of the strength of the construction. \citet[61]{Hedblom1978}\footnote{ “Genitiven uttryckes i äldre dial. ofta med preposition…” } says about Hälsingland that “the genitive is often expressed by a preposition in the older dialect”, and gives the examples \textstyleLinguisticExample{mo´r at Gus{\textasciigrave}tav} ‘Gustav’s mother’ and \textstyleLinguisticExample{bins{\textasciigrave}`lo̵no̵ at ju´r o̵no̵ }‘the fastenings of the animals’. \citet[157]{Källskog1992} says that in Överkalix \textstyleLinguisticExample{at} is common as a “paraphrase of the genitive concept, in particular with expressions denoting kinship”.
Most of the ones quoted here seem to involve kin terms as head nouns. \citet{BergholmEtAl1999} are skeptical towards the possibility of using prepositional constructions with non-relational head nouns, noting that their informants in Västerbotten reject examples such as *\textstyleLinguisticExample{hattn åt (n) Johan} ‘Johan’s hat’ and *\textstyleLinguisticExample{glassn åt (a) Lisa} ‘Lisa’s ice cream’. The examples from Lit (Jm) and Hälsingland above seem to show that this restriction is not general, and some of Delsing’s examples from the southern part of the area also seem to be quite clearly non-relational. \citet[157]{Källskog1992} quotes a number of non-relational examples from Överkalix, but they may be interpreted as meaning ‘(intended) for’ (e.g. \textstyleLinguisticExample{kräfftfåore at kollo} ‘the special fodder for the cows’), where also Swedish could have the preposition \textstyleLinguisticExample{åt} (perhaps somewhat marginally).
There are quite a few other texts in the Cat Corpus than the ones mentioned above where the \textstyleLinguisticExample{at }construction is used, but in a more restricted fashion. What I want to claim is that in those vernaculars, \textstyleLinguisticExample{at }is not a general possessive marker but rather signals an external possessor construction. This possibility has to my knowledge not attracted any serious attention – maybe because the notion of “external possession” has not been salient for most people who have worked in the area. Another reason is that the construction is rather infrequent in most texts. In the Cat Corpus, however, it happens to be very well represented, mainly thanks to the protagonist’s jumping habits. The text with the largest number of examples is from Mora, where there are eight fairly clear examples, typical ones being:
\ea%\label{}
\langinfo{\label{bkm:Ref137369958}Mora (Os)}{}{}\\
\ea {
\gll An upped upp i knim \textbf{a} \textbf{Mårmår}.\\
he jump.{\pst} up in lap.{\deff}.{\dat} \textbf{to} \textbf{Granny}\\
\glt ‘He jumped up onto Granny’s lap.’ (Cat Corpus)
}
\ex {
\gll Men då byrd ä å swäir i ogum \textbf{a} \textbf{Missan…}\\
but then begin.{\pst} it {\infm} smart in eye.{\deff}.{\pl} \textbf{to} \textbf{Cat…}\\
\glt ‘But then Cat’s eyes started smarting...’ (Cat Corpus)
}
\z
\z
In the Mora Cat text, the preposition \textstyleLinguisticExample{a }is also used in the original, directional, sense, as in \REF{312}(a), and in its modern Swedish beneficiary/recipient sense, as in \REF{312}(b):
\ea%\label{}
\langinfo{\label{bkm:Ref135470190}Mora (Os)}{}{}\\
\ea {
\gll Gamblest påjtsen add fe \textbf{a} \textbf{Merikun…}\\
old.{\superl} boy.{\deff} have.{\pst} go.{\supp} \textbf{to} \textbf{America{\deff}}.\\
\glt ‘The eldest boy had gone to America…’ (Cat Corpus)
}
\ex {
\gll A du skreva dånda lappen \textbf{a} \textbf{me?}\\
have.{\prs} you write.{\supp} that slip.{\deff} \textbf{to} \textbf{me.{\obl}}\\
\glt ‘Have you written that note to me?’ (Cat Corpus)
}
\z
\z
However, in this text, it is not used in examples of possession which cannot naturally be understood as external possession, such as \REF{308}. This might of course be an accident, but as it turns out, the same is true of more than ten other texts in which \textstyleLinguisticExample{at} is found in examples such as \REF{311}(a-b). \figref{map:20} shows the distribution of external possessor \textstyleLinguisticExample{at} in the Cat Corpus. The vernaculars where \textstyleLinguisticExample{at} is used as an adnominal possessive marker are encircled. As we can see, the external possessor construction has a much larger geographical distribution, covering large parts of the Peripheral Swedish area.
My interpretation of the situation depicted in the map is that the adnominal uses of \textstyleLinguisticExample{at }are a more recent development, and that they have originated as a reanalysis of the external possessor construction. There are indications that an adnominal use is also developing in places where it has not yet become properly established. Thus, in Elfdalian, informants tend to find adnominal uses rather questionable, but it is possible to find examples, such as the following relatively old recording, which do not quite fit the criteria for external possession:
\ea%\label{}
\langinfo{Älvdalen (Os)}{}{}\\
\gll Og ǫ add gaið fromǫ \textbf{gamman} \textbf{að} \textbf{nogum} \textbf{momstaskallum}.\\
and she have.{\pst} go.{\supp} in\_front\_of \textbf{front\_roof} \textbf{to} \textbf{some.{\deff}.{\pl}} \textbf{Månsta\_people}\\
\gll\\
\\
\glt ‘and she had passed by the front roof of some Månsta people.’ [S34]
\z
If the hypothesis that the \textstyleLinguisticExample{at} construction has developed from external possession to adnominal possession is correct, it may be the second time this has happened in the area: above, we saw that dative-marked adnominal possessors may have the same kind of origin.
\section{Possessor incorporation}
A further type of possessive construction found in some Peripheral Swedish vernaculars is possessor incorporation – alternatively described as a construction involving a compound noun whose first element is a noun referring to the possessor. Typologically, this is a relatively uncommon type which I discuss in \citet{Dahl2004}. The clearest examples outside Scandinavian are found in the Egyptian branch of the Afro-Asiatic languages. In the following two examples from Old Egyptian (\citealt{Kammerzell2000}) as spoken around 2500 \textsc{B.C.E.}, the possessor and the possessee are expressed in one word unit, and the possessee takes the special “construct state” form typical of possessive constructions in many Afro-Asiatic languages:\footnote{ I am using Kammerzell’s phonological representation rather than the traditional Egyptologist transcription that leaves out the vowels.}
\ea
\langinfo{Old Egyptian}{}{}\\
\ea
\langinfo{inalienable}{}{}\\
\gll ħal-ˈʃan\\
face.{\cs}-brother\\
\glt ‘the brother’s face’
\ex
\langinfo{alienable}{}{}\\
\gll t’apat-ˈʃan\\
boat.{\cs}-brother\\
\glt ‘the brother’s boat’
\z
\z
Possessive constructions with incorporated possessors are remarkable in involving the incorporation of highly referential noun phrases (see \citealtt{Dahl2004} for further discussion). This also holds for the Norrlandic examples. The examples in the literature tend to be of the type personal name + kin term (including “improper” kin terms in the sense of \citet{DahlEtAl2001}), e.g. \textstyleLinguisticExample{Svän-Jons-pojken} ‘Svän-Jon’s boy’ (quoted by \citet[38]{Delsing2003a} from Delsbo). The last element can also be a noun denoting an animal:
\ea%\label{}
\langinfo{Överkalix (Kx)}{}{}\\
\gll \textbf{Per-Ajsja-mä:ra} å \textbf{Läs-Ändersa-hesstn}\\
{\textless}firstname{\textgreater}-{\textless}patronymic{\textgreater}-mare.{\deff} and {\textless}firstname{\textgreater}-{\textless}patronymic{\textgreater}-horse.{\deff}\\
\gll gär din opa aindjen.\\
walk.{\prs} there on meadow.{\deff}.{\dat}\\
\glt ‘Per Eriksson’s mare and Lars Andersson’s horse are in the meadow.’ (\citealt[164]{Källskog1992})
\z
Inanimate possessees also occur, although they are mentioned less frequently: \textstyleLinguisticExample{pappaskjorta} ‘father’s shirt’ (Lövånger (SVb), \citealt{Holm1942}), \textstyleLinguisticExample{Ilmesnäsduken} ‘Hilma’s scarf’ (Fasterna (Up), \citealt[134]{Tiselius1902}), \textstyleLinguisticExample{Halvarluva} ‘Halvar’s cap’ (\citealt{Oscarsson2007}). (For some reason, all these examples involve items of clothing.)
As for the distribution within the Peripheral Swedish area, Delsing gives attestations from Västerbotten (more specifically, Northern Westrobothnian) and Hälsingland; as the examples above reveal, the phenomenon is also found in Norrbotten and Jämtland. In addition, it is attested as far south as Värmland and Uppland.\footnote{ Possessor incorporation may well turn out to be more common typologically than I have suggested here; it may just be something that has not been paid attention to. From his children’s colloquial German, Wolfgang Schulze (pers. comm.) mentions examples such as \textit{das ist der Lenny-Platz} ‘that is Lenny’s place [at the table]’.\par }
\section{Pronominal possession}
In the realm of possessive constructions with pronominal possessors, including both 1\textsuperscript{st} and 2\textsuperscript{nd} person possessive pronouns and what is traditionally called genitive forms of 3\textsuperscript{rd} person pronouns, there has been less turbulence in the Peripheral Swedish area than is the case for nominal possessors. In fact, the Peripheral Swedish vernaculars are on the whole rather conservative here, in that they have largely not followed the general trend towards preposed rather than postposed pronominal possessors.
In Runic Swedish, possessive pronouns were generally postposed, except when strongly stressed, and this is consistent with the oldest attested stages of Germanic varieties (\citealt[107ff.]{Wessén1956}). The same holds for the Swedish provincial laws. However, the situation seems to have changed quickly and drastically: in the rest of Written Medieval Swedish post-position is a “rare exception” (\citealt[110ff.]{Wessén1956})\todo{Please give full number range}. Wessén comments that this change can hardly have taken place without external influence – he assumes that it spread from the West Germanic languages via Germany to Denmark and Sweden. In Central Scandinavian, preposed pronominal possessors are now the normal case, except for Norwegian where both orders are possible, although post-position seems to be preferred in spoken language and in Nynorsk. In written Standard Swedish, postposed possessors live on as a not too frequent alternative for kin terms in expressions such as \textstyleLinguisticExample{far min} ‘my father’. In corpora of belletristic prose, such expressions make up 1--2 per cent of the combinations that contain the nouns in question. This situation appears to be relatively stable. The postposed variants have a clear colloquial or even “rustic” character.
\citet[32]{Delsing2003a} has mapped the distribution of pronominal possession constructions in Swedish written dialect materials in detail. (Regrettably, for some areas, the number of attested examples in his statistics is really too low to allow for any reliable judgments.) In Delsing’s material, the Swedish dialect area divides fairly nicely into three zones (see \figref{map:21}): a southern one, coinciding with the Southern Swedish area of traditional dialectology, with exclusively preposed possessive pronouns, a north-eastern one, roughly coinciding with what I call the Peripheral Swedish area (but excluding Gotland and Estonia), where postposed possessive pronouns predominate, and one intermediate area – the rest, where preposed possessives are the norm but post-position is possible with kin terms.
It appears that the postposed alternative is losing ground in present-day vernaculars. In the Cat Corpus, there are relatively few examples of possessive pronouns, and some of them are in focused position where the preposed alternative is fairly general, but even in the others it can be seen that pre-position is used in most of Dalarna, including the usually conservative Ovansiljan area. \citet[111]{Levander1909} states that pre-position is possible only when the pronoun bears strong stress (in the third person singular masculine, the preposed form is apparently a “reinforced” one, formed on the pattern of the complex dative possessive discussed in \sectref{sec:5.4.2}):
\ea
\langinfo {Älvdalen (Os)}{}{}\\
\ea {
\langinfo{preposed (strong stress)}{}{}\\
\gll Eð ir \textbf{onumes} \textbf{gard}.\\
it be.{\prs} \textbf{his} \textbf{farm}\\
\glt ‘It is his farm.’
}
\ex {
\langinfo{postposed (weak stress):}{}{}\\
\gll \textbf{Gardn} \textbf{{}-os} ar buolageð tjyöpt.\\
\textbf{farm} \textbf{his} have.{\prs} company.{\deff} buy.{\supp}\\
\glt ‘His farm has been bought by the company.’
}
\z
\z
In the intermediate area, pre- and post-position are about equally common with kin terms in Delsing’s material – 45 per cent of the occurrences are postposed. There is considerable variation within the area, though. The following provinces have a clear majority for the preposed alternative: Östergötland, north Småland, Bohuslän, (Halland), Närke, Dalsland. The following prefer the postposed construction: Södermanland, (Västmanland), south Värmland, Västergötland. (Provinces with total numbers that are too low are in parentheses.)
It thus appears that much of Sweden – not only the Peripheral Swedish area – has for a long time withstood wholly or partly the trend towards preposed pronominal possessors. What is somewhat remarkable in this context is that Written Medieval Swedish, except for the provincial laws, went further in this trend than virtually any of the vernaculars spoken within the borders of medieval Sweden, in that preposed possessive pronouns are the norm even with kin terms. Thus, in the Källtext corpus, I found only one instance of the phrase \textstyleLinguisticExample{fadher min }‘my father’\textstyleLinguisticExample{ }as compared to about 30 instances with the preposed pronoun. Among the vernaculars, it is only the old Danish provinces and the adjacent southern Småland where the frequency of postposed pronouns in Delsing’s material is as low or lower than in Källtext. The contrast with the Peripheral Swedish area is of course even more striking. It seems fairly clear that with regard to the placement of possessive pronouns, the usage in Written Medieval Swedish has little support in the surviving vernaculars. We may speculate that it was based on a prestige dialect heavily influenced by foreign models, probably primarily Danish ones.
%\begin{styleBeschriftung}
\begin{figure}[h]
%\begin{styleBeschriftung}
\includegraphics[height=.5\textheight]{figures/24_ExternalPosessionAt}
\caption{External possession in the Cat Corpus}
\label{map:20}
\end{figure}
\begin{figure}[h]
%\begin{styleBeschriftung}
\includegraphics[height=.5\textheight]{figures/25_PlacementofPosPron}
\caption{Placement of possessive pronouns (adapted from \citealt{Delsing2003a})}
\label{map:21}
\end{figure}
\section{Concluding discussion: The evolution of possessive constructions in the Peripheral Swedish area}
It is not so easy to sort out the geographical patterns in the diversity of possessive constructions in the Peripheral Swedish area, especially in view of their frequent overlapping. Still, a possible scenario can be sketched.
Two constructions that do not overlap to any great extent but rather are in complementary distribution are the plain dative construction and the prepositional construction with \textstyleLinguisticExample{at}. As we can see from \figref{map:18}, the plain dative construction has a discontinuous distribution, the two parts of which are on opposite sides of the distribution of the \textit{at} construction. Furthermore, the two constructions appear to have similar origins – from external possessor constructions. A dative external possessor construction is attested from Written Medieval Swedish, whereas an external possessor construction with \textstyleLinguisticExample{at} is found in a large part of the Peripheral Swedish area, notably in the areas where the plain dative construction is still alive. It is thus highly probable that the plain dative construction is the older one and that the \textstyleLinguisticExample{at} construction may have replaced it in Middle Norrland.
Even if there are some discrepancies (see \sectref{sec:5.5}), the distributions of the \textstyleLinguisticExample{h-}genitive and preproprial articles are similar enough for it to be likely that the former originates in the latter, and Norway is a likely candidate as the origin. Like the plain dative construction, the \textstyleLinguisticExample{h-}genitive has a discontinuous distribution; in fact, the “hole” in the middle is partly the same for the two constructions, and in both cases largely overlaps with the distribution of the \textstyleLinguisticExample{at} construction. Using the same logic as before, we may assume as a possibility that the \textstyleLinguisticExample{at} construction has pushed out not only the dative construction but also the \textstyleLinguisticExample{h-}genitive in parts of Middle Norrland. (Alternatively, the dative was first pushed out by the \textstyleLinguisticExample{h-}genitive, then the \textstyleLinguisticExample{at }construction took over.) Admittedly, we cannot exclude that the coastal \textstyleLinguisticExample{h-}genitive is an independent development. However, one may wonder whether, given all the possessive constructions they already had, these vernaculars would have developed yet another possessive construction if there were no pressure from the outside.
The geographical distribution of the \textstyleLinguisticExample{s}{}-genitive with a definite head suggests that it has expanded from the south along both sides of the Baltic.
| {
"alphanum_fraction": 0.7682651596,
"avg_line_length": 93.4838709677,
"ext": "tex",
"hexsha": "fcd464032e58b6ac9d9dfee9a34cbff216c093c2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "langsci/sidl",
"max_forks_repo_path": "finished/Dahl/chapters/05.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "langsci/sidl",
"max_issues_repo_path": "finished/Dahl/chapters/05.tex",
"max_line_length": 2736,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "langsci/sidl",
"max_stars_repo_path": "finished/Dahl/chapters/05.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 31470,
"size": 107226
} |
\begin{homeworkProblem}
\section{Introduction}
Dice is a general-purpose, object-oriented programming language. Its guiding principle is simplicity, and many of its themes are drawn from Java. Dice is a high-level language that uses LLVM IR to abstract away the hardware-level implementation of code. Using LLVM as a backend also allows for automatic garbage collection of variables. \\
Dice is a strongly typed programming language, meaning that programs are type-checked at compile time, thus preventing runtime type errors. \\
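For illustration, the following sketch shows the kind of mistake that compile-time type checking is intended to reject. It is written in C purely as a stand-in; Dice's own syntax is modeled on Java and will differ in detail.
\begin{verbatim}
/* C stand-in illustration only; Dice's concrete syntax is Java-like and differs. */
#include <stdio.h>

int main(void) {
    int count = 5;
    const char *label = "dice";
    /* count = label;  <-- rejected at compile time: incompatible types */
    int total = count + 1;   /* well-typed, so no type error can occur at run time */
    printf("%s: %d\n", label, total);
    return 0;
}
\end{verbatim}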
This language reference manual is organized as follows:\\
\begin{itemize}
\item Section 2 describes types, values, and variables, subdivided into primitive types and reference types
\item Section 3 describes the lexical structure of Dice, which is based on Java; the language is written in the ASCII character set
\item Section 4 describes the expressions and operators that are available in the language
\item Section 5 describes the different statements and how to invoke them
\item Section 6 describes the structure of a program and how to determine scope
\item Section 7 describes classes, how they are defined, the fields (variables) of classes, and their methods
\end{itemize}
The syntax of the language is meant to be reminiscent of Java, thereby allowing ease of use for the programmer.
\end{homeworkProblem}
| {
"alphanum_fraction": 0.8005801305,
"avg_line_length": 86.1875,
"ext": "tex",
"hexsha": "ec9cf75b8d901ee4c76b79f2a984931c2f87a536",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "627ad07aeb16cd9ceb7bb283f9861013877d1635",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DavidWatkins/JFlat",
"max_forks_repo_path": "report/Includes/Introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "627ad07aeb16cd9ceb7bb283f9861013877d1635",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DavidWatkins/JFlat",
"max_issues_repo_path": "report/Includes/Introduction.tex",
"max_line_length": 340,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "627ad07aeb16cd9ceb7bb283f9861013877d1635",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DavidWatkins/JFlat",
"max_stars_repo_path": "report/Includes/Introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 301,
"size": 1379
} |
\documentclass[letterpaper]{article}
% used for enumerating problem parts by letter
\usepackage{enumerate}
% used for margins
\usepackage[letterpaper,left=1.25in,right=1.25in,top=1in,bottom=1in]{geometry}
% used for header and page numbers
\usepackage{fancyhdr}
% used for preventing paragraph indentation
\usepackage[parfill]{parskip}
% used for subsection indentation
\usepackage{changepage}
% used for inserting images
\usepackage{graphicx}
% used for eps graphics
\usepackage{epstopdf}
\usepackage{amssymb}
% used for links
%\usepackage{hyperref}
\pagestyle{fancy}
\fancyfoot{}
% header/footer settings
\lhead{Kevin Nash (kjn33)}
\chead{EECS 325 -- Project 4}
\rhead{\today}
\cfoot{\thepage}
\begin{document}
\section*{CANVAS}
\subsection*{The Application Protocol}
My protocol, \textbf{C}oloring \textbf{AN}d \textbf{V}iewing \textbf{A}rt
\textbf{S}quares, provides a simple means of changing the ``pixels'' of a
single common ``image.'' In reality the image is an array of text and its
pixels are Full Block (\texttt{U+2588}) characters. Message passing is handled
by UDP over POSIX sockets. Clients send instructions to the server, and in
response the server modifies or erases a pixel in the image. Clients can
also request the current state of the image, as well as a sample image.
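As a concrete illustration, a single client request can be sketched as follows. This is only a minimal sketch and not the actual \texttt{proj4} source; the host name, port number, and buffer size are placeholder assumptions.
\begin{verbatim}
/* Minimal sketch of a CANVAS client request (not the actual proj4 code). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    const char *host = "localhost", *port = "5555"; /* placeholders */
    const char *cmd  = "MARK 3 5 RED";              /* one protocol message */

    struct addrinfo hints = {0}, *srv;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_DGRAM;                 /* UDP, as in CANVAS */
    if (getaddrinfo(host, port, &hints, &srv) != 0) return 1;

    int sock = socket(srv->ai_family, srv->ai_socktype, srv->ai_protocol);
    if (sock < 0) return 1;

    /* One datagram per command; no connection state is kept. */
    sendto(sock, cmd, strlen(cmd), 0, srv->ai_addr, srv->ai_addrlen);

    char reply[4096];
    ssize_t n = recvfrom(sock, reply, sizeof(reply) - 1, 0, NULL, NULL);
    if (n >= 0) {
        reply[n] = '\0';
        printf("%s\n", reply);                      /* e.g. the updated image */
    }

    close(sock);
    freeaddrinfo(srv);
    return 0;
}
\end{verbatim}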
The idea of having an indefinite number of clients modify an image
simultaneously was partially inspired by Reddit's April Fools' Day gag for
2017, /r/place.
(https://redditblog.com/2017/04/18/place-part-two)
\subsection*{Running Client and Server}
First ensure that the executables are ready to run by compiling/cleaning
with \texttt{make}.
To run the server, simply invoke \texttt{./proj4d port}, where \texttt{port} is
the port number that is available for you to use on your system.
To run a client, simply invoke \texttt{./proj4 host port}, where \texttt{host}
is the host name of the server you wish to reach (e.g.
\texttt{localhost}, \texttt{eecslab-5.case.edu}) and \texttt{port} is the port
number that the server is expecting connections on.
\subsection*{Protocol Commands}
\textit{Note: All commands are case-sensitive. In general, all capitals should be used.}
\textbf{MARK}\quad Marks (colors or recolors) a certain pixel in the image.
\begin{adjustwidth}{2em}{0em}
Format: MARK X Y COLOR\\
Parameters:
\begin{adjustwidth}{1em}{0em}
\begin{tabular}{ l l }
X & the x-coordinate of the pixel\\
Y & the y-coordinate of the pixel\\
COLOR & the color that replaces the pixel's current color\\
& [ RED $\vert$ GRN $\vert$ YEL $\vert$ BLU $\vert$ MAG $\vert$ CYN $\vert$ WHT ]
\end{tabular}
\end{adjustwidth}
\end{adjustwidth}
\textbf{ERAS}\quad Erases a certain pixel in the image.
\begin{adjustwidth}{2em}{0em}
Format: ERAS X Y\\
Parameters:
\begin{adjustwidth}{1em}{0em}
\begin{tabular}{ l l }
X & the x-coordinate of the pixel\\
Y & the y-coordinate of the pixel
\end{tabular}
\end{adjustwidth}
\end{adjustwidth}
\textbf{PRNT}\quad Prints the current image.
\begin{adjustwidth}{2em}{0em}
Format: PRNT\\
Parameters:
\begin{adjustwidth}{1em}{0em}
none
\end{adjustwidth}
\end{adjustwidth}
\textbf{TEST}\quad Prints a sample image for testing display compatibility.
\begin{adjustwidth}{2em}{0em}
Format: TEST\\
Parameters:
\begin{adjustwidth}{1em}{0em}
none
\end{adjustwidth}
\end{adjustwidth}
\textbf{TIME}\quad Prints the current server time.
\begin{adjustwidth}{2em}{0em}
Format: TIME\\
Parameters:
\begin{adjustwidth}{1em}{0em}
none
\end{adjustwidth}
\end{adjustwidth}
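On the receiving side, these commands lend themselves to straightforward string dispatch. The fragment below is only a sketch of that idea and not the actual \texttt{proj4d} implementation; the function name and the omitted branch bodies are placeholders.
\begin{verbatim}
/* Sketch of server-side command dispatch (not the actual proj4d code). */
#include <stdio.h>
#include <string.h>

void handle_message(const char *msg) {
    int x, y;
    char color[4]; /* three-letter color code plus terminator */

    if (sscanf(msg, "MARK %d %d %3s", &x, &y, color) == 3) {
        /* color (or recolor) the pixel at (x, y) */
    } else if (sscanf(msg, "ERAS %d %d", &x, &y) == 2) {
        /* erase the pixel at (x, y) */
    } else if (strncmp(msg, "PRNT", 4) == 0) {
        /* reply with the current image */
    } else if (strncmp(msg, "TEST", 4) == 0) {
        /* reply with the sample image */
    } else if (strncmp(msg, "TIME", 4) == 0) {
        /* reply with the current server time */
    }
}
\end{verbatim}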
\subsection*{Justifications}
UDP was selected instead of TCP because the nature of the application plays to
UDP's strengths: CANVAS does not require a persistent connection, messages are
short, and the data transfer is minimal. Additionally, the server does not need
to retain information about clients after it gives a response.
\subsection*{How Things Should Look}
The sample image, requested with TEST, as rendered on the programmer's machine and on eecslab-5.
\includegraphics[scale=0.9]{test.png}
\end{document}
| {
"alphanum_fraction": 0.7318062515,
"avg_line_length": 29.7234042553,
"ext": "tex",
"hexsha": "8067333e3f80bb5472bba9e941fc03b3ce148e7f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3c92719e67b3a00bc285ff6d84498b59bbf4dc53",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "nashkevin/EECS-325-2017",
"max_forks_repo_path": "Project 4/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3c92719e67b3a00bc285ff6d84498b59bbf4dc53",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "nashkevin/EECS-325-2017",
"max_issues_repo_path": "Project 4/report.tex",
"max_line_length": 89,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3c92719e67b3a00bc285ff6d84498b59bbf4dc53",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "nashkevin/EECS-325-2017",
"max_stars_repo_path": "Project 4/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1177,
"size": 4191
} |
\subsection{Data field values}
\label{subsec:library_of_transformations:instance_level_transformations:data_field_values}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics{images/05_library_of_transformations/03_instance_level_transformations/06_data_field_values/data_field_value.pdf}
\caption{$Im_{DataField}$ with one object and string value ``some value''}
\label{fig:library_of_transformations:instance_level_transformations:data_field_values:visualisation:ecore}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\input{images/05_library_of_transformations/03_instance_level_transformations/06_data_field_values/data_field_as_edge_type_value.tikz}
\caption{$IG_{DataField}$ with one node and string value ``some value''}
\label{fig:library_of_transformations:instance_level_transformations:data_field_values:visualisation:groove}
\end{subfigure}
\caption{Visualisation of the transformation of field values from fields typed by data types}
\label{fig:library_of_transformations:instance_level_transformations:data_field_values:visualisation}
\end{figure}
The previous sections have shown the instance level transformations corresponding to the introduction of the various kinds of types and their instances. From this section onward, these types and their instances will be enriched by introducing fields. In this section, the instance level transformation belonging to the transformation of a data field is discussed. The type level transformation for data fields can be found in \cref{subsec:library_of_transformations:type_level_transformations:data_fields}. On the instance level, values for the data fields are introduced.
\begin{defin}[Instance model $Im_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field}
Let $Im_{DataField}$ be an instance model typed by $Tm_{DataField}$ (\cref{defin:library_of_transformations:type_level_transformations:data_fields:tmod_data_field}). Define a set $objects$, which represent the objects that will get a value for the field introduced by $Tm_{DataField}$. Furthermore, define a function $obids$ which maps each of these objects to their corresponding identifier and a function $values$, which maps each of these objects to its value for the field introduced by $Tm_{DataField}$. $Im_{DataField}$ is defined as:
\begin{align*}
Object =\ &objects \\
\mathrm{ObjectClass} =\ & \begin{cases}
(ob, classtype) & \mathrm{if }\ ob \in objects
\end{cases}\\
\mathrm{ObjectId} =\ & \begin{cases}
(ob, obids(ob)) & \mathrm{if }\ ob \in objects
\end{cases}\\
\mathrm{FieldValue} =\ & \begin{cases}
((ob, (classtype, name)), values(ob)) & \mathrm{if }\ ob \in objects
\end{cases} \\
\mathrm{DefaultValue} =\ & \{\}
\end{align*}
\isabellelref{imod_data_field}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{defin}
\begin{thm}[Correctness of $Im_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_correct}
$Im_{DataField}$ (\cref{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field}) is a valid instance model in the sense of \cref{defin:formalisations:ecore_formalisation:instance_models:model_validity}.
\isabellelref{imod_data_field_correct}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
A visual representation of $Im_{DataField}$ with $objects = \{ob\}$ and $obids(ob) = someId$ can be seen in \cref{fig:library_of_transformations:instance_level_transformations:data_field_values:visualisation:ecore}. In this visualisation, the field value for $ob$ is defined as $values(ob) = \text{``some value''}$. Although this visualisation only shows one object, it is required to define a value for all objects that contain the field. Failing to do so would result in an invalid instance model after it is combined with another model, as the next theorem will show. The correctness proof of $Im_{DataField}$ alone is already quite involved and is not included here for conciseness. It can be found as part of the validated Isabelle proofs.
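Written out for this single-object example (a direct instantiation of \cref{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field}, with $classtype$ and $name$ as in $Tm_{DataField}$), the components of $Im_{DataField}$ are:
\begin{align*}
Object =\ & \{ob\} \\
\mathrm{ObjectClass}(ob) =\ & classtype \\
\mathrm{ObjectId}(ob) =\ & someId \\
\mathrm{FieldValue}((ob, (classtype, name))) =\ & \text{``some value''} \\
\mathrm{DefaultValue} =\ & \{\}
\end{align*}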
In order to make composing transformation functions possible, $Im_{DataField}$ should be compatible with the instance model it is combined with.
\begin{thm}[Correctness of $\mathrm{combine}(Im, Im_{DataField})$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_combine_correct}
Assume an instance model $Im$ that is valid in the sense of \cref{defin:formalisations:ecore_formalisation:instance_models:model_validity}. Then $Im$ is compatible with $Im_{DataField}$ (in the sense of \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_models:compatibility}) if:
\begin{itemize}
\item All requirements of \cref{defin:library_of_transformations:type_level_transformations:data_fields:tmod_data_field_combine_correct} are met, to ensure the combination of the corresponding type models is valid;
\item The class type on which the field is defined by $Tm_{DataField}$ may not be extended by another class type in the type model corresponding to $Im$;
\item All of the objects in the set $objects$ must already be objects in $Im$;
\item All objects typed by the class type on which the field is defined must occur in the set $objects$ and thus have a value in $Im_{DataField}$;
\item For all of the objects in the set $objects$, the identifier set by $obids$ must be the same identifier as set by $Im$ for that object;
\item For all objects in set $objects$, the value set by the $values$ function must be valid.
\end{itemize}
\isabellelref{imod_data_field_combine_correct}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
\begin{proof}
Use \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_models:imod_combine_merge_correct}. It is possible to show that all assumptions hold. Now we have shown that $\mathrm{combine}(Im, Im_{DataField})$ is consistent in the sense of \cref{defin:formalisations:ecore_formalisation:instance_models:model_validity}.
\end{proof}
As explained earlier, $Im_{DataField}$ needs to introduce values for all objects that are typed by the class type on which the field is defined. This is enforced by the requirements of \cref{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_combine_correct}. The proof is not included here for conciseness, but can be found as part of the validated proofs in Isabelle.
The definitions and theorems for introducing values for fields of data types within Ecore are now complete.
\subsubsection{Encoding as edges and nodes}
In the type level transformation of data fields, fields were encoded in GROOVE as edge types targeting a primitive type. On the instance level, this edge type is used: edges are created to give a value to each node whose type has the field defined. The encoding corresponding to $Im_{DataField}$ can then be represented as $IG_{DataField}$, defined in the following definition:
\begin{defin}[Instance graph $IG_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type}
Let $IG_{DataField}$ be the instance graph typed by type graph $TG_{DataField}$ (\cref{defin:library_of_transformations:type_level_transformations:data_fields:tg_data_field_as_edge_type}). Reuse the set $objects$ from $Im_{DataField}$. Moreover, reuse the functions $obids$ and $values$ from $Im_{DataField}$.
The objects in the set $objects$ are converted to nodes in $IG_{DataField}$. For each of these objects, an edge of the encoded field is created. This edge targets a node that corresponds to the value set by $values$ for the corresponding object. Finally, the identity of the objects is defined using $obids$. $IG_{DataField}$ is defined as:
\begin{align*}
N =\ & objects \cup \{values(ob) \mid ob \in objects\} \\
E =\ & \big\{\big(ob, (\mathrm{ns\_\!to\_\!list}(classtype), \langle name \rangle, fieldtype), values(ob)\big) \mid ob \in objects \big\} \\
\mathrm{ident} =\ & \begin{cases}
(obids(ob), ob) & \mathrm{if }\ ob \in objects
\end{cases}
\end{align*}
with
\begin{align*}
\mathrm{type}_n =\ & \begin{cases}
(ob, \mathrm{ns\_\!to\_\!list}(classtype)) & \mathrm{if }\ ob \in objects
\end{cases}
\end{align*}
\isabellelref{ig_data_field_as_edge_type}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{defin}
\begin{thm}[Correctness of $IG_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type_correct}
$IG_{DataField}$ (\cref{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type}) is a valid instance graph in the sense of \cref{defin:formalisations:groove_formalisation:instance_graphs:instance_graph_validity}.
\isabellelref{ig_data_field_as_edge_type_correct}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
A visual representation of $IG_{DataField}$ with $objects = \{ob\}$ and $obids(ob) = someId$ can be seen in \cref{fig:library_of_transformations:instance_level_transformations:data_field_values:visualisation:groove}. As in the previous visualisation, the field value for $ob$ is defined as $values(ob) = \text{``some value''}$. Although this visualisation only shows one node, it is required to define a value for all nodes typed by the node type corresponding to the field. Failing to do so would result in an invalid instance graph after it is combined with another graph, as the next theorem will show. The correctness proof of $IG_{DataField}$ alone is already quite involved and is not included here for conciseness. It can be found as part of the validated Isabelle proofs.
In order to make composing transformation functions possible, $IG_{DataField}$ should be compatible with the instance graph it is combined with.
\begin{thm}[Correctness of $\mathrm{combine}(IG, IG_{DataField})$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type_combine_correct}
Assume an instance graph $IG$ that is valid in the sense of \cref{defin:formalisations:groove_formalisation:instance_graphs:instance_graph_validity}. Then $IG$ is compatible with $IG_{DataField}$ (in the sense of \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_graphs:compatibility}) if:
\begin{itemize}
\item All requirements of \cref{defin:library_of_transformations:type_level_transformations:data_fields:tg_data_field_as_edge_type_combine_correct} are met, to ensure the combination of the corresponding type graphs is valid;
\item The node type on which the corresponding field is defined is not extended by other node types within the type graph corresponding to $IG$;
\item All nodes in $IG$ that are typed by the node type on which the field is defined are also nodes in $IG_{DataField}$;
\item For all nodes shared between $IG$ and $IG_{DataField}$, each node must have the same identifier in both $IG$ and $IG_{DataField}$;
\item For all nodes for which the field is set, the $values$ function must define a valid value;
\item If a primitive type has incoming or outgoing edge types in the type graph corresponding to $IG$, then the lower multiplicity of these edge types must be 0.
\end{itemize}
\isabellelref{ig_data_field_as_edge_type_combine_correct}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
\begin{proof}
Apply \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_graphs:ig_combine_merge_correct}. It is possible to show that all of its assumptions hold, from which it follows that $\mathrm{combine}(IG, IG_{DataField})$ is valid in the sense of \cref{defin:formalisations:groove_formalisation:instance_graphs:instance_graph_validity}.
\end{proof}
Like the combination of instance models, the combination of instance graphs requires the user to set a value for all nodes that are typed by the node type on which the field is defined. This is needed to keep the combined graph valid.
The next definitions define the transformation function from $Im_{DataField}$ to $IG_{DataField}$:
\begin{defin}[Transformation function $f_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_to_ig_data_field_as_edge_type}
The transformation function $f_{DataField}(Im)$ is defined as:
\begin{align*}
N =\ & Object_{Im} \cup \{values(ob) \mid ob \in Object_{Im}\} \\
E =\ & \big\{\big(ob, (\mathrm{ns\_\!to\_\!list}(classtype), \langle name \rangle, fieldtype), values(ob)\big) \mid ob \in Object_{Im} \big\} \\
\mathrm{ident} =\ & \begin{cases}
(obids(ob), ob) & \mathrm{if }\ ob \in Object_{Im}
\end{cases}
\end{align*}
with
\begin{align*}
\mathrm{type}_n =\ & \begin{cases}
(ob, \mathrm{ns\_\!to\_\!list}(classtype)) & \mathrm{if }\ ob \in Object_{Im}
\end{cases}
\end{align*}
\isabellelref{imod_data_field_to_ig_data_field_as_edge_type}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{defin}
\begin{thm}[Correctness of $f_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_to_ig_data_field_as_edge_type_func}
$f_{DataField}(Im)$ (\cref{defin:library_of_transformations:instance_level_transformations:data_field_values:imod_data_field_to_ig_data_field_as_edge_type}) is a valid transformation function in the sense of \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:transformation_function_instance_model_instance_graph} transforming $Im_{DataField}$ into $IG_{DataField}$.
\isabellelref{imod_data_field_to_ig_data_field_as_edge_type_func}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
The proof of the correctness of $f_{DataField}$ will not be included here. Instead, it can be found in the validated Isabelle theories.
Finally, to complete the transformation, the transformation function that transforms $IG_{DataField}$ into $Im_{DataField}$ is defined:
\begin{defin}[Transformation function $f'_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type_to_imod_data_field}
The transformation function $f'_{DataField}(IG)$ is defined as:
\begin{align*}
Object =\ &\{\mathrm{src}(e) \mid e \in E_{IG}\} \\
\mathrm{ObjectClass} =\ & \begin{cases}
(ob, classtype) & \mathrm{if }\ ob \in \{\mathrm{src}(e) \mid e \in E_{IG}\}
\end{cases}\\
\mathrm{ObjectId} =\ & \begin{cases}
(ob, obids(ob)) & \mathrm{if }\ ob \in \{\mathrm{src}(e) \mid e \in E_{IG}\}
\end{cases}\\
\mathrm{FieldValue} =\ & \begin{cases}
((ob, (classtype, name)), values(ob)) & \mathrm{if }\ ob \in \{\mathrm{src}(e) \mid e \in E_{IG}\}
\end{cases} \\
\mathrm{DefaultValue} =\ & \{\}
\end{align*}
\isabellelref{ig_data_field_as_edge_type_to_imod_data_field}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{defin}
\begin{thm}[Correctness of $f'_{DataField}$]
\label{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type_to_tmod_class_func}
$f'_{DataField}(IG)$ (\cref{defin:library_of_transformations:instance_level_transformations:data_field_values:ig_data_field_as_edge_type_to_imod_data_field}) is a valid transformation function in the sense of \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:transformation_function_instance_graph_instance_model} transforming $IG_{DataField}$ into $Im_{DataField}$.
\isabellelref{ig_data_field_as_edge_type_to_imod_data_field_func}{Ecore-GROOVE-Mapping-Library.DataFieldValue}
\end{thm}
Once more, the correctness proof is not included here but can be found in the validated Isabelle proofs of this thesis.
"alphanum_fraction": 0.7951551477,
"avg_line_length": 86.5783783784,
"ext": "tex",
"hexsha": "f78aca3f855d6798d2110a1a236c74ec93017aab",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca",
"max_forks_repo_licenses": [
"AFL-3.0"
],
"max_forks_repo_name": "RemcodM/thesis-ecore-groove-formalisation",
"max_forks_repo_path": "thesis/tex/05_library_of_transformations/03_instance_level_transformations/06_data_field_values.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"AFL-3.0"
],
"max_issues_repo_name": "RemcodM/thesis-ecore-groove-formalisation",
"max_issues_repo_path": "thesis/tex/05_library_of_transformations/03_instance_level_transformations/06_data_field_values.tex",
"max_line_length": 781,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca",
"max_stars_repo_licenses": [
"AFL-3.0"
],
"max_stars_repo_name": "RemcodM/thesis-ecore-groove-formalisation",
"max_stars_repo_path": "thesis/tex/05_library_of_transformations/03_instance_level_transformations/06_data_field_values.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4137,
"size": 16017
} |
\section{Results} \label{sec:r}
\subsection{2D} \label{sec:r:2d}
\subsection{3D} \label{sec:r:3d}
"alphanum_fraction": 0.696969697,
"avg_line_length": 19.8,
"ext": "tex",
"hexsha": "80f4a0f305027f968dc51bb99eeb681da974643a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "romil797/ibex",
"max_forks_repo_path": "papers/miccai2017/sections/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "romil797/ibex",
"max_issues_repo_path": "papers/miccai2017/sections/results.tex",
"max_line_length": 32,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "romil797/ibex",
"max_stars_repo_path": "papers/miccai2017/sections/results.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 41,
"size": 99
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Paul McKee
% Rensselaer Polytechnic Institute
% 1/31/18
% Master's Thesis
% with Dr. Kurt Anderson
% LaTeX Template: Project Titlepage Modified (v 0.1) by rcx
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt]{article}
%\usepackage[demo]{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{blindtext}
\usepackage[utf8]{inputenc}
\usepackage{graphicx, wrapfig, subcaption, setspace, booktabs}
\usepackage{sectsty}
\usepackage{url, lipsum}
\usepackage{makecell}
\usepackage{amsmath}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{color} %red, green, blue, yellow, cyan, magenta, black, white
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\usepackage[table,xcdraw]{xcolor}
\usepackage[margin=.6in]{geometry}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{color}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{graphicx}
\usepackage{cite}%AddedbyChris
\usepackage{graphicx}%AddedbyChris
\usepackage{array}%Added by Chris
\usepackage{caption}
\usepackage{amsmath}%Added by Chri
\pagestyle{fancy}
\fancyhf{}
%\fancyhead[L]{Independant Study}
%\fancyhead[C]{\rightmark}
\fancyhead[C]{}
\fancyhead[L]{\nouppercase{\leftmark}}
\fancyhead[R]{Philip Hoddinott}
\rfoot{Page \thepage \hspace{1pt} of \pageref{LastPage}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newenvironment{theorem}[2][Theorem]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{lemma}[2][Lemma]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{exercise}[2][Exercise]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{problem}[2][Problem]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{question}[2][Question]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{corollary}[2][Corollary]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\newenvironment{solution}{\begin{proof}[Solution]}{\end{proof}}
\usepackage{multicol}
\newcommand{\mysize}{0.5}
\usepackage{subcaption}
\usepackage{float}
\usepackage{listings}
\usepackage{color}
\newcolumntype{L}{>{\centering\arraybackslash}m{3cm}}
\usepackage{setspace}
\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\newlength\longest
% %---------------------------------------------------------------
% % HEADER & FOOTER
% %---------------------------------------------------------------
%\fancyhf{}
%\pagestyle{fancy}
%\renewcommand{\headrulewidth}{0pt}
%\setlength\headheight{0pt}
%\fancyhead[L]{ Paul McKee }
%\fancyhead[R]{Rensselaer Polytechnic Institute}
%\cfoot{ \thepage\ }
%--------------------------------------------------------------
% TITLE PAGE
%--------------------------------------------------------------
\iffalse
\begin{titlepage}
\title{
\LARGE \textbf{\uppercase{Put Title Here}} \\
\vspace{0.25cm}
\LARGE \textbf{Philip Hoddinott}
}
\author{\small{Submitted in Partial Fulfillment of the Requirements} \\ \small{for the Degree of} \\
\uppercase{Master of Science} \\ \\
Approved by:
\\ Kurt Anderson, Chair \\ John Christian \\ Matthew Oehlschlaeger \\ \\ %% from paul's template
\includegraphics[width=2.5cm]{rensselaer_seal.png} \\
\small{\textit{Department of Mechanical, Aerospace, and Nuclear Engineering}} \\
\small{Rensselaer Polytechnic Institute} \\
\small{Troy, New York} \\
\small{November 2018}
}
\end{titlepage}
\fi
\begin{document}
%\thispagestyle{empty}
\clearpage
\title{Deep Neural Network}
\author{Philip Hoddinott}
\maketitle
%\pagenumbering{roman}
%
%
%\newpage
%\thispagestyle{fancy}
%\addcontentsline{toc}{section}{\uppercase{Table of Contents}}
%\listoftables
%\addcontentsline{toc}{section}{\uppercase{List of Tables}}
%\listoffigures
%\addcontentsline{toc}{section}{\uppercase{List of Figures}}
% -----------------------------
% ------------------------------------------------------------
% Acknowledgement
% ------------------------------------------------------------
% ------------------------------------------------------------
% Abstract
% ------------------------------------------------------------
\tableofcontents
\listoffigures
%\doublespacing
\newpage
\section*{Abstract}
The purpose of this report is to develop a neural net that can identify handwritten digits in the MNIST database at near-human levels of accuracy. The neural net is developed without the assistance of libraries such as Python's TensorFlow or MATLAB's Deep Learning Toolbox.\par
The author would like to express his gratitude to Professor Hicken for the suggestion of this project. The author would also like to thank Theodore Ross and Varun Rao for their assistance with artificial neural networks.
% \newpage
%\setcounter{page}{1}
%\textcolor{red}{ Do More}
% ------------------------------------------------------------
% Introduction
% ------------------------------------------------------------
%\newpage
% this should start the normal numberinbg
\section{Introduction}
The first computational models for neural networks were conceived in the 1940s; however, it would take 50 years for computers to achieve the processing power needed to implement them. Today neural networks are used for a wide variety of tasks, one of which is the recognition of handwritten numbers. A famous database for digit recognition is the MNIST database.
\subsection{The MNIST database}
The Modified National Institute of Standards and Technology (MNIST) database\cite{mnistDATABASE} is a collection of handwritten digits used to train image processing systems. It contains 60,000 training images and 10,000 testing images, each a grid of $28\times28$ pixels. Some of these are seen in figure \ref{fig:mathworksmnistneuralnetfinal}. \par
A number of attempts have been made to get the lowest possible error rate on this dataset. As of August 2018, the lowest error rate achieved so far is 0.21\%, or an accuracy of 99.79\%. For comparison, humans can accurately recognize digits at a rate of 98\%--98.5\%\cite{humanPerf}.
\begin{figure}[H]
\centering
\includegraphics[width=0.25\linewidth]{mathworks_mnist_neuralnetFinal_v2}
\caption{Sample numbers from MNIST \cite{mnistMATLAB2}.}
\label{fig:mathworksmnistneuralnetfinal}
\end{figure}
\subsection{Artificial neural network}
An artificial neural network (referred to as a neural network in this paper) is a computational system that mimics the biological neural networks found in animal brains. A neural network is not an algorithm, but a general framework to solve problems. Artificial neural networks are based on layers of interconnected neurons that transmit signals to each other. The layers in between the input and output layers are referred to as hidden layers. Neural networks can be trained for specific tasks, such as the digit recognition in this report.
The neural net implemented in this project had an input vector of $784\times1$ and an output vector of $10\times1$. Different configurations were tried, with one hidden layer of $250\times1$ producing the best results. A visualization of an example neural net is seen in figure \ref{fig:nndiagram}.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{nnDiagram}
\caption{Visualization of a neural network with one hidden layer\cite{nnDiagramStack}. }
\label{fig:nndiagram}
\end{figure}
\subsection{Neural Network Walkthrough}
The training of a neural network involves four main steps:
\begin{enumerate}\singlespacing
\item Initialize weights and biases (parameters).
\item Forward propagation
\item Compute the loss
\item Backward propagation
\end{enumerate}%\doublespacing
\subsubsection{Parameter Initialization}
The first step in training a neural net is to initialize the bias vectors and weight matrices. They are initialized with random numbers between 0 and 1, then multiplied by a small scalar on the order of $10^{-2}$ so that the units are not in the region where the derivative of the activation function is close to zero. The initial parameters should be different values (to keep the gradients from being the same).
There are various forms of initialization, such as Xavier initialization or He et al.\ initialization, but a discussion of initialization methods is outside the scope of this paper. Here we stick with random parameter initialization.
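As an illustration, a random initialization in this spirit for a 784--250--10 network (a sketch with sizes and variable names chosen here for exposition, not the code from the appendix) could look as follows:
\begin{lstlisting}
% Illustrative random initialization for a 784-250-10 network.
% Values in (0,1) are scaled by 1e-2 so that the units start away
% from the region where the activation derivative is close to zero.
scale = 1e-2;
W1 = scale * rand(250, 784);  % weights: input -> hidden layer
W2 = scale * rand(10, 250);   % weights: hidden -> output layer
b1 = scale * rand(250, 1);    % hidden-layer bias
b2 = scale * rand(10, 1);     % output-layer bias
\end{lstlisting}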
\subsubsection{Forward Propagation}
The next step is forward propagation. The network takes the inputs from the previous layer, computes their transformation, and applies an activation function. Mathematically, the forward propagation at level $i$ is represented by equation~\ref{eqn:forProp}.\par
\begin{equation}
\begin{aligned}
z_i = W_{i}* A_{i-1}+b\\
A_i=\phi(z_i)
\end{aligned}
\label{eqn:forProp}
\end{equation}
Where $z$ is the weighted input to the layer, $A$ is the layer's activation, $W$ is the matrix of weights going into the layer, $b$ is the bias, and $\phi$ is the activation function. This process repeats for each subsequent layer until the end of the neural net is reached.
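For concreteness, a forward pass through the same sketched 784--250--10 network, using the sigmoid activation and column vectors, could be written as follows (again an illustration, not the appendix code):
\begin{lstlisting}
% Illustrative forward pass for a single input vector.
x   = rand(784, 1);              % stand-in for one 28x28 image as a 784x1 vector
phi = @(z) 1 ./ (1 + exp(-z));   % sigmoid activation function
z1 = W1 * x + b1;                % weighted input to the hidden layer
A1 = phi(z1);                    % hidden-layer activation
z2 = W2 * A1 + b2;               % weighted input to the output layer
A2 = phi(z2);                    % 10x1 network output
\end{lstlisting}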
\subsubsection{Compute loss}
The loss is simply the difference between the output and the actual value. In this neural net it is computed by equation \ref{eqn:loss}.
\begin{equation}
\text{loss}=A_{i=\text{end}}-y
\label{eqn:loss}
\end{equation}
This loss is used to begin the next step: backward propagation.
\subsubsection{Backward propagation}
After going forward through the neural net in the forward propagation step and computing the loss, the final step is backward propagation. Backward propagation is the updating of the weight parameters via the derivative of the error function with respect to the weights of the neural net. For the output layer this is seen in equation \ref{eqn:outputBack}, and in equation \ref{eqn:allOtherBack} for all other layers.
\begin{equation}
dW_{i=\text{end}}=\phi'(z_{i=\text{end}})*\left(A_{i=\text{end}}-y\right)
\label{eqn:outputBack}
\end{equation}
\begin{equation}
dW_i=\phi'(z_i)*\left(W_{\left(i+1\right)}^T*dW_{\left(i+1\right)}\right)
\label{eqn:allOtherBack}
\end{equation}
Once these derivatives have been computed, the weights are updated by equation \ref{eqn:updateWeight}
\begin{equation}
W_i = W_i - \alpha*dW_{i}*A_{\left(i-1\right)}^T
\label{eqn:updateWeight}
\end{equation}
Where for the first layer $A_{\left(i-1\right)}^T$ will be the input vector and for all the following layers it will be the vector from the previous layer. \par
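Continuing the same illustrative sketch (not the appendix code), the backward pass and weight update for a single training sample could be written as:
\begin{lstlisting}
% Illustrative backward pass and update for one training sample.
y    = zeros(10, 1); y(3) = 1;             % stand-in one-hot label vector
dphi = @(z) exp(-z) ./ (1 + exp(-z)).^2;   % sigmoid derivative
loss = A2 - y;                             % difference between output and label
dW2  = dphi(z2) .* loss;                   % output-layer error term
dW1  = dphi(z1) .* (W2' * dW2);            % error propagated to the hidden layer
alpha = 0.1;                               % learning rate
W2 = W2 - alpha * dW2 * A1';               % weight updates; bias updates are
W1 = W1 - alpha * dW1 * x';                % omitted, as in the equations above
\end{lstlisting}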
At this point the neural net has completed a full run through. The next input vector is selected and the forward and backward propagation are run again. A visualization of forward and backward propagation is in figure \ref{fig:forwardbackwardprop}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{forwardBackwardProp}
\caption{A visualization of forward and backward propagation \cite{nnBlog}.}
\label{fig:forwardbackwardprop}
\end{figure}
\subsection{Gradient Descent}
Also known as steepest descent, gradient descent is a first-order optimization algorithm. It is used to find the minimum of a function. Equation \ref{eqn:gdes} shows gradient descent as implemented in a neural net.
\begin{equation}
\Delta W(t) = - \alpha \frac{\partial E}{\partial W(t)}
\label{eqn:gdes}
\end{equation}
Where $\alpha$ is the learning rate, and $\partial E/\partial W(t)$ is the error derivative with respect to the weight. As these derivatives must be computed for each node the more nodes there are in a neural net the longer it will take to train.
\subsection{Activation Function}
The activation function was previously mentioned as a function used to convert the input signal to the output signal. Activation functions introduce non-linear properties to the neural net's functions, allowing the neural net to represent complex functions \cite{nnBlog}. \par
The two most common activation functions used in neural nets trained by gradient descent are the sigmoid and the hyperbolic tangent (Tanh).
The formula for Tanh is seen in equation \ref{eqn:tanH}, and the formula for its derivative is seen in equation \ref{eqn:dtanh}.
\begin{equation}
\phi_{\text{Tanh}}(z)=\frac{1-e^{-2z}}{1+e^{-2z}}
\label{eqn:tanH}
\end{equation}
\begin{equation}
\phi'_{\text{Tanh}}(z)=\frac{4}{\left(e^{-z}+e^{z}\right)^2}
\label{eqn:dtanh}
\end{equation}
The formula for the sigmoid function is seen in equation \ref{eqn:sig}, and the formula for its derivative is seen in equation \ref{eqn:dsig}.
\begin{equation}
\phi_{\text{Sigmoid}}(z)=\frac{1}{1+e^{-z}}
\label{eqn:sig}
\end{equation}
\begin{equation}
\phi'_{\text{Sigmoid}}(z)=\frac{e^{-z}}{\left(e^{-z}+1\right)^2}
\label{eqn:dsig}
\end{equation}
The sigmoid and Tanh function are visualized in figure \ref{fig:sigvstanh}.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{sigVsTanh}
\caption{Visualization of sigmoid and Tanh function}
\label{fig:sigvstanh}
\end{figure}
Both functions have relatively simple mathematical formulas and are differentiable. In this paper the sigmoid function is used over the Tanh function, as it gave better results. Sigmoid and Tanh are not the only activation functions: the Rectified Linear Unit (ReLU) and the Leaky Rectified Linear Unit should also be noted. These functions have their own pros and cons, but a proper discussion of them is outside the scope of this paper.
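The formulas in equations \ref{eqn:tanH} to \ref{eqn:dsig} translate directly into MATLAB; an illustrative sketch (not the actFunSwitch implementation in philipNeuralNet.m) is:
\begin{lstlisting}
% Illustrative definitions of both activation functions and their
% derivatives, mirroring the formulas above.
sigmoid  = @(z) 1 ./ (1 + exp(-z));
dsigmoid = @(z) exp(-z) ./ (1 + exp(-z)).^2;
tanhAct  = @(z) (1 - exp(-2*z)) ./ (1 + exp(-2*z));
dtanhAct = @(z) 4 ./ (exp(-z) + exp(z)).^2;
\end{lstlisting}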
\subsection{Pitfalls}
The most important thing to steer clear of is overtraining. Overtraining occurs when the neural net trains too much on the training data. While it will have a high accuracy on the training data, its performance on the test data will decay, as it has become too well attuned to the training data. \par
The other problem is the time it takes to train. A three-layer neural net can be trained to 97\% accuracy within 10 minutes; however, it will not improve far beyond that. Larger nets may improve on this, but will take far longer to train.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{nnRunTime}
\caption{Run times for various neural network architectures\cite{deepBig}.}
\label{fig:nnruntime}
\end{figure}
\section{Implementation}
\subsection{Object Oriented Programming in MATLAB}
This neural net had to be made without the use of any built-in libraries \cite{Hicken18gradProjDes} and the code had to be modular \cite{Hicken18gradProjRubric}. To create code for a neural network subject to these constraints, the author decided to write their own neural net class in MATLAB. \par
The MATLAB class philipNeuralNet.m was written for this project. It has the parameters learningRate, enableBias, actFunSwitch, and Level. The learningRate parameter is the learning rate. The Level parameter has four parameters attached to it: W (weight), dW (weight derivative), z (input), and A (the vector for the layer). By having this class, hard coding the propagation of the neural net is avoided, and different neural net architectures can be tested with the same code. To implement a one-hidden-layer net, simply set sizeArr = [250; 10] and run the code. For a six-layer net, set sizeArr = [2500; 2000; 1500; 1000; 500; 10] and run the code.\par
The MATLAB class also has an activation function and a derivative of the activation function. The actFunSwitch variable allows either the Sigmoid or the Tanh function to be selected. Additionally the enableBias variable allows for biases to be used or not used in the code's execution. \par
Finally, it has an outputVector function that is simply an implementation of the neural net: it takes the input, runs it through the net, and returns the net's output. \par
\subsection{MATLAB Code}
The MATLAB code first initializes a neural net from given parameters. It obtains the MNIST data from a function\cite{usingMNIST}. It then uses the handleTrainNet function to train the net. This function implements batch training, using the forward and backward propagation functions. It then computes and displays the training error and the testing accuracy after a specified number of runs via the testAcc.m function. Once it has done this, it plots the accuracy of the neural net via the plotAcc.m function, generating the plots seen in the results section. The code used in this report may be downloaded from \url{https://github.com/PhilipHoddinott/DesignOpt/tree/master/designOpNN}\par
\section{Results}
\subsection{Simple Neural Net}
The best results were found for the simplest neural net examined: one hidden layer with 250 nodes, a learning rate of 0.1, and no biases. For this simple neural net, a test accuracy of 98.37\% was achieved. The first 1000 epochs of this net are visualized in figure \ref{fig:250_all}.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{250_results}
\caption{The results for the first 1000 epochs.}
\label{fig:250results}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{250_results_log}
		\caption{The results for the first 1000 epochs, scaled logarithmically.}
\label{fig:250resultslog}
\end{subfigure}
\caption{The results from the simple network for the first 1000 epochs.}
\label{fig:250_all}
\end{figure}
To get the 98.37\% accuracy it took approximately 12 hours of running the code and over three million neural net evaluations.
\subsection{Comparison of different hidden layer sizes}
According to Shure \cite{matlabNNBeg}, the optimal size for the hidden layer in a three-layer neural network for MNIST is 250 nodes. Comparing the results for the hidden layer sizes shown in figure \ref{fig:multiplelayers}, different sizes of hidden layer do not have a large effect on the accuracy. What does differ is the time it takes to run each net: the more nodes in a net, the longer it takes to train, as there are more operations to perform. Thus, if a neural net with 250 nodes has the same accuracy as a net with 800 nodes, the first net is preferable, as it will be trained faster.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{multipleLayers}
\caption{Net accuracy for different layer sizes.}
\label{fig:multiplelayers}
\end{figure}
\subsection{Multiple hidden layers}
The best accuracy occurred when both layers had a size of $250\times1$. For a network with two hidden layers of 250 nodes each, the best test accuracy was 96.49\%. The accuracy over the first 2000 epochs is seen in figure \ref{fig:250_all_l2}.
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{250_results_l2}
\caption{The results for the first 2000 epochs.}
\label{fig:250results_l2}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{250_results_log_l2}
		\caption{The results for the first 2000 epochs, scaled logarithmically.}
\label{fig:250resultslog_l2}
\end{subfigure}
\caption{The results from the two layer network for the first 2000 epochs.}
\label{fig:250_all_l2}
\end{figure}
The accuracy of two hidden layers was not as good as the accuracy of one hidden layer. No other configurations achieved an accuracy as high as this one.
\section{Conclusion}
The simple neural net achieved an accuracy of 98.37\%. This is on par with human recognition, and is about as accurate as a simple neural net can achieve. Higher accuracy rates are achieved via the use of convolutional neural networks, such as LeNet-5\cite{len5}. Due to the long run time of large multi-layered neural networks, they were not studied here, but they could provide more accurate identification without convolution.\par
What is interesting is that the best results occurred without biases and with one hidden layer. It was expected that adding more complexity to the neural net would increase the accuracy; however, this was not the case. As neural nets are very much a trial-and-error process, it is possible that these more complex nets would achieve better accuracy with more fiddling.\par
%--------------------------------------
% References
% -------------------------------------
\bibliographystyle{unsrt}
\bibliography{ref}
%-----------------------------------------------------------
% Appendix
%-----------------------------------------------------------
%\newpage
\singlespacing
% \section*{Appendix 1 -derivation of gibbs}
\section*{Appendix 1 - MATLAB code}
\addcontentsline{toc}{section}{Appendix}
\lstset{language=Matlab,%
%basicstyle=\color{red},
breaklines=true,%
morekeywords={matlab2tikz},
keywordstyle=\color{blue},%
morekeywords=[2]{1}, keywordstyle=[2]{\color{black}},
identifierstyle=\color{black},%
stringstyle=\color{mylilas},
commentstyle=\color{mygreen},%
showstringspaces=false,%without this there will be a symbol in the places where there is a space
numbers=left,%
numberstyle={\tiny \color{black}},% size of the numbers
numbersep=9pt, % this defines how far the numbers are from the text
emph=[1]{for,end,break},emphstyle=[1]\color{red}, %some words to emphasise
%emph=[2]{word1,word2}, emphstyle=[2]{style},
}
\subsection*{NN\_Master.m}
%C:\Users\Philip\Documents\GitHub\DesignOpt\designOpNN\latexCode
\lstinputlisting{C:/Users/Philip/Documents/GitHub/DesignOpt/designOpNN/latexCode/NN_Master.m}
\subsection*{philipNeuralNet.m}
\lstinputlisting{C:/Users/Philip/Documents/GitHub/DesignOpt/designOpNN/latexCode/philipNeuralNet.m}
\subsection*{testAcc.m}
\lstinputlisting{C:/Users/Philip/Documents/GitHub/DesignOpt/designOpNN/latexCode/testAcc.m}
\subsection*{plotAcc.m}
\lstinputlisting{C:/Users/Philip/Documents/GitHub/DesignOpt/designOpNN/latexCode/plotAcc.m}
%\lstinputlisting{C:/Users/Philip/Documents/GitHub/Thesis/Master_TLE.m}
%\lstinputlisting{C:/Users/Philip/Documents/GitHub/independent_study_fall_2018/Independat-Study-Fall-2018/Gibbs_Heck_master_loop_Latex.m}
%\subsection{Code}
%\subsection{Master\_TLE.m}
%\lstinputlisting{C:/Users/Philip/Documents/GitHub/Thesis/Master_TLE.m}
%\subsection{get\_SATCAT.m}
% \lstinputlisting{C:/Users/Philip/Documents/GitHub/Thesis/get_SATCAT.m}
% \subsection{get\_TLE\_from\_ID\_Manager.m}
%\lstinputlisting{C:/Users/Philip/Documents/GitHub/Thesis/get_TLE_from_ID_Manager.m}
%\subsection{get\_TLE\_from\_NorID.m}
%\lstinputlisting{C:/Users/Philip/Documents/GitHub/Thesis/get_TLE_from_NorID.m}
%\lstinputlisting{get_SATCAT.m}
%Thanks for Paul McKee who started this template. It seems to have good matlab code viwing
\end{document}
\documentclass[10pt, a4paper, twoside]{basestyle}
\usepackage[Mathematics]{semtex}
%%%% Shorthands.
%%%% Title and authors.
\title{%
\textdisplay{%
An Introduction to Runge-Kutta Integrators}%
}
\author{Robin~Leroy (eggrobin)}
\begin{document}
\maketitle
In this post I shall assume understanding of the concepts described in chapter~8 (Motion) as well as sections 11--4 and 11--5 (Vectors and Vector algebra) of chapter~11 of Richard Feynman's \emph{Lectures on Physics}.
\section{Motivation}
We want to be able to predict the position $\vs\of t$ as a function of time of a spacecraft (without engines) around a fixed planet of mass $M$. In order to do this, we recall that the velocity is given by
\[\vv = \deriv t \vs\]
and the acceleration by
\[\va = \deriv t \vv = \deriv[2] t \vs\text.\]
We assume that the mass of the spacecraft is constant and that the planet sits at the origin of our reference frame. Newton's law of universal gravitation tells us that the magnitude (the length) of the acceleration vector will be \[
a=\frac{G M}{s^2}\text,
\]
where $s$ is the length of $\vs$, and that the acceleration will be directed towards the planet, so that\[
\va=-\frac{G M}{s^2} \frac{\vs}{s}\text.
\]
We don't really care about the specifics, but we see that this is a function of $\vs$. We'll write it $\va\of\vs$.
Putting it all together we could rewrite this as
\[\deriv[2] t \vs = \va\of\vs\]
and go ahead and solve this kind of problem, but we don't like having a second derivative. Instead we go back to our first order equations and we write both of them down,
\[
\begin{dcases}
\deriv t \vs = \vv \\
\deriv t \vv = \va\of\vs
\end{dcases}\text.
\]
Let us define a vector $\vy$ with 6 entries instead of 3,
\[\vy = \tuple{\vs, \vv} = \tuple{s_x, s_y, s_z, v_x, v_y, v_z}\text.\]
Similarly, define a function $\vf$ as follows:
\[\vf\tuple{\vy} = \tuple{\vv, \va\of\vs}\text.\]
Our problem becomes
\[\deriv t \vy = \tuple{\deriv t \vs, \deriv t \vv} = \tuple{\vv, \va\of\vs} = \vf\of\vy\text.\]
So we have gotten rid of that second derivative, at the cost of making the problem 6-dimensional instead of 3-dimensional.
\section{Ordinary differential equations}
We are interested in computing solutions to equations of the form
\[\deriv t \vy = \vf\of{t,\vy}\text.\]
Such an equation is called an \emph{ordinary differential equation} (\textsc{ode}). The function $\vf$ is called the \emph{right-hand side} (\textsc{rhs}).
Recall that if the right-hand side didn't depend on $\vy$, the answer would be the integral,
\[\deriv t \vy = \vf\of t \Implies \vy = \int{} \vf\of t \diffd t\text.\]
General \textsc{ode}s are a generalisation of this, so the methods we use to compute their solutions are called \emph{integrators}.
In the case where the right-hand side doesn't depend on $t$ (but depends on $\vy$), as was the case in the previous section, the equation becomes
\[\deriv t \vy = \vf\of{\vy}\text.\]
For future reference, we call such a right-hand side \emph{autonomous}.
In order to compute a particular solution (a particular trajectory of our spacecraft), we need to define some initial conditions (the initial position and velocity of our spacecraft) at $t=t_0$. We write them as\[
\vy\of{t_0} = \vy_0\text.
\]
The \textsc{ode} together with the initial conditions form the \emph{initial value problem} (\textsc{ivp})
\[
\begin{dcases}
\deriv t \vy = \vf\of{t,\vy} \\
\vy\of{t_0} = \vy_0
\end{dcases}\text.
\]
\section{Euler's method}
As we want to actually solve the equation using a computer, we can't compute $\vy\of t$ for all values of $t$. Instead we approximate $\vy\of{t}$ at discrete time steps.
How do we compute the first point $\vy_1$, the approximation for $\vy\of{t_0 + \increment t}$? By definition of the derivative, we have \[
\lim_{\conv{\increment t}{0}} \frac{\vy\of{t_0+\increment t}-\vy\of{t_0}}{\increment t} = \vf\of{t_0,\vy_0}\text.
\]
This means that if we take a sufficiently small $\increment t$, we have
\[
\frac{\vy\of{t_0 + \increment t}-\vy\of{t_0}}{\increment t} \approx \vf\of{t_0,\vy_0}\text,
\]
where the approximation gets better as $\increment t$ gets smaller.
Our first method for approximating the solution is therefore to compute\[
\vy_1 = \vy_0 + \vf\of{t_0,\vy_0} \increment t \text.\]
Note that this is an approximation:\[
\vy_1\approx\vy\of{t_0+\increment t}\text.
\]
For the rest of the solution, we just repeat the same method, yielding \[
\vy_{n+1} =\vy_n + \vf\of{t_n,\vy_n}\increment t\text.\]
Again these are approximations:\[
\vy_n\approx\vy\of{t_0+n\increment t}\text.
\]
This is called \emph{Euler's method}, after the Swiss mathematician and physicist Leonhard Euler (1707--1783). A good visualisation of this method, as well as a geometric interpretation, can be found in the \emph{Wikipedia} article \url{http://en.wikipedia.org/wiki/Euler_method}.
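As a concrete example, consider the scalar problem $\deriv t y = y$ with $y\of{t_0} = 1$, whose exact solution satisfies $y\of{t_0 + \increment t} = e^{\increment t}$. With $\increment t = 0.1$, Euler's method gives\[
y_1 = y_0 + y_0 \increment t = 1.1\text,
\]
whereas $e^{0.1} \approx 1.10517$, so the error of this single step is about $5\times 10^{-3}$. Halving the time step to $\increment t = 0.05$ gives $y_1 = 1.05$ against $e^{0.05}\approx 1.05127$, an error of about $1.3\times 10^{-3}$, roughly four times smaller.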
We want to know two things: how good our approximation is, and how much we need to reduce $\increment t$ in order to make it better.
In order to do that, we use Taylor's theorem.
\subsection*{Taylor's theorem}
Recall that if $\deriv t \vy$ is constant, \[\vy\of{t_0+\increment t} = \vy\of{t_0} + \deriv t \vy \of{t_0} \increment t \text.\]
If $\deriv[2] t \vy $ is constant, \[\vy\of{t_0+\increment t} = \vy\of{t_0} + \deriv t \vy\of{t_0} \increment t + \deriv[2] t \vy\of{t_0} \frac{\increment t^2 }{2}\text.\]
In general, if we assume the $n$th derivative to be constant,\[
\vy\of{t_0+\increment t}=\vy\of{t_0} + \sum{j=1}[n] \deriv[j] t \vy \of{t_0} \frac{\increment t^j}{\Factorial j} \text,\]
where $\Factorial j = 1 \times 2 \times 3 \times \dotsb \times j$ is the factorial of $j$.
Taylor's theorem roughly states that this is a good approximation, which gets better as $n$ gets higher. Formally, if $\vy$ is differentiable $n$ times, for sufficiently small $\increment t$,
\begin{equation}
\vy\of{t_0+\increment t}=\vy\of{t_0} + \sum{j=1}[n-1] \deriv[j] t \vy \of{t_0} \frac{\increment t^j}{\Factorial j} + \BigO\of{\increment t ^ n} \text,
\label{TaylorLandau}
\end{equation}
where $\BigO\of{\increment t ^ n}$ is read ``big $\BigO$ of $\increment t ^n$''. It is not a specific function, but stands for ``some function whose magnitude is bounded by $K \increment t ^ n$ for some constant $K$ as $\increment t$ goes to $0$''.
This big $\BigO$ notation indicates the quality of the approximation: it represents error terms that vanish at least as fast as $\increment t ^ n$.
There is a version of Taylor's theorem for multivariate functions\footnote{Functions of a vector.}; the idea is the same, but stating it in its general form is complicated. Instead let us look at the cases we will need here.
For a function $\vf\of{t,\vy}$, we have the analogue to the $n=1$ version of Taylor's theorem:
\begin{equation}
\vf\of{t_0,\vy_0+\increment\vy} = \vf\of{t_0,\vy_0} + \BigO\of{\norm{\increment\vy}}
\label{TaylorF1}
\end{equation}
and the analogue to the $n=2$ version:
\begin{equation}
\vf\of{t_0,\vy_0+\increment\vy} = \vf\of{t_0,\vy_0} + \deriv\vy\vf\of{t_0,\vy_0}\increment\vy + \BigO\of{\norm{\increment\vy}^2}\text.
\label{TaylorF2}
\end{equation}
Knowing what $\deriv\vy\vf\of{t_0,\vy_0}$ actually means is not important here, it is just something that you can multiply a vector with to get another vector.\footnote{It is a linear map, so if you know what a matrix is, you can see it as one.}
\subsection*{Error analysis}
Armed with this theorem, we can look back at Euler's method. We computed the approximation for $\vy\of{t_0 + \increment t}$ as \[
\vy_1 = \vy_0 + \vf\of{t_0,\vy_0} \increment t = \vy\of{t_0} + \deriv t \vy\of{t_0} \increment t \text.\]
By definition of the derivative, we have seen that as $\increment t$ approaches $0$, $\vy_1$ will become a better approximation for $\vy\of{t+\increment t}$. However, when we reduce the time step, we need more steps to compute the solution over the same duration. What is the error when we reach some $t_{\text{end}}$? There are obviously $\frac{t_{\text{end}} - t_0}{\increment t}$ steps, so we should multiply the error on a single step by $\frac{t_{\text{end}} - t_0}{\increment t}$. This means that the error on a single step needs to vanish\footnote{For the advanced reader: uniformly.} \emph{faster than} $\increment t$.
In order to compute the magnitude of the error, we'll use Taylor's theorem for $n=2$. We have, for sufficiently small $\increment t$, \begin{align*}
\norm{\vy\of{t_0+\increment t} - \vy_1}
&= \norm{\vy\of{t_0} + \deriv t \vy\of{t_0} \increment t + \BigO\of{\increment t^2}
- \vy\of{t_0} - \deriv t \vy\of{t_0} \increment t} \\
&= \norm{\BigO\of{\increment t^2}}
\leq K \increment t^2
\end{align*}
for some constant $K$ which does not depend on $\increment t$ (recall that this is the definition of the big $\BigO$ notation). This means that the error on \emph{one step} behaves as the square of the time step: it is divided by four when the time step is halved.
It follows that the error in the approximation at $t_{\text{end}}$ should intuitively behave as $\frac{t_{\text{end}} - t_0}{\increment t} \increment t^2 = \pa{t_{\text{end}} - t_0} \increment t$, and indeed this is the case. In order to properly show that, some additional assumptions must be made, the description of which is beyond the scope of this introduction.\footnote{For the advanced reader: the solution has to be Lipschitz continuous and its second derivative has to be bounded.}
Thus, the conclusion about Euler's method is that when computing the solution over a fixed duration $t_{\text{end}} - t_0$, the error behaves like $\increment t$, \idest, linearly: halving the time step will halve the error. We call Euler's method a \emph{first-order method}.
We remark that we can rewrite Euler's method as follows.
\begin{align*}
\vk_1 &= \vf\of{t_0, \vy_0}\text;\\
\vy_1 &= \vy_0 + \vk_1 \increment t\text.
\end{align*}
This will be useful in the wider scope of Runge-Kutta integrators.
Can we do better than first-order? In order to answer this question, we note that the reason why the error in Euler's method was linear for a fixed duration is that it was quadratic for a single time step. The reason why it was quadratic for a single time step is that our approximation matched the first derivative term in the Taylor expansion. If we could match higher-order terms in the expansion, we would get a higher-order method. Specifically, if our approximation matches the Taylor expansion up to and including the $k$th derivative, we'll get a $k$th order method.
\section{The midpoint method}
How do we match higher derivatives? We don't know what they are: the first derivative is given to us by the problem (it's $\vf\of{t, \vy\of t}$ at time $t$), the other ones are not.
However, if we look at $\vg\of t = \vf\of{t, \vy\of t}$ as a function of $t$,
we have
\begin{align*}
\vg &= \deriv t \vy \\
\deriv t \vg &= \deriv[2] t \vy\text.
\end{align*}
Of course, we can't directly compute the derivative of $\vg$, because we don't even know what $\vg$ itself looks like: that would entail knowing $\vy$, which is what we are trying to compute.
However, let us assume for a moment that we could compute $\vg\of{t_0 + \frac{\increment t}{2}}$. Using Taylor's theorem on $\vg$,
\[
\vg\of{t_0 + \frac{\increment t}{2}} = \vg\of{t_0} + \deriv t \vg\of{t_0} \frac{\increment t}{2} + \BigO\of{\increment t^2}\text.\]
Substituting $\vg$ yields
\[
\vg\of{t_0 + \frac{\increment t}{2}}
= \deriv t \vy \of{t_0} + \deriv[2] t \vy\of{t_0} \frac{\increment t}{2} + \BigO\of{\increment t^2}\text.
\]
This looks like the first and second derivative terms in the Taylor expansion of $\vy$. Therefore, the following expression would yield a third-order approximation for the step $\vy\of{t+\increment t}$ (and thus a second-order method), if only we could compute it: \[
\hat{\vy}_1 = \vy_0 + \vg\of{t_0 + \frac{\increment t}{2}} \increment t = \vy_0 + \vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}}} \increment t\text.\]
Indeed,
\begin{alignat*}{2}
\hat{\vy}_1
&= \vy_0 + \deriv t \vy \of{t_0} \increment t + \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2} + \BigO\of{\increment t^3} \\
&= \vy\of{t+\increment t} + \BigO\of{\increment t^3} \text.
\end{alignat*}
Unfortunately, we can't compute $\vg\of{t_0 + \frac{\increment t}{2}}$ exactly, because for that we would need to know $\vy\of{t_0 + \frac{\increment t}{2}}$. Instead, we try using a second-order approximation for it, obtained using one step of Euler's method, namely\[
\vy_0 + \vf\of{t_0, \vy_0} \frac{\increment t}{2} = \vy\of{t_0 + \frac{\increment t}{2}} + \BigO\of{\increment t^2} \text.
\]
We use it to get an approximation $\vy_1$ of $\hat{\vy}_1$.\[
\vy\of{t_0 + \increment t} \approx \hat{\vy}_1 \approx \vy_1 = \vy_0 + \vf\of{t_0 + \frac{\increment t}{2}, \vy_0 + \frac{\increment t}{2} \vf\of{t_0, \vy_0}} \increment t
\]
In order to show that $\vy_1$ is a third-order approximation for $\vy\of{t+\increment t}$, we show that it is a third-order approximation for $\hat{\vy}_1$.
In order to do that, we use our error bound on the step of Euler's method and compute the multivariate first-order Taylor expansion of $\vf$ in its second argument (\ref{TaylorF1}),
\begin{align*}
\vf\of{t_0 + \frac{\increment t}{2}, \vy_0 + \frac{\increment t}{2} \vf\of{t_0, \vy_0}} &= \vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}} + \BigO\of{\increment t^2}}\\
&= \vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}}} + \BigO\of{\increment t^2}\text.
\end{align*}
Substituting yields
\begin{align*}
\vy_1
&= \vy_0 +\vf\of{t_0 + \frac{\increment t}{2}, \vy_0 + \frac{\increment t}{2} \vf\of{t_0, \vy_0}} \increment t \\
&= \vy_0 +\vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}}} \increment t + \BigO\of{\increment t^3} \\
&= \hat{\vy}_1 + \BigO\of{\increment t^3} \\
&= \vy\of{t + \increment t} + \BigO\of{\increment t^3}\text.
\end{align*}
The method is third-order on a single step, so it is a second-order method.
The idea here was to say that the derivative at $t_0$ is not a good enough approximation for the behaviour between $t_0$ and $t_0 + \increment t$, and to compute the derivative halfway through $\vg\of{t_0 + \frac{\increment t}{2}}$ instead. In order to do that, we had to use a lower-order method (our Euler half-step).
A good visualisation of this method, as well as a geometric interpretation, can be found on the \emph{Wikipedia} article \url{http://en.wikipedia.org/wiki/Midpoint_method}.
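Returning to the scalar example $\deriv t y = y$, $y\of{t_0} = 1$, $\increment t = 0.1$: the Euler half-step gives $y_0 + y_0 \frac{\increment t}{2} = 1.05$, the derivative approximated there is $1.05$, and so\[
y_1 = y_0 + 1.05 \increment t = 1.105\text.
\]
Compared with $e^{0.1}\approx 1.10517$, the error is about $1.7\times 10^{-4}$, roughly thirty times smaller than the $5\times 10^{-3}$ of Euler's method on the same step.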
Again we remark that we can rewrite the midpoint method as follows.
\begin{align*}
\vk_1 &= \vf\of{t_0, \vy_0}\text;\\
\vk_2 &= \vf\of{t_0, \vy_0 + \vk_1\frac{\increment t}{2}}\text;\\
\vy_1 &= \vy_0 + \vk_2 \increment t\text.
\end{align*}
This will be useful in the wider scope of Runge-Kutta integrators.
\section{Heun's method}
Before we move on to the description of general Runge-Kutta methods, let us look at another take on second-order methods.
Instead of approximating the behaviour between $t_0$ and $t_0 + \increment t$ by the derivative halfway through, what if we averaged the derivatives at the end and at the beginning?
\[
\hat{\vy}_1=\vy_0 + \frac{\vg\of{t_0} + \vg\of{t_0 + \increment t}}{2} \increment t \text.
\]
Let us compute the error:
\begin{align*}
\hat{\vy}_1
&=\vy_0 + \frac{\deriv t \vy \of{t_0} + \deriv t \vy \of{t_0} + \deriv[2] t \vy\of{t_0} \increment t + \BigO\of{\increment t^2}}{2} \increment t \\
&= \vy_0 + \deriv t \vy \of{t_0} \increment t + \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2} + \BigO\of{\increment t^3} = \vy\of{t_0 + \increment t} + \BigO\of{\increment t^3}\text.
\end{align*}
This is indeed a third-order approximation for the step, so this would give us a second-order method. As in the midpoint method, we can't actually compute $\vg\of{t_0 + \increment t}$. Instead we approximate it using Euler's method, so that our step becomes:
\[
\hat{\vy}_1\approx\vy_1=\vy_0
+ \frac{\vf\of{t_0,\vy_0} + \vf\of{t_0 + \increment t,\vy_0 + \vf\of{t_0,\vy_0} \increment t}}{2} \increment t\text.
\]
We leave checking that the approximation $\hat{\vy}_1\approx\vy_1$ is indeed third-order as an exercise to the reader.
This method is called \emph{Heun's method}, after German mathematician Karl Heun (1859--1929).
A good visualisation of this method, as well as a geometric interpretation, can be found on the \emph{Wikipedia} article \url{http://en.wikipedia.org/wiki/Heun's_method#Description}.
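On the same scalar example ($\deriv t y = y$, $y\of{t_0} = 1$, $\increment t = 0.1$), the derivative at the beginning is $1$, Euler's method predicts $1.1$ at the end of the step, where the derivative is then $1.1$, and averaging the two gives\[
y_1 = 1 + \frac{1 + 1.1}{2} \cdot 0.1 = 1.105\text,
\]
which happens to coincide with the value given by the midpoint method on this particular problem.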
It looks like Heun's method is slower than the midpoint method, as there are three evaluations of $\vf$. However, two of those are with the same arguments, so we can rewrite things as follows:
\begin{align*}
\vk_1 &= \vf\of{t_0,\vy_0} \text{ is our approximation for $\vg\of{t_0}$;} \\
\vk_2 &= \vf\of{t_0 + \increment t,\vy_0 + \vk_1 \increment t} \text{ is our approximation for $\vg\of{t_0 + \increment t}$;} \\
\vy_1 &= \vy_0 + \frac{\vk_1 + \vk_2}{2}\increment t \text{ is our approximation for $\vy\of{t + \increment t}$.}
\end{align*}
This process can be generalised, yielding so-called Runge-Kutta methods.
\section{Runge-Kutta methods}
In a \emph{Runge-Kutta method},\footnote{Named after German mathematicians Carl David Tolmé Runge (1856--1927) and Martin Wilhelm Kutta (1867--1944).} we compute the step $\vy_1\approx\vy\of{t_0+\increment t}$ as a linear approximation\[
\vy_1 = \vy_0 + \vgl \increment t\text.
\]
The idea is that we want to use a weighted average (with weights $b_i$) of the derivative $\vg$ of $\vy$ at $s$ points between $t_0$ and $t_0 + \increment t$ as our approximation $\vgl$, \[
\hat{\vy}_1 = \vy_0 + \pa{b_1 \vg\of{t_0} + b_2 \vg\of{t_0 + c_2\increment t} + \dotsb + b_s \vg\of{t_0 + c_s\increment t}} \increment t\text,
\]
but we cannot do that because we do not know how to compute $\vg$; we only know how to compute $\vf$. Instead we compute \emph{increments} $\vk_i$ which approximate the derivative, $\vk_i\approx\vg\of{t_0 + c_i \increment t}$, and we take a weighted average of these as our overall linear approximation:\begin{align*}
\vgl &= b_1\vk_1 + b_2\vk_2 + \dotsb + b_s\vk_s\text,\\
\vy_1 &= \vy_0 + \pa{b_1\vk_1 + b_2\vk_2 + \dotsb + b_s\vk_s} \increment t\text.
\end{align*}
In order to compute each increment, we can use the previous ones to construct an approximation that has high enough order.
Surprisingly, we will see that the approximation \[
b_1\vk_1 + b_2\vk_2 + \dotsb + b_s\vk_s \approx b_1 \vg\of{t_0} + b_2 \vg\of{t_0 + c_2\increment t} + \dotsb + b_s \vg\of{t_0 + c_s\increment t}
\] is generally better than the individual approximations $\vk_i\approx\vg\of{t_0 + c_i \increment t}$.
\subsection*{Definition}
A Runge-Kutta method is defined by its \emph{weights} $\vb=\tuple{b_1,\dotsc,b_s}$, its \emph{nodes} $\vc=\tuple{c_1,\dotsc,c_s}$ and its Runge-Kutta matrix\[
\matA=
\begin{pmatrix}
a_{11} & \cdots & a_{1s} \\
\vdots & \ddots & \vdots \\
a_{s1} & \cdots & a_{ss}
\end{pmatrix}
\text.
\]
It is typically written as a \emph{Butcher tableau}:\[
\begin{array}{c | c c c c c}
c_1 & a_{11} & \cdots & a_{1s} \\
\vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\
\hline
& b_{1} & \cdots & b_{s}
\end{array}
\]
We will only consider \emph{explicit} Runge-Kutta methods, \idest, those where $\matA$ is strictly lower triangular, so that the Butcher tableau is as follows (blank spaces in $\matA$ are zeros).\[
\begin{array}{l | c c c c c}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & \vdots & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
& b_{1} & b_{2} & \cdots & b_{s-1} & b_{s}
\end{array}
\]
The step is computed using the weighted sum of the increments as a linear approximation,\[
\vy_1 = \vy_0 + \pa{b_1\vk_1 + b_2\vk_2 + \dotsb + b_s\vk_s} \increment t\text,
\]
where the increments are computed in $s$ \emph{stages} as follows:\footnote{\emph{Caveat lector}: $\vk_i$ is often defined as $\increment t\vf\of{t_0 + c_i\increment t, y_0 + \increment t \pa{a_{i1}\vk_1 + a_{i2}\vk_2 + \dotsb + a_{i,i-1}\vk_{i-1}}}$. In this case it is an approximation of the increment using the derivative at $t_0 + c_i\increment t$ rather than an approximation of the derivative itself.}
\begin{align*}
\vk_1 &= \vf\of{t_0, y_0} \\
\vk_2 &= \vf\of{t_0 + c_2\increment t, y_0 + a_{21}\vk_1 \increment t } \\
\vk_3 &= \vf\of{t_0 + c_3\increment t, y_0 + \pa{a_{31}\vk_1 + a_{32}\vk_2} \increment t } \\
&\vdots\\
\vk_i &= \vf\of{t_0 + c_i\increment t, y_0 + \pa{a_{i1}\vk_1 + a_{i2}\vk_2 + \dotsb + a_{i,i-1}\vk_{i-1}} \increment t} \\
&\vdots\\
\vk_s &= \vf\of{t_0 + c_s\increment t, y_0 + \pa{a_{s1}\vk_1 + a_{s2}\vk_2 + \dotsb + a_{s,s-1}\vk_{s-1}} \increment t }\text.
\end{align*}
Recall that $\vk_i$ is an approximation for $\vg\of{t_0 + c_i\increment t}$, the derivative of $\vy$ at $t_0 + c_i\increment t$, so that the\[
y_0 + \pa{a_{i1}\vk_1 + a_{i2}\vk_2 + \dotsb + a_{i,i-1}\vk_{i-1}} \increment t
\] are themselves linear approximations obtained by weighted averages of approximated derivatives.
Note that each $\vk_i$ only depends on the $\vk_j$ for $j<i$, so that they can be computed in order.\footnote{This is why we call the method \emph{explicit}. In an implicit method, each $k_i$ can depend on all the $k_j$, so that you need to solve a system of algebraic equations in order to compute them.}
Note that all of the methods described above were Runge-Kutta methods. We invite the reader to check that the $\vk_i$ described in the relevant sections correspond to the following tableaux:
Euler's method has Butcher tableau\[
\begin{array}{c | c}
0 & \\
\hline
& 1
\end{array}\text,
\]
the midpoint method is described by\[
\begin{array}{c | c c}
0 & \\
\frac 1 2 & \frac 1 2 & \\
\hline
& 0 & 1
\end{array}
\]
and Heun's method by\[
\begin{array}{c | c c}
0 & \\
1 & 1 & \\
\hline
& \frac 1 2 & \frac 1 2
\end{array}\text.
\]
\subsection*{An example: Kutta's third-order method}
We will now consider the Runge-Kutta method given by the following Butcher tableau.\[
\begin{array}{c | c c c}
0 & \\
\frac 1 2 & \frac 1 2 & \\
1 & -1 & 2 \\
\hline
& \frac 1 6 & \frac 2 3 & \frac 1 6
\end{array}
\]
We have \[
\vy_1 = \vy_0 + \pa{\frac{\vk_1}{6} + \frac{2\vk_2}{3} + \frac{\vk_3}{6}}\increment t \text.
\]
This is an approximation for\[
\hat{\vy}_1= \vy_0 + \pa{\frac{\vg\of{t_0}}{6}
+ \frac{2\vg\of{t_0 + \frac{\increment t}{2}}}{3} + \frac{\vg\of{t_0 + \increment t}}{6}} \increment t
\]
Let us look at the order of $\hat{\vy}_1$ as an approximation of $\vy\of{t_0 + \increment t}$.
\begin{align*}
\hat{\vy}_1&= \vy_0 + \pa{\frac{\vg\of{t_0}}{6}
+ \frac{2\vg\of{t_0 + \frac{\increment t}{2}}}{3} + \frac{\vg\of{t_0 + \increment t}}{6}} \increment t \\
&= \vy_0 + \frac{1}{6}\deriv t \vy \of{t_0} \increment t \\
& \quad + \frac{2}{3}\pa{\deriv t \vy \of{t_0} + \deriv[2] t \vy \of{t_0} \frac{\increment t}{2} + \deriv[3] t \vy \of{t_0} \frac{\increment t^2}{8} } \increment t \\
& \quad + \frac{1}{6}\pa{\deriv t \vy \of{t_0} + \deriv[2] t \vy \of{t_0} \increment t + \deriv[3] t \vy \of{t_0} \frac{\increment t^2}{2}} \increment t \\
&\quad + \BigO\of{\increment t^3} \increment t \\
&= \vy_0 + \deriv t \vy \of{t_0} \increment t + \deriv[2] t \vy \of{t_0} \frac{\increment t^2}{2} + \deriv[3] t \vy \of{t_0} \frac{\increment t^3}{6} + \BigO\of{\increment t^4} \\
&= \vy\of{t_0 + \increment t} + \BigO\of{\increment t^4}\text,
\end{align*}
so it looks like this could be a third-order method ($\hat{\vy}_1\approx\vy\of{t_0 + \increment t}$ is a fourth-order approximation).
In order for that to work however, we need $\vy_1\approx\hat{\vy}_1$ to be fourth-order, in other words, we need the difference between \[
\frac{\vg\of{t_0}}{6} + \frac{2\vg\of{t_0 + \frac{\increment t}{2}}}{3} + \frac{\vg\of{t_0 + \increment t}}{6}
\]and\[
\frac{\vk_1}{6} + \frac{2\vk_2}{3} + \frac{\vk_3}{6}
\]to be $\BigO\of{\increment t^3}$.
We have $\vk_1=\vg\of{t_0}$.
If we compute $\vk_2$, we see that it is \emph{only} a second-order approximation for $\vg\of{t_0 + \frac{\increment t}{2}}$,
\marginnote{The fourth line follows from (\ref{TaylorF2}), the fifth by taking the Taylor expansion of $\deriv\vy\vf\of{t,\vy\of{t}}$ as a function of $t$.}
\begin{alignat*}{2}
\vk_2 &=\vf\of{t_0 + \frac{\increment t}{2}, \vy_0 + \vk_1 \frac{\increment t}{2}} \\
&=\vf\of{t_0 + \frac{\increment t}{2}, \vy_0 + \deriv t \vy \of{t_0} \frac{\increment t}{2}} \\
&=\vf\of{t_0 + \frac{\increment t}{2},
\vy\of{t_0 + \frac{\increment t}{2}}
- \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{8}
+ \BigO\of{\increment t^3} } \\
&=\vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}} }
- \deriv \vy\vf \of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}} }
\deriv[2] t \vy\of{t_0} \frac{\increment t^2}{8} + \BigO\of{\increment t^3} \\
&= \vf\of{t_0 + \frac{\increment t}{2}, \vy\of{t_0 + \frac{\increment t}{2}} }
- \deriv \vy\vf \of{t_0, \vy\of{t_0}}
\deriv[2] t \vy\of{t_0} \frac{\increment t^2}{8} + \BigO\of{\increment t^3} \\
&= \vg\of{t_0 +\frac{\increment t}{2}}
- \deriv \vy\vf \of{t_0, \vy\of{t_0}}
\deriv[2] t \vy\of{t_0} \frac{\increment t^2}{8} + \BigO\of{\increment t^3} \text,
\end{alignat*}
so in order for the method to be third-order, we need this second-order term to be \emph{cancelled} by the error in $\vk_3$. We can compute this error, \begin{alignat*}{2}
\vk_3 &=\vf\of{t_0 + \increment t, \vy_0 - \vk_1 \increment t + 2 \vk_2 \increment t} \\
&=\vf\of{t_0 + \increment t,
\vy_0 - \deriv t \vy \of{t_0} \increment t
+ 2\vf\of{t_0 + \frac{\increment t}{2},
\vy\of{t_0 + \frac{\increment t}{2}} } \increment t
+ \BigO\of{\increment t^3} } \\
&=\vf\of{t_0 +\increment t,
\vy_0 - \deriv t \vy \of{t_0} \increment t
+ 2 \deriv t \vy\of{t_0 + \frac{\increment t}{2}} \increment t
+ \BigO\of{\increment t^3} } \\
&=\vf\of{t_0 +\increment t,
\vy_0 - \deriv t \vy \of{t_0} \increment t
+ 2 \deriv t \vy\of{t_0} \increment t
+ \deriv[2] t \vy\of{t_0} \increment t^2
+ \BigO\of{\increment t^3} } \\
&=\vf\of{t_0 +\increment t,
\vy\of{t_0 + \increment t}
+ \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2}
+ \BigO\of{\increment t^3} } \\
&=\vf\of{t_0 +\increment t, \vy\of{t_0 + \increment t} }
+ \deriv \vy\vf \of{t_0, \vy\of{t_0}} \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2}
+ \BigO\of{\increment t^3} \\
&= \vg\of{t_0 +\increment t}
+ \deriv \vy\vf \of{t_0, \vy\of{t_0}} \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2}
+ \BigO\of{\increment t^3} \text,
\end{alignat*}
and indeed the second-order error term from $\vk_3$ cancels with the one from $\vk_2$ in the weighted average, so that for the whole step we get:
\begin{alignat*}{2}
\vy_1 &= \vy_0 + \pa{\frac{\vk_1}{6} + \frac{2\vk_2}{3} + \frac{\vk_3}{6}} \increment t \\
&= \vy_0 + \bigg( \frac{\vg\of{t_0}}{6} \\
&\quad + \frac{2\vg\of{t_0 + \frac{\increment t}{2}}}{3} - \frac{2}{3} \deriv \vy\vf \of{t_0, \vy\of{t_0}} \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{8} \\
&\quad + \frac{\vg\of{t_0 + \increment t}}{6} + \frac{1}{6} \deriv \vy\vf \of{t_0, \vy\of{t_0}} \deriv[2] t \vy\of{t_0} \frac{\increment t^2}{2} \\
&\quad + \BigO\of{\increment t ^ 3} \bigg) \increment t\\
&= \vy_0 + \increment t \pa{\frac{\vg\of{t_0}}{6} + \frac{2\vg\of{t_0 + \frac{\increment t}{2}}}{3} + \frac{\vg\of{t_0 + \increment t}}{6}} + \BigO\of{\increment t ^ 4} \\
&= \hat{\vy}_1 + \BigO\of{\increment t ^ 4} \\
&= \vy\of{t_0 + \increment t} + \BigO\of{\increment t^4}\text.
\end{alignat*}
The error on the step is fourth-order, and thus the method is accurate to third order.
\subsection*{Closing remarks}
Fiddling with Taylor's theorem in order to find a high-order method by trying to make low-order terms cancel out is hard and involves a lot of guesswork. This is where the Runge-Kutta formulation shines: one can check the order of the method by seeing whether the coefficients $\matA$, $\vb$, $\vc$ satisfy the corresponding \emph{order conditions}.
A method has order $1$ if and only if it satisfies\[
\sum{i=1}[s]b_i =1\text.\]
It has order $2$ if and only if, in addition to the above equation, it satisfies\[
\sum{i=1}[s]b_ic_i =\frac 1 2 \text.\]
It has order $3$ if and only if, in addition to satisfying the above two equations, it satisfies\[
\begin{dcases}
\sum{i=1}[s]b_ic_i^2 =\frac 1 3 \\
\sum{i=1}[s]\sum{j=1}[s]b_ia_{ij}c_j =\frac 1 6
\end{dcases}\text.
\]
It has order $4$ if and only if, in addition to satisfying the above four equations, it satisfies\[
\begin{dcases}
\sum{i=1}[s]b_ic_i^3 =\frac 1 4 \\
\sum{i=1}[s]\sum{j=1}[s]b_ic_ia_{ij}c_j =\frac 1 8 \\
\sum{i=1}[s]\sum{j=1}[s]b_ia_{ij}c_j^2 =\frac {1} {12} \\
\sum{i=1}[s]\sum{j=1}[s]\sum{k=1}[s]b_ia_{ij}a_{jk}c_k =\frac {1} {24} \\
\end{dcases}\text.
\]
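As a concrete illustration, Kutta's third-order method from the previous section satisfies all of the conditions up to order three and violates one of the order-four conditions, which is easy to check numerically (a sketch; plain array arithmetic only):
\begin{verbatim}
import numpy as np

# Kutta's third-order method.
A = np.array([[0, 0, 0],
              [1/2, 0, 0],
              [-1, 2, 0]])
b = np.array([1/6, 2/3, 1/6])
c = np.array([0, 1/2, 1])

assert np.isclose(b.sum(), 1)        # order 1
assert np.isclose(b @ c, 1/2)        # order 2
assert np.isclose(b @ c**2, 1/3)     # order 3
assert np.isclose(b @ (A @ c), 1/6)  # order 3
# At least one order-4 condition fails, so the method is exactly third order.
assert not np.isclose(b @ (c * (A @ c)), 1/8)
\end{verbatim}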
The number of order conditions explodes with increasing order, and they are not easy to solve. There are cases where only numerical values are known for the coefficients.
We leave the following as an exercise to the reader: characterise all explicit second-order methods with two stages ($s=2$). Check your result by computing Taylor expansions.
\end{document} | {
"alphanum_fraction": 0.6604165968,
"avg_line_length": 66.546875,
"ext": "tex",
"hexsha": "6cde2f4343b9a6166459bf853719f76490465423",
"lang": "TeX",
"max_forks_count": 92,
"max_forks_repo_forks_event_max_datetime": "2022-03-21T03:35:37.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-11T23:08:58.000Z",
"max_forks_repo_head_hexsha": "64c4c6c124f4744381b6489e39e6b53e2a440ce9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pleroy/Principia",
"max_forks_repo_path": "documentation/ODEs and Runge-Kutta integrators.tex",
"max_issues_count": 1019,
"max_issues_repo_head_hexsha": "2191c660cf22f60f8009d0a3aaa6ebd8a5d647d9",
"max_issues_repo_issues_event_max_datetime": "2022-03-29T21:02:15.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-01-03T11:42:27.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "WC12366/Principia",
"max_issues_repo_path": "documentation/ODEs and Runge-Kutta integrators.tex",
"max_line_length": 626,
"max_stars_count": 565,
"max_stars_repo_head_hexsha": "2191c660cf22f60f8009d0a3aaa6ebd8a5d647d9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "WC12366/Principia",
"max_stars_repo_path": "documentation/ODEs and Runge-Kutta integrators.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-22T12:04:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-04T21:47:18.000Z",
"num_tokens": 10539,
"size": 29813
} |
\documentclass{ecnreport}
\setlength{\parindent}{0cm}
\stud{Option Robotique, Control \& Robotics master}
\topic{Advanced Robot Programming}
\def\maze{\texttt{ecn::Maze}~}
\begin{document}
\inserttitle{Advanced Robot Programming Labs \newline C++ Programming}
\insertsubtitle{Maze generation and solving}
\newcommand{\involves}[1]{
\item {\bf C++ skills:} #1
}
\newcommand{\aitip}[1]{
\item {\bf AI tips:} #1
}
\section{Content of this lab}
In this lab you will use and modify existing code in order to generate and solve mazes. As shown in \Fig{maze2}, the goal is to generate
a maze of given dimensions (left picture) and to use a path planning algorithm to find the shortest path from the upper left to the lower right corners (right picture).
\begin{figure}[h]\centering
\includegraphics[width=.4\linewidth]{maze} \quad \quad\includegraphics[width=.4\linewidth]{maze_cell}
\caption{$51\times 75$ maze before (left) and after (right) path planning.}
\label{maze2}
\end{figure}
The lab was inspired by \link{https://www.youtube.com/watch?v=rop0W4QDOUI}{this Computerphile video}.\\
We use the classical mix of Git repository and CMake to download and compile the project:
\begin{center}
\texttt{mkdir build} $\rightarrow$ \texttt{cd build} $\rightarrow$ \texttt{cmake ..} $\rightarrow$ \texttt{make}
\end{center}
The program can then be launched.
\section{Required work}
Four programs have to be created:
\begin{enumerate}
\item Maze generation
\item Maze solving through A* with motions limited to 1 cell (largely already written)
\item Maze solving through A* with motions using straight lines
\item Maze solving through A* with motions using corridors
\end{enumerate}
As in many practical applications, you will start from some given tools (classes and an algorithm) and use them inside your own code.\\
As mentioned during the lectures, understanding and re-using existing code is as important as being able to write something from scratch.
\section{Maze generation}
The \link{https://en.wikipedia.org/wiki/Maze_generation_algorithm}{Wikipedia page} on maze generation is quite complete
and also proposes C-code that generates a perfect maze of a given (odd) dimension. A perfect maze is a maze where there is one and only
one path between any two cells.
Create a \texttt{generator.cpp} file by copy/pasting the Wikipedia code and modify it so that:
\begin{itemize}
\item It compiles as C++.
\item The final maze is not displayed on the console but instead it is saved to an image file.
\item The executable takes a third argument, that is the percentage of walls that are randomly erased in order to build a non-perfect maze.
\end{itemize}
A good size is typically a few hundred pixels height / width. To debug the code, 51 x 101 gives a very readable maze.
The \maze class (\Sec{mazeClass}) should be used to save the generated maze through its \texttt{Maze::dig} method that removes a wall at a given (x,y) position.
It can also save a maze into an image file.
\section{Maze solving}
The given algorithm is described on \link{https://en.wikipedia.org/wiki/A*\_search\_algorithm\#Pseudocode}{Wikipedia}. It is basically a graph-search algorithm that
finds the shortest path and uses a heuristic function in order to get some clues about the direction to be favored.
In terms of implementation, the algorithm can deal with any \texttt{Node} class that has the following methods:
\begin{itemize}
\item \texttt{vector<unique\_ptr<Node>> Node::children()}: returns a \texttt{vector} of smart pointers\footnote{The use of smart pointers (here \texttt{unique\_ptr}) is detailed in \Sec{smart}} to the children (or neighbors) of the considered
element
\item \texttt{int distToParent()}: returns the distance to the node that generated this one
\item \texttt{bool is(const Node \&other)}: returns true if the passed argument is actually the same point
\item \texttt{double h(const Node \&goal)}: returns the heuristic distance to the passed argument
\item \texttt{void show(bool closed, const Node \& parent)}: used for online display of the behavior
\item \texttt{void print(const Node \& parent)}: used for final display
\end{itemize}
While these functions highly depend on the application, in our case we consider a 2D maze, so some of these functions are already implemented, as seen in \Sec{ptClass}:
\begin{itemize}
\item For the first exercise, only the \texttt{children} method needs to be written.
\item The second one adds the \texttt{distToParent} method.
\item The last one adds the \texttt{show} and \texttt{print} methods.
\end{itemize}
\subsection{A* with cell-based motions}
The first A* will use cell-based motions, where the algorithm can only jump 1 cell from the current one.
The file to modify is \texttt{solve\_cell.cpp}.
At the top of the file is the definition of a \texttt{Position} class
that inherits from \texttt{ecn::Point} in order not to reinvent the wheel (a point has two coordinates, it can compute the distance to
another point, etc.).
The only method to modify is \texttt{Position::children}, which should generate the neighbors of the current point.
The parent node is likely to be among those neighbors, but it will be removed by the algorithm.
\subsection{A* with line-based motions}
Copy/paste the \texttt{solve\_cell.cpp} file to \texttt{solve\_line.cpp}.\\
Here the children should be generated so that a straight corridor is directly followed (i.e.\ children can only be corners, intersections or dead-ends).
A utility function \texttt{bool is\_corridor(int, int)} may be of good use.\\
The distance to the parent may not always be 1 anymore. As we know the distance when we look for the neighbor nodes, a good approach is to store it
at generation time by using a new constructor with signature \texttt{Position(int \_x, int \_y, int distance)}.\\
The existing \texttt{ecn::Point} class is already able to display lines between two non-adjacent points (as long as they are on the same horizontal or
vertical line). The display should thus work directly.
\subsection{A* with corridor-based motions}
Copy/paste the \texttt{solve\_line.cpp} file to \texttt{solve\_corridor.cpp}.\\
Here the children should be generated so that any corridor is directly followed (i.e.\ children can only be intersections or dead-ends, but
not simple corners).
A utility function \texttt{bool is\_corridor(int, int)} may be of good use.\\
The distance to the parent may not always be 1 anymore. As we know the distance when we look for the neighbor nodes, a good approach is to store it
at generation time by using a new constructor with signature \texttt{Position(int \_x, int \_y, int distance)}. \\
The existing \texttt{Point::show} and \texttt{Point::print} methods are not suited anymore for this problem. Indeed, the path from a point to its parent
may not be a straight line. Actually, it will be necessary to search for the parent again, using the same approach as for generating the children.\\
For this problem, remember that by construction the nodes can only be intersections or dead-ends. Still, the starting and goal positions may
be in the middle of a corridor. It is thus necessary to check if a candidate position is the goal even if it is not the end of a corridor.
\section{Comparison}
Compare the approaches using various maze sizes and wall percentage.
If it is not the case, compile in \texttt{Release} instead of \texttt{Debug} and enjoy the speed improvement.\\
The expected behavior is that mazes with lots of walls (almost perfect mazes) should be solved much faster with the corridor-, then line-, then cell-based approaches.\\
With fewer and fewer walls, the line- and cell-based approaches should become faster as there are fewer and fewer corridors to follow.
\appendix
\section{Provided tools}
\subsection{The \maze class}\label{mazeClass}
This class interfaces with an image and allows easy reading / writing to the maze.
\paragraph{Methods for maze creation}
\begin{itemize}
\item \texttt{Maze(std::string filename)}: loads the maze from the image file
\item \texttt{Maze(int height, int width)}: builds a new maze of given dimensions, with only walls
\item \texttt{dig(int x, int y)}: writes a free cell at the given position
\item \texttt{save()}: saves the maze to \texttt{maze.png} and displays it
\end{itemize}
\paragraph{Methods for maze access}
\begin{itemize}
\item \texttt{int height()}, \texttt{int width()}: maze dimensions
\item \texttt{int isFree(int x, int y)}: returns true for a free cell, or false for a wall or invalid (out-of-grid) coordinates
\end{itemize}
\paragraph{Methods for maze display}
\begin{itemize}
\item \texttt{write(int x, int y, int r, int g, int b, bool show = true)}\\
displays the (x,y) cell with the (r,g,b) color and actually shows it if asked
\item \texttt{passThrough(int x, int y)}: writes the final path; the color automatically goes from blue to red.
Ordering is thus important when calling this function
\item \texttt{saveSolution(std::string suffix)}: saves the final image
\end{itemize}
\subsection{The \texttt{ecn::Point} class}\label{ptClass}
This class implements basic properties of a 2D point:
\begin{itemize}
\item \texttt{bool is(Point)}: returns true if both points have the same x and y coordinates
\item \texttt{double h(const Point \&goal)}: heuristic distance, may use Manhattan
($|x-\texttt{goal}.x| + |y-\texttt{goal}.y|$) or classical Euclidean distance.
\item \texttt{void show(bool closed, const Point \& parent)}: draws a straight line between the point and its parent, that
is blue if the point is in the closed set or red if it is in the open set.
\item \texttt{void print(const Point\& parent)}: writes the final path into the maze for display, also considers a straight line
\end{itemize}
All the classes you build should inherit from this class. The considered maze is available through the static member variable \texttt{ecn::maze} and can thus
be accessed from your member functions.
\section{Pointers and smart pointers}\label{smart}
During an A* algorithm, a lot of nodes are created, deleted and moved between the open and the closed set.
This makes it necessary to create the nodes on the heap and manipulate only pointers.
In our case, new nodes are created when calling the \texttt{children()} method, while they are deleted by the algorithm
itself if needed. In order to avoid memory leaks, we will use smart pointers.\\
Assuming that a new position can be created from its coordinates, a new position may be created with:
\begin{itemize}
\item \texttt{Position p(x, y);} for an actual position
\item \texttt{Position* p = new Position(x, y);} for a raw pointer
\item \texttt{std::unique\_ptr<Position> p(new Position(x, y));} for a unique pointer
\end{itemize}
As the \texttt{children()} method is supposed to return a \texttt{std::vector} of pointers, the corresponding syntax may be as follows:
\begin{center}\cppstyle
\begin{lstlisting}
typedef std::unique_ptr<Position> PositionPtr;
std::vector<PositionPtr> children()
{
std::vector<PositionPtr> generated;
while(..) // whatever method we use to look for children
{
// assuming that we found a child at (x,y)
generated.push_back(std::make_unique<Position>(x,y));
}
return generated;
}
\end{lstlisting}
\end{center}
Here the generated children are directly put inside the vector, which is returned at the end. The smart pointer keeps track
of the allocated memory and the A* algorithm can decide to keep it or not, knowing that the memory will be managed accordingly.
\end{document}
| {
"alphanum_fraction": 0.7615431087,
"avg_line_length": 49.9439655172,
"ext": "tex",
"hexsha": "00188896f424e7be6ede2eb1e728f2adc1947da2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f3b751621665ba0da4cf13123c6fe42b504189a2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "WeLoveKiraboshi/ARPRO",
"max_forks_repo_path": "maze/subject/ARPRO_maze.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f3b751621665ba0da4cf13123c6fe42b504189a2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "WeLoveKiraboshi/ARPRO",
"max_issues_repo_path": "maze/subject/ARPRO_maze.tex",
"max_line_length": 244,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f3b751621665ba0da4cf13123c6fe42b504189a2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "WeLoveKiraboshi/ARPRO",
"max_stars_repo_path": "maze/subject/ARPRO_maze.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2912,
"size": 11587
} |
\foldertitle{model}{Model Objects and Functions}{model/Contents}
Model objects are created by loading a \href{modellang/Contents}{model
file}. Once a model object exists, you can use model functions and
standard Matlab functions to write your own m-files to perform the
desired tasks, such as calibrating or estimating the model, finding its steady
state, solving and simulating it, producing forecasts, analysing its properties,
and so on.
Model methods:
\paragraph{Constructor}\label{constructor}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/model}{\texttt{model}} - Create new model object based on
model file.
\end{itemize}
\paragraph{Getting information about
models}\label{getting-information-about-models}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/addparam}{\texttt{addparam}} - Add model parameters to a
database (struct).
\item
\href{model/autocaption}{\texttt{autocaption}} - Create captions for
graphs of model variables or parameters.
\item
\href{model/autoexogenise}{\texttt{autoexogenise}} - Get or set
variable/shock pairs for use in autoexogenised simulation plans.
\item
\href{model/comment}{\texttt{comment}} - Get or set user comments in
an IRIS object.
\item
\href{model/eig}{\texttt{eig}} - Eigenvalues of the transition matrix.
\item
\href{model/findeqtn}{\texttt{findeqtn}} - Find equations by the
labels.
\item
\href{model/findname}{\texttt{findname}} - Find names of variables,
shocks, or parameters by their descriptors.
\item
\href{model/get}{\texttt{get}} - Query model object properties.
\item
\href{model/iscompatible}{\texttt{iscompatible}} - True if two models
can occur together on the LHS and RHS in an assignment.
\item
\href{model/islinear}{\texttt{islinear}} - True for models declared as
linear.
\item
\href{model/islog}{\texttt{islog}} - True for log-linearised
variables.
\item
\href{model/ismissing}{\texttt{ismissing}} - True if some initial
conditions are missing from input database.
\item
\href{model/isnan}{\texttt{isnan}} - Check for NaNs in model object.
\item
\href{model/isname}{\texttt{isname}} - True for valid names of
variables, parameters, or shocks in model object.
\item
\href{model/issolved}{\texttt{issolved}} - True if a model solution
exists.
\item
\href{model/isstationary}{\texttt{isstationary}} - True if model or
specified combination of variables is stationary.
\item
\href{model/length}{\texttt{length}} - Number of alternative
parameterisations.
\item
\href{model/omega}{\texttt{omega}} - Get or set the covariance matrix
of shocks.
\item
\href{model/sspace}{\texttt{sspace}} - State-space matrices describing
the model solution.
\item
\href{model/system}{\texttt{system}} - System matrices for unsolved
model.
\item
\href{model/userdata}{\texttt{userdata}} - Get or set user data in an
IRIS object.
\end{itemize}
\paragraph{Referencing model objects}\label{referencing-model-objects}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/subsasgn}{\texttt{subsasgn}} - Subscripted assignment for
model and systemfit objects.
\item
\href{model/subsref}{\texttt{subsref}} - Subscripted reference for
model and systemfit objects.
\end{itemize}
\paragraph{Changing model objects}\label{changing-model-objects}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/alter}{\texttt{alter}} - Expand or reduce number of
alternative parameterisations.
\item
\href{model/assign}{\texttt{assign}} - Assign parameters, steady
states, std deviations or cross-correlations.
\item
\href{model/export}{\texttt{export}} - Save carry-around files on the
disk.
\item
\href{model/horzcat}{\texttt{horzcat}} - Combine two compatible model
objects in one object with multiple parameterisations.
\item
\href{model/refresh}{\texttt{refresh}} - Refresh dynamic links.
\item
\href{model/reset}{\texttt{reset}} - Reset specific values within
model object.
\item
\href{model/stdscale}{\texttt{stdscale}} - Re-scale all std deviations
by the same factor.
\item
\href{model/set}{\texttt{set}} - Change modifiable model object
property.
\item
\href{model/single}{\texttt{single}} - Convert solution matrices to
single precision.
\end{itemize}
\paragraph{Steady state}\label{steady-state}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/blazer}{\texttt{blazer}} - Reorder steady-state equations
into block-recursive structure.
\item
\href{model/chksstate}{\texttt{chksstate}} - Check if equations hold
for currently assigned steady-state values.
\item
\href{model/sstate}{\texttt{sstate}} - Compute steady state or
balance-growth path of the model.
\item
\href{model/sstatefile}{\texttt{sstatefile}} - Create a steady-state
file based on the model object's steady-state equations.
\end{itemize}
\paragraph{Solution, simulation and
forecasting}\label{solution-simulation-and-forecasting}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/chkmissing}{\texttt{chkmissing}} - Check for missing
initial values in simulation database.
\item
\href{model/diffsrf}{\texttt{diffsrf}} - Differentiate shock response
functions w.r.t. specified parameters.
\item
\href{model/expand}{\texttt{expand}} - Compute forward expansion of
model solution for anticipated shocks.
\item
\href{model/jforecast}{\texttt{jforecast}} - Forecast with judgmental
adjustments (conditional forecasts).
\item
\href{model/icrf}{\texttt{icrf}} - Initial-condition response
functions.
\item
\href{model/lhsmrhs}{\texttt{lhsmrhs}} - Evaluate the discrepancy
between the LHS and RHS for each model equation and given data.
\item
\href{model/resample}{\texttt{resample}} - Resample from the model
implied distribution.
\item
\href{model/reporting}{\texttt{reporting}} - Run reporting equations.
\item
\href{model/shockplot}{\texttt{shockplot}} - Short-cut for running and
plotting plain shock simulation.
\item
\href{model/simulate}{\texttt{simulate}} - Simulate model.
\item
\href{model/solve}{\texttt{solve}} - Calculate first-order accurate
solution of the model.
\item
\href{model/srf}{\texttt{srf}} - Shock response functions.
\end{itemize}
\paragraph{Model data}\label{model-data}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/data4lhsmrhs}{\texttt{data4lhsmrhs}} - Prepare data array
for running \texttt{lhsmrhs}.
\item
\href{model/emptydb}{\texttt{emptydb}} - Create model-specific
database with empty tseries for all variables and shocks.
\item
\href{model/rollback}{\texttt{rollback}} - Prepare database for a
rollback run of Kalman filter.
\item
\href{model/sstatedb}{\texttt{sstatedb}} - Create model-specific
steady-state or balanced-growth-path database.
\item
\href{model/zerodb}{\texttt{zerodb}} - Create model-specific
zero-deviation database.
\end{itemize}
\paragraph{Stochastic properties}\label{stochastic-properties}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/acf}{\texttt{acf}} - Autocovariance and autocorrelation
functions for model variables.
\item
\href{model/ifrf}{\texttt{ifrf}} - Frequency response function to
shocks.
\item
\href{model/fevd}{\texttt{fevd}} - Forecast error variance
decomposition for model variables.
\item
\href{model/ffrf}{\texttt{ffrf}} - Filter frequency response function
of transition variables to measurement variables.
\item
\href{model/fmse}{\texttt{fmse}} - Forecast mean square error
matrices.
\item
\href{model/vma}{\texttt{vma}} - Vector moving average representation
of the model.
\item
\href{model/xsf}{\texttt{xsf}} - Power spectrum and spectral density
of model variables.
\end{itemize}
\paragraph{Identification, estimation and
filtering}\label{identification-estimation-and-filtering}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\href{model/bn}{\texttt{bn}} - Beveridge-Nelson trends.
\item
\href{model/diffloglik}{\texttt{diffloglik}} - Approximate gradient
and hessian of log-likelihood function.
\item
\href{model/estimate}{\texttt{estimate}} - Estimate model parameters
by optimising selected objective function.
\item
\href{model/evalsystempriors}{\texttt{evalsystempriors}} - Evaluate
minus log of system prior density.
\item
\href{model/filter}{\texttt{filter}} - Kalman smoother and estimator
of out-of-likelihood parameters.
\item
\href{model/fisher}{\texttt{fisher}} - Approximate Fisher information
matrix in frequency domain.
\item
\href{model/lognormal}{\texttt{lognormal}} - Characteristics of
log-normal distributions returned by filter or forecast.
\item
\href{model/loglik}{\texttt{loglik}} - Evaluate minus the
log-likelihood function in time or frequency domain.
\item
\href{model/neighbourhood}{\texttt{neighbourhood}} - Evaluate the
local behaviour of the objective function around the estimated
parameter values.
\item
\href{model/regress}{\texttt{regress}} - Centred population regression
for selected model variables.
\item
\href{model/VAR}{\texttt{VAR}} - Population VAR for selected model
variables.
\end{itemize}
\paragraph{Getting on-line help on model
functions}\label{getting-on-line-help-on-model-functions}
\begin{verbatim}
help model
help model/function_name
\end{verbatim}
| {
"alphanum_fraction": 0.7558987559,
"avg_line_length": 32.2629757785,
"ext": "tex",
"hexsha": "cfa65486c5c2dca53e9e123d96b43030b1f21a81",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_forks_repo_path": "-help/model/Contents.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_issues_repo_path": "-help/model/Contents.tex",
"max_line_length": 72,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_stars_repo_path": "-help/model/Contents.tex",
"max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z",
"num_tokens": 2735,
"size": 9324
} |
\documentclass{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{textpos}
\usepackage{tikz}
\usepackage{listings}
\usepackage{menukeys}
\usepackage[tikz]{mdframed}
\usepackage{courier}
\usepackage{amsmath}
\usetikzlibrary{shapes,arrows}
\usetheme{Madrid}
\usecolortheme{beaver}
% Custom changes:
\setbeamertemplate{footline}[frame number]{}
\definecolor{university_tuebingen}{RGB}{165,30,55}
\definecolor{matlab_input}{RGB}{200,255,200}
\definecolor{matlab_output}{RGB}{200,200,255}
\setbeamercolor{frametitle}{fg=university_tuebingen, bg=white}
\setbeamercolor{title}{fg=university_tuebingen}
\addtobeamertemplate{frametitle}{}{
\begin{tikzpicture}[remember picture, overlay]
\node[anchor=north east,yshift=-0.1cm] at (current page.north east)
{\includegraphics[width=3cm]{../common/logo_uni_tuebingen2.png}};
\end{tikzpicture}}
% Define flow diagramm options
\tikzstyle{decision} = [diamond, draw, fill=green!20, text width=3.5em, text badly centered, node distance=2.5cm, inner sep=0pt, font=\small]
\tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=3.5em, text centered, rounded corners, minimum height=4em, font=\small]
\tikzstyle{line} = [draw, -latex']
% Settings for Matlab syntax highlighting
\lstset{numbers=left, %
numberstyle=\tiny, %
numberblanklines=false, %
basicstyle=\small\ttfamily, %
keywordstyle=\bfseries\color{green!40!black}, %
breaklines=true, %
commentstyle=\itshape\color{purple!40!black}, %
identifierstyle=\color{red}, %
language=Matlab, %
backgroundcolor=\color{matlab_input}
}
% Define highlighting for pseudocode
\lstdefinelanguage{pseudo}{morekeywords={IF, THEN, ELSE, WHILE, FOR, END}, sensitive=true}
% Option for mdframed:
\newmdenv[backgroundcolor=yellow!30, roundcorner=5pt]{exercise}
% New command for title
\newcommand{\matlabTitle}[1]{\title{Einführung in Matlab\\{\scriptsize #1}}}
% New command for auto increment exercise:
\newcounter{mexercise}
\newcounter{mchapter}
\newcommand{\secMexercise}{\stepcounter{mexercise} \subsection{Übung \themchapter -\themexercise}}
\newcommand{\frameMexercise}{\frametitle{Übung \themchapter -\themexercise}}
% New command for in text Matlab listing:
\newcommand{\matlabInput}[1]{\colorbox{matlab_input}{\texttt{#1}}}
\newcommand{\matlabOutput}[1]{\colorbox{matlab_output}{\texttt{#1}}}
% New command for inserting only matlab code:
\newcommand{\insertMatlabOnlyCode}[2][firstline=1]{%
\lstinputlisting[#1]{#2.m}%
}
% New command for inserting matlab code and output:
\newcommand{\insertMatlabCodeOutput}[1]{%
\begin{columns}[t]%
\begin{column}{5cm}%
\lstinputlisting{#1.m}%
\end{column}%
%
\begin{column}{5cm}%
\lstinputlisting[backgroundcolor=\color{matlab_output}]{#1.txt}%
\end{column}%
\end{columns}%
}
% New command for inserting matlab code and plot:
\newcommand{\insertMatlabCodePlot}[1]{%
\begin{columns}[t]%
\begin{column}{5cm}%
\lstinputlisting{#1.m}%
\end{column}%
%
\begin{column}{5cm}%
\includegraphics[width=5.0cm]{#1.png}
\end{column}%
\end{columns}%
}
% Configure hyperlinks
\hypersetup{colorlinks=true,urlcolor=blue}
\newcommand{\urlLink}[2]{\href{#1}{#2}}
\newcommand{\matlabLink}[1]{\urlLink{http://de.mathworks.com/help/matlab/ref/#1.html?searchHighlight=#1}{#1}}
\author{Prof. Dr. Christiane Zarfl, Dipl.-Inf. Willi Kappler, Prof. Dr. Olaf Cirpka}
\date{}
\institute{}
| {
"alphanum_fraction": 0.7172177879,
"avg_line_length": 31.8909090909,
"ext": "tex",
"hexsha": "5c002627dbdf8dfdb9658cd34611012869375f9b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "15aebff850a4af2d0c0e3b29acec55af37b28637",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "willi-kappler/Matlab_Kurs",
"max_forks_repo_path": "common/header.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "15aebff850a4af2d0c0e3b29acec55af37b28637",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "willi-kappler/Matlab_Kurs",
"max_issues_repo_path": "common/header.tex",
"max_line_length": 141,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "15aebff850a4af2d0c0e3b29acec55af37b28637",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "willi-kappler/Matlab_Kurs",
"max_stars_repo_path": "common/header.tex",
"max_stars_repo_stars_event_max_datetime": "2016-04-05T12:54:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-04-05T12:54:40.000Z",
"num_tokens": 1098,
"size": 3508
} |
\documentclass{article}
\usepackage{graphicx}
\usepackage{float}
\usepackage{hyperref}
\begin{document}
\title{Exoplanet Search: Vetting Kepler Light Curves}
\author{Praveen Gowtham}
\date{}
\maketitle
\section{Introduction/Background}
\paragraph{}
In 1992 astronomers discovered a periodic set of dips modulating the emission from a pulsating neutron star. The source of these dips was identified as two planetary bodies orbiting the neutron star. The possibility of discovering and examining planets that do not orbit our sun was a compelling one. Some relevant reasons include the desire to find Earth-similar planets that could host extra-terrestrial life, or to understand the distribution of the types of planets and planetary systems that exist and where our own solar system fits in this schema.
A swarm of activity in exoplanet detection technique development and the creation of theoretical frameworks for extracting information about these planets soon emerged. This project focuses on one of these techniques: light curve transit detection. In this technique, the light flux of a target star is observed for an extended period of time (months to years) by a telescope. An orbiting planet can transit across the line of sight between the star and the telescope which results in a small dip in the measured light intensity of the star. \textit{Note: these dips are usually quite small compared to the overall stellar light intensity.} An illustration of a planetary transit and its effects on a stellar light curve can be seen below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/transit_illustration.jpg}
\end{center}
\caption{Exoplanet transit.}
\end{figure}
\paragraph{}
These dips will occur at regular intervals corresponding to the orbital period of the planet about the host star. Here is an example showing a real light curve with periodic dips in the light curve:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/exo_multiple_transit.jpg}
\end{center}
\caption{Multiple transits from exoplanet HAT-P-7 b.}
\end{figure}
\paragraph{} The shape, the depth, and the period of these transits, along with stellar parameters, yield a wealth of information about the exoplanet: its radius, mass, surface temperature, orbital parameters around the star, etc.
So far, over 3000 exoplanets and their properties have been discovered using the light curve transit detection method.
\paragraph{} So what's the problem? The example from HAT-P-7 b has an exceptionally clear planetary transit signal. More often than not, the transits are significantly shallower. In such cases a decision must be made on whether a dip in the light curve is actually a planetary transit or stellar noise. Stellar noise can be quite significant. The surfaces of stars are full of eruptions, temporary local cool spots (sunspots), and plasma turbulence, not to mention the inevitable fluctuations coming from the fact that a star is hot.
In addition, there are non-planetary phenomena that can produce statistically significant periodic dips. Among these are orbiting binary star systems or variable stars that pulsate or produce light intensity chirps. There are also imaging artifacts that come from the light of another star periodically leaking into the measured intensity of a target star. Light curves from binary star systems, in particular, can closely mimic that of an exoplanetary transit about a host star. The two stars orbit about each other and both partially eclipse the other at different phases of the orbital cycle. Since both stars produce a significant amount of light we will expect two eclipses. The primary eclipse (deepest dip in the light curve) occurs when the less luminous star comes in front of the more luminous star. The secondary eclipse occurs when the reverse is true. The relative depths of the two eclipses will depend on the magnitude/size of the two stars in relation to each other. An example can be seen below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/algol-curve.png}
\end{center}
\caption{Light curve from the Algol A/B binary star system.}
\end{figure}
\paragraph{} These look a lot like the curves for exoplanets. But as can be seen, there are differences -- such as the existence of a secondary eclipse. Planets only reflect light from their host star. Thus, with the exception of some very large planets with small orbital radii (known as hot Jupiters) real exoplanet secondary eclipses are undetectable. Even in the case of hot Jupiters, secondary eclipses typically have much smaller amplitude dips in the light curves than the secondary eclipses associated with a binary star companion. There are other more subtle difference in the light curves of binary star systems vs exoplanets which will be discussed later. But it is clear that making a decision on whether a star has an exoplanet via transit detection will first involve the determination of statistically significant periodic transit-like events \textit{and} the vetting of potential false positives (binary stars or other phenomena) in the remaining light curves. \textbf{This project aims to construct relevant features from light curves with statistically significant events and make determinations as to whether the event and light curve correspond to an exoplanet, a binary star system, or some other phenomena.}
\section{Data Wrangling}
\subsection{Data Sources}
\paragraph{}All the data in this study come from the Kepler Space Telescope. The telescope was launched into Earth orbit in 2009 and decommissioned in 2018. Kepler was tasked with long term observation of a section of our Milky Way galaxy and in the end managed to observe approximately 500,000 stars. Of these stars, only a small fraction had light curves that contained statistically significant events that could potentially be transits.
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/kepler_spacetelescope.jpg}
\end{center}
\caption{Kepler Space Telescope}
\end{figure}
\paragraph{} The Kepler mission developed a data pipeline in order to store the surveyed stellar light curves, flagging potential objects of interest with statistically significant events (known as Kepler Objects of Interest or KOIs), and generating transit parameters like the period, depth, and duration of the transits. These KOIs are the target stars that we are interested in developing a model for, with the hope that the model can vet between true exoplanets and false positives. The light curves and the aggregated data for the Kepler mission KOIs are stored in the Mikulski Archive for Space Telescopes (MAST) and the NASA Exoplanet Archive. \\
Light curve request API: \\ \href{https://exo.mast.stsci.edu/docs/getting_started.html}{https://exo.mast.stsci.edu/docs}
Aggregated/cumulative data table: \\ \href{https://exoplanetarchive.ipac.caltech.edu/docs/program_interfaces.html}{https://exoplanetarchive.ipac.caltech.edu/docs/program\textunderscore interfaces.html}
\paragraph{}
The cumulative data table contains transit parameters/metadata about the KOIs. The KOIs are indexed by the Kepler input catalog star ID (KIC ID) and transit event number. The transit event number (TCE num) is relevant when there is more than one planet orbiting the target star with a particular KIC ID. Each number would correspond to a different planet in the system. The cumulative table contains info like the period, maximum depth, and duration of the transit event. There are also flags for the status of a given KOI (confirmed planet, secondary eclipse false positive, non-transiting phenomenon, etc.). \textbf{We used the flags in the cumulative table as training labels. Our aim is to separate the light curves into three classes: exoplanets, secondary eclipse FPs, and non-transit phenomena (NTP) FPs.}
\begin{table}[H]
\begin{tabular}{|c|c|c|c|}
\hline
Class name & Confirmed Planet & Secondary Eclipse FP & NTP FP \\
\hline
Target Label & 1 & 2 & 3 \\
\hline
\end{tabular}
\caption{Class name and label encoding scheme.}
\end{table}
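To give a feel for how these labels can be assembled in practice, the sketch below pulls the cumulative KOI table and encodes the three classes. The endpoint and column names (\texttt{koi\_disposition}, \texttt{koi\_fpflag\_ss}, \texttt{koi\_fpflag\_nt}, etc.) are assumptions that should be checked against the API documentation linked above, and the priority given to the flags is only illustrative.
\begin{verbatim}
import pandas as pd

# Endpoint and column names are assumptions; verify them against the
# NASA Exoplanet Archive API docs before relying on them.
URL = ("https://exoplanetarchive.ipac.caltech.edu/cgi-bin/nstedAPI/nph-nstedAPI"
       "?table=cumulative&select=kepid,koi_tce_plnt_num,koi_period,koi_duration,"
       "koi_depth,koi_disposition,koi_fpflag_ss,koi_fpflag_nt&format=csv")
koi = pd.read_csv(URL)

def target_label(row):
    # Project encoding: 1 = confirmed planet, 2 = secondary eclipse FP,
    # 3 = non-transiting phenomenon FP; other KOIs are left unlabelled.
    if row["koi_disposition"] == "CONFIRMED":
        return 1
    if row["koi_fpflag_ss"] == 1:
        return 2
    if row["koi_fpflag_nt"] == 1:
        return 3
    return None

koi["target"] = koi.apply(target_label, axis=1)
\end{verbatim}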
\paragraph{}
The KIC ID and TCE num of a threshold crossing event (TCE) in the cumulative table can be used to specify a light curve request for the given transit from the MAST API. The indices are requisite parameters passed to the constructor for a custom class KOIObject within the module KOIclass. The class performs a whole host of functions and sweeps many of the details of data downloading, light curve signal processing, feature construction, and light curve plotting under the rug. Raw light curve data was downloaded to disk as this allows construction of the light-curve extracted feature matrix and subsequent transformations/changes to this matrix to occur with much higher throughput.
\section{Light Curve Processing}
The detrended light curves needed to go through a few preprocessing steps before feature extraction could commence. In short, the light curves were phase-folded (for a periodic signal, points at a certain phase in the oscillation are grouped together). These grouped points are averaged together to yield the phase-folded, bin-averaged light curve. This procedure helps in averaging out some of the Gaussian noise and keeps the cycle-averaged periodic signal. The primary transit crossing event is indexed to zero phase. The steps are shown below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/lc_detrend_secondaryFP_ex1.png}
\end{center}
\caption{Detrended light curve from Kepler data validation pipeline}
\end{figure}
The above light curve is clearly periodic. We phase-fold the light curve on this period, bin the data, and average.
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/phasefolded_secondaryFP_ex1.png}
\end{center}
\caption{Phase-folded and averaged light curve. This is a binary star system.}
\end{figure}
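In code, the folding and bin-averaging shown above amounts to only a few lines. A minimal sketch is given below (numpy only; the bin count and the exact conventions are illustrative rather than the ones used in the KOIObject class).
\begin{verbatim}
import numpy as np

def phase_fold_bin(time, flux, period, t0, n_bins=301):
    # Phase-fold on `period` with the primary transit at phase 0,
    # then average the flux inside equal-width phase bins.
    phase = ((time - t0) / period + 0.5) % 1.0 - 0.5   # phase in [-0.5, 0.5)
    edges = np.linspace(-0.5, 0.5, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    binned = np.array([np.nanmean(flux[idx == i]) if np.any(idx == i) else np.nan
                       for i in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, binned
\end{verbatim}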
\section{Feature Extraction}
We extract features from these phase-folded and averaged light curves. We want to construct features from these time-series that help our models to distinguish between the different classes. A look at representative curves for the three classes will be helpful. A representative phase-folded/averaged light curve for class 2 (secondary eclipse FP) has already been seen in Fig 6. Examples for NTP FPs and confirmed planets can be seen below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/phasefolded_ntp_ex2.png}
\end{center}
\caption{Phase-folded and averaged light curve in the NTP FP class. Example of a pulsating star.}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/phasefolded_CP_ex1.png}
\end{center}
\caption{Phase-folded and averaged light curve for a confirmed exoplanet.}
\end{figure}
\paragraph{}These light curves suggest the construction of a few features that could help distinguish between the various classes. In order to differentiate between class 2 (Fig 6) and class 1 (Fig 8), we need to conduct a statistical test on the existence of a secondary peak. This is done by subtracting the primary TCE and performing a statistical test for the existence of a secondary eclipse against the noise background. We call the resulting statistic the \textbf{weak secondary statistic}; it is referred to in shorthand as $p_{sec}$. It measures the likelihood that a secondary peak is not noise.
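One plausible way to implement such a statistic, shown purely as a sketch (the statistic actually computed in the pipeline may differ in its details), is to mask out the primary transit, take the deepest remaining dip in the phase-folded curve, and score its depth against the out-of-dip scatter:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def weak_secondary_stat(phase, binned_flux, primary_halfwidth=0.05):
    # Mask the primary transit (phase near 0), then score the deepest
    # remaining dip against the out-of-primary scatter.
    resid = binned_flux[np.abs(phase) > primary_halfwidth]
    depth = np.nanmedian(resid) - np.nanmin(resid)
    noise = np.nanstd(resid)
    z = depth / noise if noise > 0 else 0.0
    return norm.cdf(z)   # near 1 => the dip is unlikely to be pure noise
\end{verbatim}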
\paragraph{} In a subset of the secondary eclipse FPs, the secondary eclipse occurs pretty close to half period. In cases where this secondary eclipse's amplitude is large enough, the Kepler pipeline's statistical tester logs this event as the same as the primary event. The effect is that the Kepler pipeline outputs a period that is actually half the period of the binary system. An example can be seen below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/lc_detrend_secondaryFP_ex2.png}
\end{center}
\caption{Detrended light curve for secondary FP with large amplitude secondary eclipse.}
\end{figure}
A naive phase-folding/averaging according to the Kepler pipeline's extracted period would yield a single transit dip: thus superimposing the primary and secondary eclipse and averaging them. To account for this possibility, we group alternating cycles together into even or odd phase groups and average within the subgroups. We've staggered the even and odd phase averages by the Kepler pipeline period and plotted the result of this operation below:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/evenoddstagger_secondaryFP_ex2.png}
\end{center}
\caption{Period-staggered even and odd phase-folded/averaged series corresponding to light curve from Figure 9. The even and odd transit amplitudes clearly differ indicating that this is a secondary eclipse FP.}
\end{figure}
\paragraph{} This all suggests a statistical test: collect transit depths for even and odd cycles. Then do a 2-sample t-test to determine if the difference of the means of the transit depths between the alternating cycles is statistically significant. The t-statistic itself can be used here as a feature. Larger t-statistics imply that we can reject the null (that the even/odd cycle amplitudes are the same) with higher certainty. We call this the \textbf{even-odd statistic}; observations with a large even-odd statistic are likely secondary eclipse false positives.
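A sketch of this test, assuming the per-cycle transit depths have already been measured (for example as the minimum of each cycle's transit window), is:
\begin{verbatim}
from scipy.stats import ttest_ind

def even_odd_stat(even_depths, odd_depths):
    # Two-sample t-test on per-cycle transit depths; a large |t| means the
    # alternating cycles have different depths, as expected for an eclipsing
    # binary folded at half its true period.
    t_stat, p_value = ttest_ind(even_depths, odd_depths, equal_var=False)
    return abs(t_stat)
\end{verbatim}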
\paragraph{} Some generic statistical features are of use here as well. The depth of the primary transits (or the minimum value of the phase-binned light curve) is also used as a feature. Secondary eclipse FPs from binary stars tend to have much larger transit depths than those from exoplanets -- as can be seen by looking at Fig 6 and Fig 8. Non-transiting FPs have a lot of wiggles and can often have max values in the series that are large and positive compared to the baseline. Thus we also take the max as a feature.
\paragraph{}
We have spent a lot of time on statistical tests for secondary eclipse phenomena or generic statistical features of the light curve. But there is quite a lot of information in the shape of the TCEs themselves that can help us distinguish between the three classes. For each light curve, we zoom in on the primary transit event and extract the light curve localized at $\pm 2$ durations on either side of the zero phase of the transit. In order to make faithful comparisons of the shape of the transits across different observation and classes, we normalize the time axis (x-axis) by the transit duration and rescale the relative flux (y-axis) to lie between [0,-1] with -1 being the transit depth. The xy-rescaled transit close-ups are then resampled into fixed length series of 141 points. These 141 bins of the xy-normalized transit close-ups are incorporated into the feature matrix. Below, we can see the results of this process across the secondary eclipse FPs and confirmed planets:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/rescalexy_secondaryFP.png}
\end{center}
\caption{XY-rescaled transit close-ups for secondary eclipse FPs (class 2). }
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=5cm]{figures/rescalexy_CP.png}
\end{center}
\caption{XY-rescaled transit close-ups for confirmed exoplanets(class 1).}
\end{figure}
The secondary eclipse FP transit close-ups all collapse to a V-like shape when rescaled. On the other hand, the confirmed planet class transit close-ups collapse to something approximating a U-shape. We haven't plotted the curves for class 3 (non-transiting phenomena) as there is much higher variability in the series across observations (making for an unenlightening plot). The important point is that the shape of the transit encoded in the 141 bins clearly provides a means of distinguishing between the classes.
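A sketch of the rescaling and resampling step described above is given below (the exact normalization used in the project may differ slightly; here the baseline is taken as the median flux and \texttt{np.interp} handles the resampling onto a fixed grid).
\begin{verbatim}
import numpy as np

def rescale_transit(time_hours, flux, duration_hours, depth, n_points=141):
    # time_hours is measured from mid-transit. The x axis is expressed in
    # units of the transit duration, the y axis is rescaled so the baseline
    # sits near 0 and the transit bottom near -1, and the result is
    # resampled onto a fixed grid of n_points covering [-2, 2] durations.
    x = time_hours / duration_hours
    y = (flux - np.nanmedian(flux)) / abs(depth)
    grid = np.linspace(-2.0, 2.0, n_points)
    keep = np.isfinite(x) & np.isfinite(y)
    order = np.argsort(x[keep])
    return np.interp(grid, x[keep][order], y[keep][order])
\end{verbatim}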
\paragraph{} Finally, the transit duration (measured in hours) and the TCE period from the Kepler cumulative table are included in the feature matrix. The following table summarizes the features extracted from the light curves:
\begin{table}[H]
\begin{tabular}{|c|c|}
\hline
Period & TCE period (days) from Kepler cumulative table. \\
\hline
Duration & TCE duration (hours) from Kepler cumulative table. \\
\hline
even\_odd\_stat & Even-odd statistic for secondary eclipse detection. \\
\hline
p\_secondary & Weak secondary statistic \\
\hline
min & Almost always corresponds to the depth of the primary transit. \\
\hline
max & Maximum value in phase-folded/bin-averaged light curve. \\
\hline
LCBIN\_0 - LCBIN\_140 & 141 points of the xy-normalized primary transit close-ups \\
\hline
\end{tabular}
\caption{Features extracted from Kepler data pipeline.}
\end{table}
\section{Statistical EDA on some features}
We spent some time constructing features. Are these features useful? We'll answer the question for some of these in this report. Analysis on all of the features can be found in the notebooks. For what proceeds in this report, all the features have been normalized already. We look at the weak secondary statistic:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=4cm]{figures/psec_hist.png}
\end{center}
\caption{Distribution of weak secondary statistic. As designed, higher values indicate a secondary eclipse FP. Large number of counts at 1.0 has to do with the way we constructed the feature. }
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=4cm]{figures/psec_class_diff.png}
\end{center}
\caption{Bar plot showing that the weak secondary statistic preferentially selects out class 2 at larger values. }
\end{figure}
The even-odd statistic shows similar selection for class 2 at higher values. Both statistics are doing what they should be doing and will be good features for our models to train on.
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=4cm]{figures/evenodd_class_diff.png}
\end{center}
\caption{Bar plot showing that the even-odd statistic preferentially selects out class 2 at larger values. }
\end{figure}
One of the features that aids in selecting out NTPs over the other two classes is the period. We've plotted the cumulative distribution function of the three classes with respect to the TCE period:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=4cm]{figures/ecdf_period.png}
\end{center}
\caption{Empirical cumulative distribution of the TCE period conditioned on the three classes. }
\end{figure}
Class 3 has a long tail and dominates the long period sector of the TCEs. The period will thus be a good feature to select for the NTPs. The other features min, max, and duration are also useful in one way or another for selecting or differentiating between classes. The LCBIN features will be discussed in the next section.
\section{Last Pre-processing: Dimensionality Reduction on Transit Close-Up Features}
\paragraph{} The 141 transit close-up features are useful but form a feature sub-space of high dimensionality. This typically presents some problems for machine learning algorithms. But as we can see from Figures 11 and 12, the true dimensionality of the space that the close-up features live on is much lower. We employ a dimensionality reduction technique called Locality Preserving Projections (LPP). As the name suggests, this technique projects from high dimensions to a low dimensional space in a way that preserves neighborhoods. Details can be found in He and Niyogi (NIPS, 2003). XY-rescaled transit close-ups with U-shapes corresponding to exoplanets should be closer to each other than to those from eclipsing binary stars and NTPs in the low-D space generated by LPP. We perform LPP on the light curves with a k-nearest-neighbor graph ($k=5$) and a target dimension of 2. We plot the resulting distributions for the three classes in this 2D space:
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=4cm]{figures/lpp_hexbinplot.png}
\end{center}
\caption{Distribution for each class in the 2D space generated by LPP. Target label 1, 2, and 3 correspond to confirmed planet, secondary eclipse false positives, and non-transiting phenomena respectively.}
\end{figure}
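LPP is not part of scikit-learn, so a compact sketch of the idea used to produce the projection above is given below: build a k-nearest-neighbor affinity graph, form the graph Laplacian, and solve the generalized eigenproblem of He and Niyogi for its smallest eigenvalues. This is a bare-bones illustration only; a production version should use a maintained implementation and treat scaling and regularization more carefully.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp_fit_transform(X, n_neighbors=5, n_components=2):
    # Binary kNN affinity, symmetrised; D is the degree matrix and
    # L = D - W the graph Laplacian.
    W = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity")
    W = 0.5 * (W + W.T).toarray()
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem  X^T L X a = lambda X^T D X a ; the
    # eigenvectors for the smallest eigenvalues give the projection.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])   # small ridge for stability
    _, vecs = eigh(A, B)                          # ascending eigenvalues
    return X @ vecs[:, :n_components]
\end{verbatim}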
While there is certainly overlap between the classes, classes 1 and 2 have the bulk of their distributions centered in different areas of the 2D space. Class 3 has a bit more overlap with class 2. The reason for the overlap has to do with the fact that some class 3 light curves look like eclipsing binaries and that additional information not present in the light curve is required to differentiate them from class 2. However, class 3 does dominate extremal parts of the 2D space. Our conclusion then is that the projected 2D feature set (LPP\_1, LPP\_2) is a good one and will help the models in differentiating between classes. The final feature set that we fed into the models is:
\begin{table}[H]
\begin{tabular}{|c|c|}
\hline
Period & TCE period (days) from Kepler cumulative table. \\
\hline
Duration & TCE duration (hours) from Kepler cumulative table. \\
\hline
even\_odd\_stat & Even-odd statistic for secondary eclipse detection. \\
\hline
p\_secondary & Weak secondary statistic \\
\hline
min & Almost always corresponds to the depth of the primary transit. \\
\hline
max & Maximum value in phase-folded/bin-averaged light curve. \\
\hline
LPP\_1 & First dimension of LPP-reduced XY-rescaled transit close-ups \\
\hline
LPP\_2 & Second dimension of LPP-reduced XY-rescaled transit close-ups . \\
\hline
\end{tabular}
\caption{Final feature set used for model training}
\end{table}
\section{Modeling}
\paragraph{} We split the data into an 85-15 training/hold-out split. Within the training set, we then conducted 5-fold cross validation for each model trained with an 85-15 training/evaluation split. We tried three classifier types: a random forest classifier, an RBF-kernelized soft-margin SVM, and a gradient boosting classifier. The models were incorporated as the last step in a pipeline that did all the necessary fit/transformations (e.g., LPP dim reduction, normalization, etc.) before fitting the classifier. We initiated a CV hyperparameter search on the pipelines to tune model performance for each class. The pipeline architecture shines here as it prevents data leakage during cross validation and in the final model transformation/fitting stages. We chose to optimize our CV on the F1 score micro-averaged across classes. The best performing models (within the hyperparameters scanned) as deemed by the CV scores are listed below: \\
\textbf{Random Forest}:\\ RandomForestClassifier(max\_depth=18, max\_features=3, n\_estimators=700) \\
\textbf{RBF-Kernelized Soft-Margin SVM}: \\ SVC(C=100, gamma=25) \\
\textbf{XGBoost}:\\ XGBClassifier(learning\_rate=0.1, max\_depth=2, n\_estimators=1000) \\
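For concreteness, the sketch below shows the kind of pipeline and grid search used (scikit-learn and xgboost class names; the LPP and column-wise preprocessing steps are omitted, the feature matrix is replaced by synthetic stand-in data, and the grids are illustrative rather than the exact ones scanned).
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Synthetic stand-in for the real 8-feature matrix and 0/1/2-encoded labels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.15,
                                              stratify=y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", XGBClassifier(eval_metric="mlogloss"))])
grid = {"clf__n_estimators": [300, 700, 1000],
        "clf__max_depth": [2, 6, 18],
        "clf__learning_rate": [0.05, 0.1]}

search = GridSearchCV(pipe, grid, cv=5, scoring="f1_micro")
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_hold, y_hold))
\end{verbatim}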
\subsection{Random Forest Classifier: Results}
\begin{table}[H]
\begin{center}
\begin{tabular}{l|r|r|r|r}
{} & precision & recall & f1-score & support \\ \hline
1 & 0.925620 & 0.957265 & 0.941176 & 351.000000 \\ \hline
2 & 0.858859 & 0.890966 & 0.874618 & 321.000000 \\ \hline
3 & 0.787037 & 0.643939 & 0.708333 & 132.000000 \\ \hline
accuracy & 0.879353 & 0.879353 & 0.879353 & 0.879353 \\ \hline
macro avg & 0.857172 & 0.830723 & 0.841376 & 804.000000 \\ \hline
weighted avg & 0.876213 & 0.879353 & 0.876375 & 804.000000 \\ \hline
\end{tabular}
\end{center}
\caption{Classification report for Random Forest on hold-out set. Class 1 = Exoplanets, 2 = Secondary Eclipse FPs, 3 = NTP FPs}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/exo_randforest_cfmat.png}
\end{center}
\caption{Confusion matrix for the random forest on the hold-out set.}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/randforest_feature_imp.png}
\end{center}
\caption{Feature importances for Random Forest Model}
\end{figure}
\subsection{Kernelized SVM: Results}
\begin{table}[H]
\begin{center}
\begin{tabular}{l|r|r|r|r}
{} & precision & recall & f1-score & support \\ \hline
Class 1 & 0.889764 & 0.965812 & 0.926230 & 351.000000 \\ \hline
Class 2 & 0.854890 & 0.844237 & 0.849530 & 321.000000 \\ \hline
Class 3 & 0.764151 & 0.613636 & 0.680672 & 132.000000 \\ \hline
accuracy & 0.859453 & 0.859453 & 0.859453 & 0.859453 \\ \hline
macro avg & 0.836268 & 0.807895 & 0.818811 & 804.000000 \\ \hline
weighted avg & 0.855217 & 0.859453 & 0.855291 & 804.000000 \\ \hline
\end{tabular}
\end{center}
\caption{Classification report for SVM on hold-out set. Class 1 = Exoplanets, 2 = Secondary Eclipse FPs, 3 = NTP FPs}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/exo_svc_cfmat.png}
\end{center}
\caption{Confusion matrix for the SVM on the hold-out set.}
\end{figure}
\subsection{XGBoost: Results}
\begin{table}[H]
\begin{center}
\begin{tabular}{l|r|r|r|r}
{} & precision & recall & f1-score & support \\ \hline
1 & 0.943820 & 0.957265 & 0.950495 & 351.00000 \\ \hline
2 & 0.867868 & 0.900312 & 0.883792 & 321.00000 \\ \hline
3 & 0.773913 & 0.674242 & 0.720648 & 132.00000 \\ \hline
accuracy & 0.888060 & 0.888060 & 0.888060 & 0.88806 \\ \hline
macro avg & 0.861867 & 0.843940 & 0.851645 & 804.00000 \\ \hline
weighted avg & 0.885601 & 0.888060 & 0.886128 & 804.00000 \\ \hline
\end{tabular}
\end{center}
\caption{Classification report for XGBoost on hold-out set. Class 1 = Exoplanets, 2 = Secondary Eclipse FPs, 3 = NTP FPs}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/exo_xgb_cfmat.png}
\end{center}
\caption{Confusion matrix for the gradient boosting classifier on the hold-out set.}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[totalheight=6cm]{figures/xgb_feature_imp.png}
\end{center}
\caption{Feature importances for XGBoost Model}
\end{figure}
\subsection{Modeling: Discussion}
\paragraph{} Model performance on the hold-out set and in cross-validation (see the notebooks for cross-validation performance) is comparable across the models, although it is clear that the tree ensembles outperform the kernelized SVM. There are some general trends that all of these models share. First, precision, recall, and F1 score for class 1 are very good. \textbf{This implies that we are separating true exoplanets from false positives very well.} The models also do a solid job of identifying secondary eclipse FPs. Predictions for class 3 are weaker: precision is fair, but recall is poor, meaning the models label a significant number of class 3 light curves as some other class. A little digging shows that these light curves are being marked as class 2 -- which is something we expected based on our EDA. This is not a major concern, since in the end we care most about whether a light curve corresponds to an exoplanet or not.
\paragraph{}
In terms of the metrics and model complexity, XGBoost seems to be doing the best. The model has a similar number of trees to our optimal random forest, but uses shallow trees of depth 2 rather than trees of depth 18. For class 1, the precision is 0.94, the recall is 0.96, and the f1-score is 0.95. For class 2, the precision is 0.87, the recall is 0.90, and the f1-score is 0.88. Not too shabby. I'd choose this model as the winner.
\paragraph{} The fact that all of these models perform well, to varying degrees, lends credence to the argument that the constructed features and the feature engineering are what matter most here. A look at the feature importances for the random forest and XGBoost corroborates this. All of the features appear to be important, with LPP\_1, min, and the weak secondary statistic the most important. In fact, the feature importances are ranked in exactly the same order for both models. This consistency is a further hint that the constructed features are driving much of the performance.
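\paragraph{} As an aside, importances like those plotted above can be read straight off a fitted pipeline. The short sketch below assumes the hypothetical search object and step name from the earlier modeling sketch, together with the eight final features in table order; it is not the project's actual plotting code.
\begin{verbatim}
import pandas as pd

feature_names = ["Period", "Duration", "even_odd_stat", "p_secondary",
                 "min", "max", "LPP_1", "LPP_2"]

# `search` is the fitted GridSearchCV from the modeling sketch; the final
# pipeline step was named "clf" there.
best_clf = search.best_estimator_.named_steps["clf"]
importances = pd.Series(best_clf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))
\end{verbatim}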
\section{Conclusions}
\paragraph{} It seems that we have built a pretty good light-curve vetter for exoplanet transit identification within the Kepler Object of Interest catalog. Ongoing missions such as K2 and TESS (the Transiting Exoplanet Survey Satellite) are finding new objects of interest, and they too have data validation pipelines that are quite similar in architecture to the Kepler data validation stream. It would not take much to generalize our feature extraction, pre-processing, and prediction pipeline appropriately and use it for light curve classification in other missions.
\paragraph{} Ideally, we would have built a vetter that relies less on the mission data validation streams. That would require incorporating instrument denoising, primary transit event detection, period extraction, light-curve whitening, and detrending on completely raw data. This is definitely a direction worth pursuing, as it would cover the whole process end-to-end. For now, though, our project does well on flagged objects of interest, which is what we intended to look at. The rest is for the future.
\end{document}
%!TEX TS-program = lualatex
%!TEX encoding = UTF-8 Unicode
\documentclass[12pt]{exam}
\usepackage{graphicx}
\graphicspath{{/Users/goby/Pictures/teach/163/lab/}
{img/}} % set of paths to search for images
\usepackage{geometry}
\geometry{letterpaper, left=1.5in, bottom=1in}
%\geometry{landscape} % Activate for for rotated page geometry
%\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{amssymb, amsmath}
%\usepackage{mathtools}
% \everymath{\displaystyle}
\usepackage{fontspec}
\setmainfont[Ligatures={TeX}, BoldFont={* Bold}, ItalicFont={* Italic}, BoldItalicFont={* BoldItalic}, Numbers={OldStyle}]{Linux Libertine O}
\setsansfont[Scale=MatchLowercase,Ligatures=TeX, Numbers={OldStyle}]{Linux Biolinum O}
\usepackage{microtype}
%\usepackage{unicode-math}
%\setmathfont[Scale=MatchLowercase]{Asana Math}
%\setmathfont[Scale=MatchLowercase]{XITS Math}
% To define fonts for particular uses within a document. For example,
% This sets the Libertine font to use tabular number format for tables.
%\newfontfamily{\tablenumbers}[Numbers={Monospaced}]{Linux Libertine O}
%\newfontfamily{\libertinedisplay}{Linux Libertine Display O}
\usepackage{booktabs}
\usepackage{multicol}
\usepackage[normalem]{ulem}
\usepackage{longtable}
%\usepackage{siunitx}
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\usepackage{enumitem}
\setlist[enumerate]{font=\normalfont\scshape}
\setlist[enumerate,1]{leftmargin=*}
\usepackage{stmaryrd}
\usepackage{hyperref}
%\usepackage{placeins} %PRovides \FloatBarrier to flush all floats before a certain point.
\usepackage{hanging}
\usepackage[sc]{titlesec}
\makeatletter
\def\SetTotalwidth{\advance\linewidth by \@totalleftmargin
\@totalleftmargin=0pt}
\makeatother
\pagestyle{headandfoot}
\firstpageheader{BI 063: Evolution and Ecology}{}{\ifprintanswers\textbf{KEY}\else Name: \enspace \makebox[2.5in]{\hrulefill}\fi}
\runningheader{}{}{\footnotesize{pg. \thepage}}
\footer{}{}{}
\runningheadrule
\begin{document}
\subsection*{Depicting your hypothesis as a phylogenetic tree}
Below is a table of 21 organisms. If you do not know what some of the
organisms are, try looking them up in a dictionary, your text, or on the
internet (Google, as usual, is a good place to start). You will use this
list to make a hypothesis that shows how these organisms are or
are not related to each other. Your hypothesis must take the form of
a phylogenetic tree or trees. Your hypothesis should reflect \textit{your}
ideas about whether the organisms are related in any way to each other.
You cannot make an incorrect hypothesis, as long as you follow the simple rules
outlined below. Do not copy a tree from the internet or
from a friend. You can make as many or as few trees as
needed to convey your idea. Below, I give you a few tips to help
you make a good hypothesis with correctly drawn trees.
\begin{longtable}[c]{@{}L{1in}L{0.6in}|L{1in}L{0.6in}|L{1in}L{0.6in}@{}}
\toprule
\multicolumn{2}{c|}{Organism} &
\multicolumn{2}{c|}{Organism} &
\multicolumn{2}{c}{Organism}\tabularnewline
%
alligator & \includegraphics[width=0.5in]{alligator_small} &
\emph{E. coli} & \includegraphics[width=0.5in]{ecoli_small} &
marine worm & \includegraphics[width=0.5in]{marine_worm}\tabularnewline
%
bass & \includegraphics[width=0.5in]{bass} &
frog & \includegraphics[width=0.5in]{frog} &
\emph{Paramecium} & \includegraphics[width=0.5in]{paramecium}\tabularnewline
%
bat & \includegraphics[width=0.5in]{bat} &
fungus & \includegraphics[width=0.4in]{fungus} &
pigeon & \includegraphics[width=0.5in]{pigeon}\tabularnewline
%
bison & \includegraphics[width=0.5in]{bison} &
\emph{Homo sapiens} & \includegraphics[width=0.5in]{human} &
praying mantis & \includegraphics[width=0.4in]{praying_mantis}\tabularnewline
%
cactus & \includegraphics[width=0.35in]{cactus} &
land snail & \includegraphics[width=0.5in]{land_snail} &
snake & \includegraphics[width=0.4in]{snake}\tabularnewline
%
cat & \includegraphics[width=0.4in]{cat} &
macaque & \includegraphics[width=0.5in]{macaque} &
wasp & \includegraphics[width=0.5in]{wasp}\tabularnewline
%
chimpanzee & \includegraphics[width=0.5in]{chimpanzee} &
maple tree & \includegraphics[width=0.4in]{maple_tree} &
whale & \includegraphics[width=0.6in]{whale}\tabularnewline
%
\bottomrule
\end{longtable}
\vspace*{\baselineskip}
\noindent\textsc{General tips}
\begin{enumerate}
\item
Your hypothesis needs to account for all 21
organisms listed above. This means that you \emph{must} include
all 21 on your tree.
\item
An easy way to start is to put all the organisms across the top, grouping
them by any relationships you will be drawing. All 21 of those are
alive today, so all 21 had better be at the top of the tree, because
that is the present time.
%\end{enumerate}
\begin{longtable}[c]{@{}ll@{}}
First draw them: & Then draw the lines:\tabularnewline
%
\includegraphics[width=0.33\textwidth]{draw_step1} &
\includegraphics[width=0.33\textwidth]{draw_step2} \tabularnewline
%
\end{longtable}
%\begin{enumerate}
\item
The vertical (Y) axis represents time. Present time is at the top. Farther
down the tree or page represents farther back in time. The oldest
organism is thus the farthest back, or at the bottom of your tree. If
the first organism or organisms are among the 21 listed above,
you can put their names at the bottom too, but you do not have to. In
fact, I recommend that you do not. Just starting a line is okay, as
shown in earlier exercises. Obviously, if your first organisms are
not among the 21, you will just have to start with lines at the bottom.
\item The horizontal (X) axis does not represent anything in most phylogenetic trees.
\item
A line (branch) indicates an ancestor to descendant relationship, where
successive generations would be the (invisible) points that make up
the line leading from the start of life at the bottom to the present
day at the top. Of course, if you want an organism to start sometime
later, just do not start its line all the way down at the bottom. A
fork in the line indicates a lineage splitting, as you saw in the
Phylogenetic Forest exercise.
\item
You \emph{must} include a time scale labeled along the Y-axis. Do
not simply write Present and Past. You must have a specific time
scale, but the actual units (e.g., thousands of years, millions of
years) are part of your hypothesis.
\item
Only two organisms can branch from one point. That is, you should get
a split like a “goal post” (preferred) or {\Large$\Ydown$} with one kind of
organism on one branch, a
different organism on the other. This means that two groups of individuals
from the same species got separated and one of the groups changed
enough that they are no longer the same species. It would not happen
that three groups would all do this at precisely the same moment.
Hence,
\newpage
\begin{longtable}[c]{@{}lL{0.2in}lL{0.2in}l@{}}
this is possible, & &
this is \emph{not}, & &
but this is!\tabularnewline
%
\includegraphics[height=2in]{03d_possible1} & &
\includegraphics[height=2in]{03d_possible_not} & &
\includegraphics[height=2in]{03d_possible2}\tabularnewline
%
\end{longtable}
\item
The image below shows nine examples of tree drawing, with explanations of each tree. Some trees show common mistakes made when drawing the trees (\textsc{b}--\textsc{e}).
The other trees show a few things that are okay (\textsc{a, f--i}).\\[1\baselineskip] \textit{Do not do \textsc{b, c, d,} or \textsc{e}!}
\end{enumerate}
\noindent\includegraphics[width=\textwidth]{03d_example_do_dont}
\begin{enumerate}
\def\labelenumi{\alph{enumi}.}
\item
This shows a \emph{Paramecium} giving rise first to \emph{E. coli} and
then to \emph{Amoeba}. All three are alive today—they are
\emph{extant}. This is okay, and a testable
hypothesis.
\item
This shows a \emph{Paramecium} giving rise first to \emph{E. coli} and
\emph{Amoeba}. The \emph{Paramecium} is not alive today, though,
according to this hypothesis. It is extinct. How do I know? It
is not represented as a branch leading to the top of the tree. What
this says is that it evolved into something else, but no
\emph{Paramecium} line lasted to the present. You would not want to do
this because \emph{Paramecium} is a living organism
today.
\item
This shows \emph{Paramecium} giving rise first to \emph{E. coli} and
then \emph{Amoeba}. The \emph{E. coli} went extinct a little less
than 2 million years ago (\textsc{mya}), the other two are extant. How do I
know this? The \emph{E. coli} line branches off about 3 \textsc{mya} but
stops less than 2 \textsc{mya}; this is when it became extinct.
Again, \emph{E. coli} is a modern organism, so you cannot do
this.
\item
This is probably impossible. It shows \emph{Amoeba} and \textit{Paramecium}
merging to become one organism. Other than a few rare cases with
plants, this just does not happen. \emph{E. coli} is shown joining
them later. Again, this is highly unlikely.
Now, you may be thinking that some organisms are formed by having two different
species mate to produce a new species. In fact, one definition of a
species in biology is a group of organisms that can mate with each
other and produce fertile offspring, but \emph{cannot} mate with
members of another species to produce fertile offspring.
Horses and donkeys are two species; they
can mate, but the offspring are mules and are sterile. This method
of producing new species just does not work, except in a few instances
involving \emph{very} closely related animals, or slightly less closely
related plants. Usually, with animals, such matings only work with closely
related species within the same genus, and not always then.
On the other hand, members of a species \emph{can} become so different
from the rest of the species that they can no longer mate with the
others and produce fertile offspring. We then say that they constitute
a new species. So, diagrams like \textsc{a}, \textsc{b}, and \textsc{c} can happen—we have
actually observed the process of new species forming in the laboratory.
But the process shown in \textsc{d} has only been observed in a few cases with
closely related organisms.
\item
This is a common mistake when drawing a tree. What this shows is that
\emph{Paramecium} evolved into \emph{Amoeba} which evolved into
\emph{E. coli}. This says that only \emph{E. coli} is alive today,
the other two organisms are extinct, and of course, they actually
are not.
\item
This says that \emph{Paramecium} is now as it always has been and that
it did not evolve into anything or evolve from anything. This is a
plausible, testable hypothesis.
\item
This shows the same thing as \textsc{f}. You do not need to put the name at the
bottom. This is okay.
\item
This shows that \emph{Paramecium} just appeared on Earth very
recently. This is also okay.
\item
This shows that \emph{Paramecium} appeared on Earth a little more than
2 \textsc{mya}. Again, this is okay.
\end{enumerate}
\noindent\textbf{N\textsc{ote}: Use \emph{only} the 21 organisms listed on the first page
of this assignment. Do not include the other organisms, such as
\emph{Amoeba}, dog or gorilla, used in the examples above, or extinct organisms like dinosaurs.}
\end{document}
Language model adaptation can be applied when only a small amount of training data is available for the
task at hand, but much more data from other, less closely related, sources is available. {\tt tlm} supports two adaptation methods.
\subsection{Minimum Discriminative Information Adaptation}
MDI adaptation is used when the domain-related data is too small to
estimate a full LM, but is enough to estimate a unigram LM. Basically, the $n$-gram probabilities of a
general-purpose (background) LM are scaled so that they match the
target unigram distribution.
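\noindent
As a rough sketch of the underlying idea (the exact estimator is described in the IRSTLM literature), the background probabilities $P_B(w \mid h)$ are scaled towards the target unigram distribution $P_T(w)$:
\[
P_A(w \mid h) \;\propto\; P_B(w \mid h) \left( \frac{P_T(w)}{P_B(w)} \right)^{\gamma},
\]
where $\gamma$ is the adaptation rate set with {\tt -ar} and the result is renormalized over $w$ for every history $h$.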
\noindent
Relevant parameters:
\begin{itemize}
\item {\tt -ar=value}: the adaptation {\tt rate}, a real number ranging
from 0 (=no adaptation) to 1 (=strong adaptation).
\item {\tt -ad=file}: the adaptation file, either a text or a
unigram table.
\item {\tt -ao=y}: open vocabulary mode, which must be set if the adaptation file
might contain new words to be added to the basic dictionary.
\end{itemize}
\noindent
As an example, we apply MDI adaptation on the ``adapt'' file:
\begin{small}
\begin{verbatim}
$> tlm -tr=train.www -lm=wb -n=3 -te=test -dub=1000000 -ad=adapt -ar=0.8 -ao=yes
n=49984 LP=326327.8053 PP=684.470312 OVVRate=0.04193341869
\end{verbatim}
\end{small}
\noindent
\paragraph{Warning:} Modified shift-beta smoothing cannot be applied in open
vocabulary mode ({\tt -ao=yes}). In this case, you should either
change the smoothing method or simply add the adaptation text to the
background LM (using the {\tt -aug} parameter of {\tt ngt}). In
general, the latter solution should provide better performance.
\begin{small}
\begin{verbatim}
$> ngt -i=train.www -aug=adapt -o=train-adapt.www -n=3 -b=yes
$> tlm -tr=train-adapt.www -lm=msb -n=3 -te=test -dub=1000000 -ad=adapt -ar=0.8
n=49984 LP=312276.1746 PP=516.7311396 OVVRate=0.04193341869
\end{verbatim}
\end{small}
\subsection{Mixture Adaptation}
\noindent
Mixture adaptation is useful when you have enough training data to
estimate a bigram or trigram LM and you also have data collections
from other domains.
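\noindent
Roughly speaking (sketch only; see the IRSTLM documentation for how the weights are actually estimated), the resulting LM interpolates the sub-models:
\[
P_{\mathrm{mix}}(w \mid h) \;=\; \sum_{i} \lambda_i \, P_i(w \mid h), \qquad \sum_{i} \lambda_i = 1 .
\]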
\noindent
Relevant parameters:
\begin{itemize}
\item {\tt -lm=mix}: specifies the mixture smoothing method.
\item {\tt -slmi=<filename>}: specifies a file with information about the LMs to combine.
\end{itemize}
\noindent
In the example directory, the file {\tt sublmi} contains the following lines:
\begin{verbatim}
2
-slm=msb -str=adapt -sp=0
-slm=msb -str=train.www -sp=0
\end{verbatim}
\noindent
This means that we train a mixture model on the {\tt adapt} data set and
combine it with the training data. For each data set, the desired
smoothing method is specified (disregard the parameter {\tt -sp}). The file
used for adaptation is the one in the FIRST position.
\begin{verbatim}
$> tlm -tr=train.www -lm=mix -slmi=sublm -n=3 -te=test -dub=1000000
n=49984 LP=307199.3273 PP=466.8244383 OVVRate=0.04193341869
\end{verbatim}
\noindent
{\bf Warning}: for computational reasons, the $n$-gram
table specified by {\tt -tr} is expected to contain AT LEAST the $n$-grams of the last
table specified in the slmi file, i.e.\ {\tt train.www} in the example.
Faster computations are achieved by putting the largest data set as the
last sub-model in the list and using the union of all data sets as the training
file.
\noindent
It is also IMPORTANT that a large {\tt -dub} value is specified so that
probabilities of sub-LMs can be correctly computed in case of
out-of-vocabulary words.
% MIT License
% Copyright (c) 2019-2020 Simon Crase
% Permission is hereby granted, free of charge, to any person obtaining a copy
% of this software and associated documentation files (the "Software"), to deal
% in the Software without restriction, including without limitation the rights
% to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
% copies of the Software, and to permit persons to whom the Software is
% furnished to do so, subject to the following conditions:
% The above copyright notice and this permission notice shall be included in all
% copies or substantial portions of the Software.
% THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
% IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
% FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
% AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
% LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
% OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
% SOFTWARE.
\documentclass[]{article}
\usepackage{caption,subcaption,graphicx,float,url,amsmath,amssymb,tocloft}
\usepackage[hidelinks]{hyperref}
\usepackage[toc,acronym,nonumberlist]{glossaries}
\usepackage{titling}
\setacronymstyle{long-short}
\usepackage{glossaries-extra}
\graphicspath{{figs/}}
\setlength{\cftsubsecindent}{0em}
\setlength{\cftsecnumwidth}{3em}
\setlength{\cftsubsecnumwidth}{3em}
% I snarfed the next line from Stack exchange
% https://tex.stackexchange.com/questions
% /42726/align-but-show-one-equation-number-at-the-end
% It allows me to suppress equation numbers with align*,
% then selectively add equation numbers
% for lines that I want to reference slsewhere
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}}
% Add logo at start of document
\pretitle{
\begin{center}
\includegraphics[width=6cm]{KanjiLife}\\
}
\posttitle{\end{center}}
%opening
\title{
Notes from Origins of Life\\
Week 6\\
Astrobiology \& General Theories of Life
}
\author{Simon Crase (compiler)\\[email protected]}
\makeglossaries
% Prefix section numbers with week number
\renewcommand{\thesection}{6.\arabic{section}}
\loadglsentries{glossary-entries}
\renewcommand{\glstextformat}[1]{\textbf{\em #1}}
\begin{document}
\maketitle
\begin{abstract}
These are my notes from the $6^{th}$ Week of the Santa Fe Institute Origins of Life Course\cite{sfi2020}.
The content and images contained herein are the intellectual property of the Santa Fe Institute, with the exception of any errors in transcription, which are my own.
These notes are distributed in the hope that they will be useful,
but without any warranty, and without even the implied warranty of
merchantability or fitness for a particular purpose. All feedback is welcome,
but I don't necessarily undertake to do anything with it.\\
\LaTeX source for this week's lectures can be found at\\
\url{https://github.com/weka511/complexity/tree/master/origins}.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\listoffigures
\section[Introduction]{Introduction--Chris Kempes}
Over the course of these lectures we have provided a range of tools and perspectives to help us understand the origin of life on our own planet, and these can be used to develop a general theory of life. Our ultimate goal is a theory capable of uncovering the history we know about, but also of bounding the possibilities for other types of life and helping us recognize other forms of life. In this unit we'll discuss the search for life beyond Earth and how origins of life research fits into this effort. We'll also discuss general evolutionary processes and abstract life.
\section[Origins of Life and Astrobiology]{Origins of Life and Astrobiology-- Sara Imari Walker}
Hi, I'm Sara Walker, a professor at Arizona State University and an astrobiologist.
I study the origin of life. One of the questions you might ask is: why is the origin of life so critical to the study of astrobiology?
Astrobiologists are really interested in whether or not we can identify a living world, that is, whether we can identify life on another planet.
We want to discover aliens, and ultimately the question is whether we'll be able to distinguish planets that have life from planets that don't. Within our own solar system we can send robotic missions to other planets to look for life on the surface of those worlds, but for exoplanets -- planets in distant solar systems -- all that we're going to get is a little bit of data about the entire planet.
As astrobiologists we are therefore interested in thinking about life not only at the scale of individual organisms, but also at the scale of entire planets.
Just to convey the magnitude of the problem: many people want to look at life in new ways, drawing insights from different fields of science, to try to understand how we would identify life.
Figure \ref{fig:Jupiter:Tellus} shows two worlds that are probably familiar to everyone, Jupiter and Earth; we know one of these planets is inhabited and the other isn't.
\begin{figure}[H]
\caption[What makes worlds with life different?]{What makes worlds with life different? Not just non-equilibrium structures.}\label{fig:Jupiter:Tellus}
\begin{subfigure}[b]{0.44\textwidth}
\caption{Jupiter has its Great Red Spot}\label{fig:Jupiter}
\includegraphics[width=\textwidth]{Jupiter}
\end{subfigure}
\;\;\;
\begin{subfigure}[b]{0.5\textwidth}
\caption{Earth at night, showing cities.}\label{fig:Earth}
\includegraphics[width=\textwidth]{Tellus}
\end{subfigure}
\end{figure}
The inhabited world is obviously our very own Earth, and we can see living structures on its surface: what we see in Figure \ref{fig:Earth} is cities at night. When we think about distinguishing the living processes on Earth from the non-living processes on Jupiter, it is clear that both planets have non-equilibrium structures on their surfaces. Jupiter--Figure \ref{fig:Jupiter}--has its Great Red Spot and Earth has cities. So when we want to define the properties of a planet that are associated with life, it is not just about disequilibria: cities are fundamentally different from the Great Red Spot, even though both are non-equilibrium structures. We therefore have to go a little further and understand the origins of the processes that led to the life-associated structures on the surface of our planet. At the root of that question is understanding the probability of life emerging on a planet--Figure \ref{fig:P:Life}--and how we can understand that as a planetary-scale process.
\begin{figure}[H]
\caption[Rate of abiogenesis in a prebiotic environment]{Rate of abiogenesis in a prebiotic environment as a function of its physical and chemical conditions}\label{fig:P:Life}
\includegraphics[width=0.9\textwidth]{P_Life}
\end{figure}
There are really two ways of constraining the likelihood of life emerging on a planet. We don't think that Jupiter is a living planet:
based on our observations, it \emph{might} turn out to satisfy some definition of life down the road if we actually come up with a theory for life and Jupiter satisfies that theory, but right now we don't think Jupiter is alive.
So we need observations of other living worlds to constrain the probability of life, and right now Earth is the only example we know; to go beyond it we would actually have to detect alien life and determine its abundance. This is the way people usually think about astrobiology: looking at other worlds, trying to identify whether there is alien life on them, and then using those observations to constrain the probability that a planet like Earth gives rise to life on its surface while a planet like Jupiter does not.
But we can also think about theory and experiment to constrain the probability for life. From this view the idea is really to try to uncover the universal principles of life that might actually allow us to build predictive models for the circumstances under which life should emerge. So, we would have some a priori theory that would enable us to predict $P(life)$--the probability of life emerging.
That theory should be able to account for the differences in Figure \ref{fig:Jupiter:Tellus}, i.e. to explain why it's not just a non-equilibrium process on the surface of a planet. In the case of formation of cities or forests--or any of the kind of rich structure that we see on earth that's a product of biology--the theory would be able to explain what those things are and be able to predict what kinds of other
examples of life we might be able to see on other planets and their likelihood.
But what we are really talking about, in order to constrain the probability of life, is not just the probability of forests or cities on the surface of planets as opposed to the probability of great red spots or other kinds of dissipative structures that are not alive. What we really want to understand is the likelihood of life emerging on that planet at all.
So we really need to be able to solve the origin of life problem in order to do astrobiology effectively and constrain the likelihood of life in the universe. To do that, we have to come up with better theories for the origins of life and be able to understand how life emerges. One way I like to think about it is that we are looking for new principles that would explain life not just on Earth but on other planets, and I really love this quote from David Deutsch, which I think articulates very nicely the kind of processes happening on planets that we need to understand in order to understand life.
\begin{quotation}
Base metals can be transmuted into gold by stars, and by intelligent beings who understand the processes that power stars, and by nothing else in the universe--David Deutsch\cite{deutsch2011beginning}.
\end{quotation}
We have a physics that explains things like stars, and the physics of Jupiter: why Jupiter has a great storm, the Great Red Spot, on its surface.
But we don't have a physics that explains the evolution of a planet like our own: how life emerges on that planet, or how it evolves over time to produce the diversity of structures on the surface of our planet today, such as cities and
thinking human beings.
\begin{figure}[H]
\begin{center}
\caption[We don't have a physics that explains the evolution of our planet]{We don't have a physics that explains the evolution of a planet like our own}\label{fig:tellus}
\includegraphics[width=0.5\textwidth]{Tellus}
\end{center}
\end{figure}
The origin of life problem is really a problem of how that entire process of life gets started in the first place and it's ultimately critically important to the field of astrobiology that we understand that process because we want to know on how many worlds that occurs.
\section{Exoplanets}
\subsection[The Habitable Zone]{The Habitable Zone--Elizabeth Tasker}
This lecture takes us away from our own planet to look at what we currently know about
planets orbiting around other stars.
Before the early 1990s, the only planets we knew for sure that existed were the worlds that orbited around our own Sun, but as our instruments became sensitive enough to spot
the dim whisper of a planet around other stars in our galaxy, we discovered our planetary system was one of multitudes--Figure \ref{fig:exoplants}.
\begin{figure}[H]
\caption{Exoplanets by discovery technique}\label{fig:exoplants}
\includegraphics[width=0.9\textwidth]{Exoplanets}
\end{figure}
We now know of thousands of extrasolar planets or exoplanets--planets that orbit stars
other than our Sun.
This results in an obvious question: could any of these newly discovered worlds be habitable?
\begin{figure}[H]
\caption{Could any of these newly discovered
worlds be habitable?}\label{fig:could-any-be-habitable}
\includegraphics[width=\textwidth]{could-any-be-habitable}
\end{figure}
The problem with that question is, while we have discovered many worlds, we actually know very little about each planet. The majority of planets
we have discovered so far have been found by one of two techniques--Figure \ref{fig:exoplants}:
\begin{enumerate}
\item the radial velocity technique used by ground-based telescopes, such as the \gls{gls:ESO} in Chile----Figure \ref{fig:radio-velocity-technique0};
\item the transit technique used by instruments such as the Kepler space telescope and its successor--\gls{gls:TESS}--Figure \ref{fig:transit}.
\end{enumerate}
\begin{figure}[H]
\caption[The radial velocity technique]{The radial velocity technique, sometimes known as the "Doppler wobble," detects a planet via the tiny wobble it excites in the star. While we normally think of the star as stationary and the planet in orbit, in truth, both the star and planet orbit their common center of mass--Figure \ref{fig:radio-velocity-technique}. As a star is so much bigger than the planet, this center of mass lies very close to the star's own center, causing its orbit to be just a tiny wobble in comparison to the planet's wide circuit. This wobble causes the star to move periodically slightly further away and then closer to the Earth. As the star moves slightly from the Earth, its light waves stretch out and redden slightly--Figure \ref{fig:radio-velocity-technique1}. Conversely, as a star moves back towards us, the light waves compress and become bluer. This regular shift from red to blue is what astronomers can measure to detect a planet--Figure \ref{fig:radio-velocity-technique2}.}\label{fig:radio-velocity-technique0}
\begin{subfigure}[t]{0.3\textwidth}
\caption{The star and planet orbit their common center of mass}\label{fig:radio-velocity-technique}
\includegraphics[width=\textwidth]{radio-velocity-technique}
\end{subfigure}
\;\;\;
\begin{subfigure}[t]{0.3\textwidth}
\caption{As the star moves slightly from the Earth, its light waves stretch out and redden slightly}\label{fig:radio-velocity-technique1}
\includegraphics[width=\textwidth]{radio-velocity-technique1}
\end{subfigure}
\;\;\;
\begin{subfigure}[t]{0.3\textwidth}
\caption{This regular shift from red to blue is what astronomers can measure to detect a planet}\label{fig:radio-velocity-technique2}
\includegraphics[width=\textwidth]{radio-velocity-technique2}
\end{subfigure}
\end{figure}
\begin{figure}[H]
\begin{center}
\caption[The transit technique]{The second main method for planet detection is the transit technique. Here, a slight dip in the star's brightness is detected as the planet passes in front of the star as seen from Earth.}\label{fig:transit}
\includegraphics[width=0.5\textwidth]{transit}
\end{center}
\end{figure}
These two methods give you just two properties about the planet--Figure \ref{fig:planet-properties}.
The transit technique gives you an estimate of the planet's radius while the radial velocity technique tells you about the planet's minimum mass.
This may be significantly less than the true mass of the planet, because the radial velocity technique only measures the component of the star's wobble directed along our line of sight.
If the planet's orbit is tilted with respect to us, then part of the star's motion
is across our line of sight rather than along it.
We cannot measure this component and so will underestimate the planet's mass.
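As a back-of-the-envelope summary of why the two methods yield these particular quantities (standard textbook relations, added here only for orientation): the transit technique measures the fractional dip in stellar flux, which for a dark planet is approximately
\[
\frac{\Delta F}{F} \approx \left( \frac{R_p}{R_\star} \right)^2 ,
\]
so it yields the planetary radius $R_p$ once the stellar radius $R_\star$ is known. The radial velocity technique measures the star's line-of-sight velocity semi-amplitude, which scales as $m_p \sin i$, where $i$ is the orbital inclination; since $\sin i \le 1$, an unknown tilt can only make the true mass larger than the inferred value, hence a \emph{minimum} mass.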
\begin{figure}[H]
\begin{center}
\caption{These two methods give you just two properties about the planet}\label{fig:planet-properties}
\includegraphics[width=0.8\textwidth]{planet-properties}
\end{center}
\end{figure}
Both techniques also tell you about the amount of radiation the planet receives from its star.
But this can translate into a very different surface temperature, as it does not account for
the heat-trapping effects of different atmospheric gases.
The challenge in determining whether a planet is habitable is therefore that we can only measure two or three properties, and none of these actually tell us what it is like on the planet's surface.
This will change as the next generation of telescopes will be able to detect light that passes through the planet's atmosphere.
Different molecules in the atmosphere absorb different wavelengths of light, providing a fingerprint of missing wavelengths that indicate atmospheric composition--our first hint at what is happening on the planet's surface.
But, this brings us to a new problem: such atmospheric spectroscopy for rocky, temperate planets is time-consuming and difficult.
We therefore need a way of selecting planets most likely to reveal interesting results.
But how do we select planets best suited for habitability without knowing any surface properties?
Let's think about what we want to find.
It's going to be easiest to recognize Earth-like life, that is, water and
carbon-based chemistry.
Also, this needs to be detectable, which means the water needs to be on the surface of the planet, not a subsurface system like Europa.
Based on this, we can ask the question: how much \gls{gls:insolation} does an Earth-like planet need?
The answer to this is the \emph{Classical Habitable Zone}--Figure \ref{fig:classical:habitable:zone}.
\begin{figure}[H]
\caption{Classical Habitable Zone}\label{fig:classical:habitable:zone}
\includegraphics[width=\textwidth]{ClassicalHabitableZone}
\end{figure}
The Classical Habitable Zone is where an Earth-like planet, that is, a planet with our surface pressure, atmospheric gases and geological processes, can support water on the surface.
Often, in exoplanet literature, this is simply referred to as the "habitable zone", as we don't yet know about planets other than the Earth that can support life.
At the inner edge of the habitable zone, it is too warm for surface water on the Earth and it evaporates.
At the outer edge, carbon dioxide condenses into clouds and is no longer able to provide
the thermal insulation of a greenhouse gas--so the planet freezes.
Climate models predict that the habitable zone should stretch between 0.99 au and 1.67 au,
where 1 au is the average distance of the Earth from the Sun.
Our planet, therefore, sits right on the inner edge.
A slight extension to this is known as the "optimistic habitable zone," which can broaden these limits based on the idea that Venus and Mars probably have
supported surface water in their past--Figure \ref{fig:optimistic:habitable:zone}.
\begin{figure}[H]
\caption[Optimistic Habitable Zone]{Optimistic Habitable Zone\cite{kasting1993habitable,kopparapu2013habitable}}\label{fig:optimistic:habitable:zone}
\includegraphics[width=0.9\textwidth]{OptimisticHabitableZone.jpg}
\end{figure}
So, an earth-like planet could have a period of habitability just outside the habitable zone edges.
The edges of the classical habitable zone are only calculated for the Earth.
This is easily demonstrated as, while Venus sits outside the habitable zone, both the Moon and Mars orbit within it but neither are Earth-like enough to support liquid water in this region.
Different planets might have different habitable zones at different locations, or they may not have a habitable zone at all.
Of the planets we have found so far orbiting in the classical habitable zone, almost 15 times as many are large enough to be likely to have thick, Neptune-like atmospheres compared to planets that might be rocky.
We have discovered planets that are the right size to be rocky and orbit entirely within
the habitable zone--Figure \ref{fig:are:these:earthlike}.
Are these Earth-like enough to support liquid water in this region?
We don't know. They may have very different atmospheric gases or geology that makes
surface water impossible. The only thing we can say is that if another habitable,
Earth-like planet is out there, it would be in the habitable zone, but being in the habitable zone does not mean you're Earth-like enough for life.
\begin{figure}[H]
\begin{center}
\caption{Are these exoplanets Earth-like?}\label{fig:are:these:earthlike}
\includegraphics[width=0.9\textwidth]{AreTheseEarthlike}
\end{center}
\end{figure}
So, in conclusion:\begin{itemize}
\item we've discovered thousands of exoplanets, many of which are similar in size to the Earth;
\item at the moment, we have no way of knowing what their surfaces are like. Note, in particular, that the Earth and Venus are both very similar in size--so, they are both Earth-sized planets;
\item our next generation of telescopes will be able to detect the atmosphere of these worlds and tell us something about their surfaces for the first time.
\item the habitable zone is a useful concept for selecting planets for these new telescopes but it offers no guarantee that a planet is actually habitable.
\end{itemize}
References:
\begin{itemize}
\item If you'd like to try playing with a simple climate model of an Earth-like planet, you can head over to Earthlike World \cite{earthlike.world} or the associated Twitter feed. This website lets you see how different a planet might be from our own world today, even if it did have the same geological cycles as our own.
\item The NASA \gls{gls:NExSS} "Many Worlds" blog\cite{nexss.info} covers the latest news for exoplanets and many origin of life stories.
\item There is also a more technical overview of the search for biosignatures in a paper led by Yuka Fujii, published in \emph{Astrobiology} last year \cite{fujii2018exoplanet}.
\item See also \cite{villanueva2015unique}.
\end{itemize}
\subsection[Exoplanet Atmospheric Characterization]{Exoplanet Atmospheric Characterization--Yuka Fujii}
Astronomers have discovered
thousands of extrasolar planets
or exoplanets.
Are any of them inhabited like the Earth?
How can we search for such a planet?
In this lecture, I will talk about
the techniques to study exoplanet
atmospheres and possibly surfaces,
which is an essential step towards
finding life on exoplanets.
Detection of exoplanets typically
comes with two properties:
the size and the orbit,
including the distance from the host star--Figure \ref{fig:discovered:exoplanets}.
\begin{figure}[H]
\caption{Discovered Exoplanets}\label{fig:discovered:exoplanets}
\includegraphics[width=0.9\textwidth]{ExoplanetCharacteristics}
\end{figure}
For evaluating their potential for life,
we definitely need to get
more information.
But, how?
It should be remembered that
exoplanets are light-years away.
This means that it is not realistic
to actually visit them,
nor is it possible
to obtain spatially resolved images,
unlike the case of solar system planets.
They are just point sources
and we have to rely on remote
observations of these faint dots.
But, observations of these faint dots,
if possible at all,
can in principle give us hints
about the nature of the planets.
For example,
if you observe an Earth Twin--Figure \ref{fig:spectrum:earth:twin0},
Figure \ref{fig:spectrum:earth:twin} shows the overall spectrum
you would get.
\begin{figure}[H]
\caption{Spectrum of an earth twin}
\begin{subfigure}[b]{0.3\textwidth}
\caption{Spectrum of an earth twin}\label{fig:spectrum:earth:twin0}
\includegraphics[width=\textwidth]{Tellus}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\caption{Spectrum of an earth twin}\label{fig:spectrum:earth:twin}
\includegraphics[width=\textwidth]{SpectrumEarthTwin}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\caption{At shorter wavelengths the planet scatters light}\label{fig:spectrum:earth:twin1}
\includegraphics[width=\textwidth]{SpectrumEarthTwin1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\caption{At longer wavelengths it emits infrared}\label{fig:spectrum:earth:twin2}
\includegraphics[width=\textwidth]{SpectrumEarthTwin2}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\caption{Absorption by atmospheric species}\label{fig:spectrum:earth:twin3}
\includegraphics[width=\textwidth]{SpectrumEarthTwin3}
\end{subfigure}
\end{figure}
At shorter wavelengths,
the planet shines
by scattering the light
from the host star--Figure \ref{fig:spectrum:earth:twin1}--
and its features there depend on,
for example,
surface composition,
atmospheric pressure and clouds.
On the other hand,
in the invisible infrared range,
the planet emits light
because of its thermal energy--Figure \ref{fig:spectrum:earth:twin2}--
and its baseline depends on
the temperature structure
of the surface layers.
Imprinted in these baselines
are the lower features
due to absorption
by atmospheric species--Figure \ref{fig:spectrum:earth:twin3}.
In the case of an Earth Twin,
they include astrobiologically important
water vapor, oxygen
and ozone features.
We could also potentially use
the time variation of these features
due to planet rotation,
which essentially allows us
to scan the planet
and highlight the regional features.
However, it's not straightforward
to detect the light from exoplanets.
Seen from afar, an exoplanet is
very close to its own host star,
and the star is several to ten
orders of magnitude brighter.
It's equivalent to seeing a firefly
right next to a lighthouse.
Suppose you try to take a picture
of an exoplanet.
Then, the host star is always there,
and, on the imaging plane,
the star is blurred
and the planet is in the skirt of it--Figure \ref{fig:StarIsMuchBrighter}.
\begin{figure}[H]
\caption[The star is many orders of magnitude brighter than its planets]{Unfortunately the star is many orders of magnitude brighter than its planets}\label{fig:StarIsMuchBrighter}
\includegraphics[width=\textwidth]{StarIsMuchBrighter}
\end{figure}
Compared to the peak intensity,
the skirt is orders of magnitude darker,
but planets are often even fainter,
so the planetary signal is buried.
In order to identify the planetary signal,
we need to suppress the light
from the host star
using special instruments.
The idea of such direct imaging
observations of Earth-like planets
dates back to the 1990s.
But, the starlight suppression
is technically challenging.
In the past decade,
direct imaging has been successful
for young, luminous, giant,
gaseous planets at wide orbits--Figure \ref{fig:young:jupiter0}.
Earth-like planets are about
ten times smaller in diameter
and much fainter than
these successful targets.
Efforts are ongoing to achieve
the much higher suppression levels needed
to detect an Earth Twin.
\begin{figure}[H]
\caption{Success with Young Jupiter-like Planets in Distant Orbits}\label{fig:young:jupiter}
\begin{subfigure}[b]{0.25\textwidth}
\caption{Young Jupiter-like Planet}\label{fig:young:jupiter0}
\includegraphics[width=\textwidth]{young-hot-jupiter}
\end{subfigure}
\;\;\;
\begin{subfigure}[b]{0.3\textwidth}
\caption{Images of a fourth planet orbiting HR 8799\cite{marois2010images}}\label{fig:young:jupiter1}
\includegraphics[width=\textwidth]{DirectImaging1.jpg}
\end{subfigure}
\;\;\;
\begin{subfigure}[b]{0.35\textwidth}
\caption{GPI spectra of HR 8799 c, d,
and e\cite{greenbaum2018gpi}}\label{fig:young:jupiter2}
\includegraphics[width=\textwidth]{DirectImaging2.jpg}
\end{subfigure}
\end{figure}
Meanwhile, the discovery
of transiting planets
opened up new possibilities
to study exoplanet atmospheres
without using special instruments--Figure \ref{fig:transiting:planets}.
\begin{figure}[H]
\caption[Transiting Planets]{The discovery
of transiting planets
opened up new possibilities
to study exoplanet atmospheres
without using special instruments}\label{fig:transiting:planets}
\begin{subfigure}[b]{0.2\textwidth}
\caption{A transiting planet}\label{fig:transiting:planets1}
\includegraphics[width=\textwidth]{TransitingPlanets1}
\end{subfigure}
\;\;
\begin{subfigure}[b]{0.3\textwidth}
\caption{A planet which passes right in front of the star
because its orbital plane is close to the line of sight}\label{fig:transiting:planets2}
\includegraphics[width=\textwidth]{TransitingPlanets2}
\end{subfigure}
\;\;
\begin{subfigure}[b]{0.31\textwidth}
\caption{During the transit, a small portion
of the stellar light
is filtered through
the planetary atmosphere}\label{fig:transiting:planets3}
\includegraphics[width=\textwidth]{TransitingPlanets3}
\end{subfigure}
\end{figure}
A transiting planet is a planet which
passes right in front of the star
because its orbital plane
is close to the line of sight--Figure \ref{fig:transiting:planets2}.
During the transit, a small portion
of the stellar light
is filtered through
the planetary atmosphere--Figure \ref{fig:transiting:planets3}.
By analyzing the spectrum
of this tiny portion
and finding the scattering
or absorption features in there,
we can learn about
the atmospheric composition
and the presence of condensates
such as clouds.
This technique is called
"transmission spectroscopy."
In many cases, transiting planets also pass behind the host star
and that's called "planetary" or "secondary" eclipse--Figure \ref{fig:secondary-eclipse}
\begin{figure}[H]
\caption{Secondary Eclipse}
\begin{subfigure}[b]{0.45\textwidth}
\caption{In many cases, transiting planets also pass behind the host star
and that's called "planetary" or "secondary" eclipse}\label{fig:secondary-eclipse}
\includegraphics[width=\textwidth]{secondary-eclipse}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\caption{The difference between
the out-of-eclipse total flux
and the in-eclipse flux
corresponds to the brightness
of the planetary dayside}\label{fig:secondary-eclipse1}
\includegraphics[width=\textwidth]{secondary-eclipse1}
\end{subfigure}
\end{figure}
At the time of the eclipse,
the planetary flux is blocked by the star.
So, the difference between
the out-of-eclipse total flux
and the in-eclipse flux
corresponds to the brightness
of the planetary dayside--Figure \ref{fig:secondary-eclipse1}.
This way, using the planetary eclipse,
we can identify the spectrum
of the planet
without directly separating the star
and the planet in the image plane.
In addition, while the planet
orbits the star,
the varying portion of the planetary
dayside faces us
and the planetary flux
changes in time accordingly--Figure \ref{fig:phase-variation}.
\begin{figure}[H]
\begin{center}
\caption[Phase Variation]{ while the planet orbits the star, the varying portion of the planetary dayside faces us and the planetary flux changes in time accordingly}\label{fig:phase-variation}
\includegraphics[width=0.8\textwidth]{phase-variation}
\end{center}
\end{figure}
In turn, although we cannot resolve
the star and the planet,
the time variation in the total flux
can be attributed to
the planetary component
assuming that the star is stable.
These three techniques
have been successful
with hot Jupiter-like planets
and have revealed
some of the atmospheric species
and thermal structures.
In order to apply them to smaller,
potentially habitable planets, however,
we need to push the current
technology to its limit.
The techniques introduced so far
have pros and cons
and the relevant targets vary.
So, we will use all of them
to study various aspects of exoplanets.
In the next decade,
new powerful observatories
with spectroscopic capability
will come into play.
The main targets will probably be
large gaseous planets
but some basic investigations
of terrestrial sized planets
are also being attempted.
Future missions aimed at
the most detailed study
of potentially habitable planets
are currently under discussion.
In this lecture,
we discussed the key observations
to study the nature of exoplanets
beyond their size in the orbit.
I encourage you to think further
about how you would find life
on an Earth Twin light-years away
using these techniques.
It will provide a unique view
of Earth's biosphere.
Summary: Key observations to characterize atmospheres (and surfaces) of exoplanets
\begin{itemize}
\item Direct Imaging
\item Transmission spectroscopy
\item Secondary eclipse
\item Phase curves
\end{itemize}
Using these techniques, how would you find life on an Earth-twin?
Here are some suggestions for further reading.
\cite{fujii2018exoplanet,sagan1993search,kaltenegger2017characterize,robinson2011earth,deming2013infrared,knutson2007map}
\section[Abstract and general Models for Life]{Abstract and general Models for Life--Sara Imari Walker}
One of the critical questions of
astrobiology is how do we model life;
we can think about abstract or general
models for life. We are
after these because we want to explain life not
only on earth but also on other
worlds and so we need to come up with as
general a principle as possible. So far a
lot of astrobiology has focused on the
idea of trying to define life, so we have
come up with a lot of different
definitions for life
from numerous different perspectives; Figure \ref{fig:life-word-cloud} shows just some of these definitions of life. There are so
many different definitions for life that
it has really become quite a muddle as to
how we should be thinking about it in a
more rigorous way.
\begin{figure}[H]
\begin{center}
\caption[Life from numerous different perspectives]{Life from numerous different perspectives\cite{trifonov2011vocabulary}}\label{fig:life-word-cloud}
\includegraphics[width=0.8\textwidth]{life-word-cloud}
\end{center}
\end{figure}
One of the things that you think about as an astrobiologist is: what's the value of a definition versus a theory or model? A lot of the traditional literature on origins of life has been focused on this idea of defining life so that we can actually be able to identify life on other worlds. The definitions for life are really rather ad hoc and in
some sense they're from observations of life on Earth:
\begin{itemize}
\item for example we know that life on Earth evolved so we might have an evolutionary definition of life;
\item or we know that life on Earth is cellular so we might assume that all life requires cells.
\end{itemize}
What we ultimately really need to be aiming for
in the field of astrobiology is to build
better models and theories which might
be more general and allow us to move
beyond definitions of life that are
anthropocentric but
actually become predictive theories for
how life might look on other worlds.
The challenge that we face with anything
trying to get beyond an anthropocentric
or human-centered or earth-centered
viewpoint is that we only have a single
example of life on Earth; despite all
the diversity of life forms that we see--
trees, cats, people, bacteria in your gut--
all of that life is related by a common ancestry.
The way that astrobiologists talk
about that is to talk about
the \gls{gls:LUCA}.
If we look at the tree of life
as shown in Figure \ref{fig:yatol}, and we trace
the evolution of all the life-forms that
exist today backwards in time they all
converge on \gls{gls:LUCA}, which is a
\emph{population} of cells that lived on the
primitive earth.
\begin{figure}[H]
\caption{We are limited by a single example of life}\label{fig:yatol}
\includegraphics[width=\textwidth]{YATOL}
\end{figure}
We think that all modern life descended from \gls{gls:LUCA}, and the properties that that life-form would have had include DNA\footnote{But see \cite{glansdorff2008last} for an RNA world \gls{gls:LUCA}}, a translation machinery, proteins and a cellular architecture much like a modern cell; it was actually a very advanced life-form, so it doesn't take us all the way back to the origins of life on Earth.
The fact that all life shares this kind of common biochemical architecture is actually really limiting, because it means that we only have one example of life to go on, and extrapolating any kind of general principles from one example is actually rather hard.
What people have done traditionally in the origin of life field is to try to come up with
models that are based on the core components of that architecture of life
that we know today. Two of those core components that have been dominant models for origins of life are what are called genetics first and metabolism first--Figure \ref{fig:GeneticsVsMetabolism}.
\begin{figure}[H]
\caption[Genetics-First versus Metabolism-First]{Genetics-First versus Metabolism-First: Two competing hypotheses}\label{fig:GeneticsVsMetabolism}
\includegraphics[width=0.9\textwidth]{GeneticsVsMetabolism}
\end{figure}
We know that cells metabolize: they need to acquire food from their environment; you and I need to acquire food in order to survive to reproduce, so metabolism is obviously a
critical component.
In the genetics-first view, we also know that life requires genetic information in order to be able to reproduce and to evolve over many generations. So if you distill these two core components of biology down to their essential features, on the genetics side we have the idea, proposed by various people, that the first living entities might have just been molecules like RNA that could copy themselves.
Figure \ref{fig:GeneticsVsMetabolism} shows an
abstract model of this process,
in which you might just talk about the
binary digits in a sequence (for an actual
RNA molecule it would be the sequence of
ribonucleotides), and you can then
talk about reproducing that information
via copying.
The RNA world
has been a popular idea because
RNA also has a catalytic function
associated with it, whereas in modern
organisms we have DNA and protein.
DNA controls genetic heredity and
proteins are primarily the catalysts
that actually execute reactions in the
cell, but RNA can do both of those functions. So
genetics first has emerged as the idea,
which you can model quite simply with
these kinds of models of
copying and heredity,
that evolution through this kind of
process was the core thing that emerged
first as the first living entity.
The metabolism first perspective is an
alternative view that the first kinds of
living entities were not individual
molecules that could replicate but sets
of molecules that were reacting together
and could collectively reproduce due to
the organizational patterns of their
reactions. That idea is called
\gls{gls:autocatalytic:set} theory.
Figure \ref{fig:GeneticsVsMetabolism} shows
an example of autocatalytic
sets, using that same kind of
representation of molecules
as binary strings--just strings of zeros
and ones--which is a way of modeling
these kinds of processes in artificial
chemistries. In this metabolism-first view the first kinds of living
systems would have been these organized
patterns of chemical reactions.
Both of these perspectives allow one to
model certain attributes of living
systems. But it's really nice if you
actually put them side-by-side and look
at something like the binary polymer
representation of them, because you start
to see that both of them are different
ways of propagating information in
chemical systems; a theme starts to
emerge about what kinds of theories
might unify different approaches to
origins of life. This gets back to
the idea that what we need to start
doing to move forward in origins of life
(whereas traditionally we've had these
models like genetics first, metabolism
first, and there's other models like
compartment first, where
we're talking about
mineral surfaces and all kinds of
things) is to start
thinking about what are the theories for
life, and how do we actually develop more
predictive models that are more generic
to different chemistries and allow us
to actually go to the lab and predict
under what circumstances we should start
getting things that look more lifelike.
And a nice example of the need for
theories and thinking about origins of
life was given by Carol Cleland and
Chris Chyba \cite{cleland2002defining} who talked about
trying to define water and how difficult
it was to actually define water before
we had a molecular theory for water.
You might describe water as a clear
liquid, you might describe it based on
the fact that it's liquid at a certain
range of temperatures, or that it
doesn't have a strong odour; there are a lot
of different ways that you could
describe water that might lead to a
definition of water, but none of them is
really exclusive to water, because
there are other clear liquids that you
might describe, and other things that
are also liquid at room temperature.
The way that we really precisely
define water is actually to have
an atomic theory that describes
molecules and their interactions; and we
can precisely define water now as $H_2O$.
Their thought was that what we do
now is phenomenologically define
life, we have a lot of heuristics or a
lot of ideas about what we think life
might be but ultimately what we need is
a theory and that our definition should
derive from the theory not the other way
around.
And one way I like to think about
that is actually to think about the
emergent\footnote{See glossary entry \gls{gls:emergence}} properties of life.
For example, one of the defining properties
of water is that it's wet,
but wetness of water is an emergent
property: it requires many many many
millions of water molecules potentially
to be wet (although there's
actually an active debate
about how many water molecules
--if it's a few hundred, a few
thousand--and people have been working to
develop models to quantify when water
gets wet).
Likewise, if we're thinking about
emergent properties of life evolution is
often considered to be a defining
property of life, but evolution exists at
the level of population. It requires
many interacting individuals
in order to be an evolutionary
system, so in some sense evolution,
which we use as a defining property of
life, is also an emergent property of
life; one of the things that we
really need to challenge ourselves with
is to try to find the underlying theory
that explains that emergent property in
the same way that we have an atomic
theory for water that explains some of
its emergent properties.
One of the ways that we might think about that
is life as an
information processing system--Figure \ref{fig:nurse-information}.
\begin{figure}[H]
\begin{center}
\caption[Life as an information processing system]{Life as an information processing system\cite{nurse2008life}}\label{fig:nurse-information}
\includegraphics[width=0.6\textwidth]{nurse-information}
\end{center}
\end{figure}
This is a newer proposal about trying to unify different properties of life; a lot of people in the field have been very enthusiastic about it and are working on it from different perspectives. If we go back to thinking about that genetics-versus-metabolism picture with the binary polymer model, we see that both were representing an informational system that was capable of reproducing itself; they were just very different architectures for that kind of thing, and one might think of genetics first as a
digital type of information processing and metabolism first as an analogue type of information processing. There is this idea, in the biological community and also emerging in astrobiology, about information possibly being a unifying principle for how we
should think about life across all scales. Maybe organisms are really organized by flows of information.
One way to think about the origin of life--potentially as kind of a new perspective--is to think about it as a transition in how information is stored, propagated, and used; this might be sufficiently general to be able to predict properties of alien chemistries
that can also process information in a way similar to Earth's biology, but that might allow chemistries different from the biochemical architecture that we have on Earth as characteristic of \gls{gls:LUCA}--Figure \ref{fig:LifeInformation}.
\begin{quotation}
Focusing on information… may perhaps provide our best shot at uncovering universal laws of life that work not just for biological systems with known chemistry but also for putative artificial and alien life--\cite{cronin2016beyond}.
\end{quotation}
\begin{figure}[H]
\caption[Life as Information]{Life as Information\cite{cronin2016beyond}}\label{fig:LifeInformation}
\includegraphics[width=0.9\textwidth]{LifeInformation}
\end{figure}
If we could understand how information works in biochemical networks on Earth, we could potentially understand what other possible chemistries could enable those same kinds of emergent properties on other worlds, and maybe predict alien chemistries.
\section[The Multiple Origins of Life]{The Multiple Origins of Life--David Krakauer}
I'm going to talk about the multiple origins of life in complex time. Most people would say life evolved perhaps once, between 3.5 and 4 billion years ago, and the evidence of that evolution is things like fossilized microorganisms; those organisms are presumed to possess certain key properties: autocatalysis, replication, metabolism and the ability to adapt. And this is a chemical, material theory for the origin of life, and I'm going to say there's another one that's quite distinct that's based on information theory. And here's the argument.
\subsection{The Argument}
Origin of Life is dominated by what we would call naturalist reductionistic perspectives. That is, the ultimate understanding of why a system works is an analysis of the chemistry of the most basic building blocks that make up the phenomenon of interest. There's another school called functionalists, who say that those building blocks are necessary but not sufficient. What we really care about are their properties and those properties can be realized in multiple different ways.
I'm going to make this argument very clear by showing you an analogous argument in the field of artificial intelligence. The idea here is that life emerges from an adaptive arrow of time--Section \ref{section:adaptive}--which is the reverse of the thermodynamic arrow of time that leads to increasing disorder--Section \ref{section:reversing}. And the interesting thing about the adaptive arrow of time is that it's multi-scale, rather like the physical arrow of time, meaning it can be observed at molecular levels all the way through to the scale of whole societies, and in that respect it's not critically dependent on biology or chemistry. And the key to understanding the adaptive arrow of time is understanding the adaptive agent. The agent is the key concept, sometimes called the individual, in complex adaptive systems. And an agent can be a virus; the agent can even be a language. So here is the analogous debate in artificial intelligence.
John Searle is a proponent of biological naturalism. His proposition is that the only system that can be intelligent is a biological one. And he said: \begin{quotation}
My car and my adding machine understand nothing: they are not in that line of business.
\end{quotation} That is you could never build a mechanical device with understanding because they're not made of neurons. In contrast Alan Turing said \begin{quotation}
We are not interested in the fact that the brain has the consistency of cold porridge
\end{quotation}, i.e. the materials do not matter as much as we have assumed because we can instantiate function in a range of very different materials. And this argument for intelligence generalizes two arguments about life:
\begin{itemize}
\item we don't necessarily need the chemistry we have in existing living systems;
\item there might be very different ways of achieving life like properties.
\end{itemize}
\subsection{Reversing the Arrow of Time}\label{section:reversing}
The key to understanding a functionalist perspective on the origin of life
is understanding the adaptive arrow of time.
If I were to show you this slide--Figure \ref{fig:apples}--and ask you: ``which
way does time flow, from left to right
or right to left? From the fresh to rotten or
rotten to fresh?''--everyone would know the
answer to that. That's because you have
a basic intuition about the second law of thermodynamics.
Systems tend to become disordered in time.
\begin{figure}[H]
\begin{center}
\caption{Which way does time flow, from left to right or right to left?}\label{fig:apples}
\includegraphics[width=0.8\textwidth]{apples}
\end{center}
\end{figure}
Arthur Eddington famously described this phenomenon
as the arrow of time, by asking us to consider
an arrow and if we follow that arrow and
find more and more of the random element
in the state of the world, then that
points towards the future. But if the
random element decreases, the arrow points
towards the past. And "time's arrow"
expresses the one-way property of time.
\begin{quotation}
Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more
of the random element in the state of the world, then the arrow is pointing towards
the future; if the random element decreases the arrow points towards the past … I
shall use the phrase “time's arrow” to express this one-way property of time which
has no analogue in space--Sir Arthur Eddington.
\end{quotation}
Now, most of you are thinking, ``wait a second,
there is a forward arrow that leads
to greater order'', and that's what Darwin
worked on. And so, if I showed you this picture--Figure \ref{fig:butterflies}--
and said, ``which is the initial state and
which is the final state?'', those of you
familiar with the phenomenon of industrial melanism
in Northern England would say, ``well, the final
state here is the adaptive state''. It's the dark
moth against the dark background, which is less susceptible to predators.
\begin{figure}[H]
\begin{center}
\caption{Butterflies and Time's Arrow}\label{fig:butterflies}
\includegraphics[width=0.8\textwidth]{butterflies}
\end{center}
\end{figure}
Ronald Fisher expressed this alternative to Eddington's arrow of time as the Darwinian arrow
of time, when he said:
\begin{quotation}
It was Darwin's chief contribution, not only to biology but to the whole of natural science, to have brought to light a process by which the randomness that you start with decreases in the process of time, such that, for very improbable things, it is their non-occurrence that becomes highly improbable\cite{huxley1954evolution}.
\end{quotation}
So, the Darwinian arrow of time, rather than increasing probable things--disorder--increases improbable things.
And the standard framework for understanding this increase in improbable things is evolution by natural selection on fitness landscapes--Figure \ref{fig:NaturalSelection}. There are many probable states of the world, and they are all low-fitness; but there is one high-fitness state of the world which, in terms of the space of possibilities, is highly improbable.
And that's what natural selection does: it selects, out of these many possible states, a very improbable one. So the physical arrow of time says you are going to roll downhill and end up in one of the more probable states, and the adaptive arrow of time says you are going to move uphill and end up in a very improbable, ordered state.
\begin{figure}[H]
\caption[Thermodynamic versus Adaptive Arrow of Time]{The thermodynamic arrow of time rolls downhill towards more probable states; the adaptive arrow climbs uphill towards lower-probability, ordered states.}\label{fig:NaturalSelection}
\includegraphics[width=0.9\textwidth]{NaturalSelection}
\end{figure}
\subsection{The Theory of the Adaptive Arrow of Time}\label{section:adaptive}
The adaptive arrow of time tells you that you go
from probable states to an improbable state,
from disordered states to an ordered state.
It is the reverse of the thermodynamic arrow of time.
There have been several efforts to mathematize that fact,
most famously by Ronald A. Fisher
in the so-called 'Fundamental Theorem of Natural Selection'.
Fisher wanted a theory as general as the second law of thermodynamics.
In essence, the theorem says that you move through a space of possible solutions
in such a way that you minimize the variability in the population.
That is like minimizing the uncertainty.
And at a certain point you reach the maximum
and then there is only one solution that you observe
and that is the one we typically would call 'best adapted'.
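One common way to write Fisher's result, given here as a hedged sketch with notation chosen for illustration (the lecture only states it verbally), is
\begin{align*}
\frac{d\bar{w}}{dt} \propto& \operatorname{Var}(w),
\end{align*}
i.e.\ the rate of increase of mean fitness $\bar{w}$ is proportional to the variance in fitness in the population; selection uses up that variance, and at the maximum there is essentially none left, which is the sense in which variability is minimized.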
There is a problem with the theorem,
and that is: it is not completely general.
Think about a situation like this--Figure \ref{fig:rockpaperscissors},
the rock, paper, scissors game
and ask: what is the maximally adapted solution?
\begin{figure}[H]
\caption{Rock, paper, scissors}\label{fig:rockpaperscissors}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{rockpaperscissors}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{rockpaperscissors-payoff}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{rockpaperscissors-evolution}
\end{subfigure}
\end{figure}
Well, if Fisher's fundamental theorem were right,
you would say that only one of those strategies is where you end up,
because you have minimized the variability.
So I always play 'rock'.
But, of course, someone can come along and play paper and beat me.
So, in this particular instance of frequency-dependent selection,
Fisher's fundamental theorem cannot be right,
because you are not minimizing the variance,
you are actually maximizing it.
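To make the frequency-dependent point concrete, here is a minimal sketch (not from the lecture) of replicator dynamics for rock, paper, scissors; the payoff matrix, initial frequencies and step size are assumptions chosen only for illustration. Instead of converging on a single strategy and eliminating variance, the strategy frequencies keep cycling around the mixed state.
\begin{verbatim}
import numpy as np

# Replicator dynamics for rock, paper, scissors (illustrative sketch).
# Rows/columns: R, P, S; entry A[i, j] is the payoff of playing i against j.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

x = np.array([0.5, 0.3, 0.2])      # initial strategy frequencies
dt = 0.01
for _ in range(50000):
    fitness = A @ x                # fitness of each pure strategy
    mean_fitness = x @ fitness     # population average fitness
    x = x + dt * x * (fitness - mean_fitness)   # replicator update
    x = np.clip(x, 1e-12, None)
    x = x / x.sum()

# Frequencies keep orbiting around (1/3, 1/3, 1/3); variance is not driven to zero.
print(x)
\end{verbatim}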
Over the last decade or so a number of us have been working on generalizing the adaptive arrow of time, mathematically, to ask: what is actually being minimized?
All adaptive processes are minimizing
the uncertainty of an agent about the state of the world. Or, put differently, each agent maximizes the amount of information it possesses about the world in which it lives.
And when you express the adaptive arrow of time in these terms,
you realize that a whole range of apparently distinct phenomena
- evolution, Bayesian inference, reinforcement learning -
are all examples of the same fundamental dynamic--Figure \ref{fig:evolution-inference-learning}
\begin{figure}[H]
\caption{Evolution, Bayesian Inference, Learning}\label{fig:evolution-inference-learning}
\includegraphics[width=0.8\textwidth]{evolution-inference-learning}
\end{figure}
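One hedged way to see why these look like a single dynamic (the notation below is chosen for illustration, not taken from the lecture) is that the discrete-time replicator update and Bayes' rule share the same multiplicative-renormalisation form:
\begin{align*}
x_i(t+1) =& x_i(t)\,\frac{f_i}{\bar{f}(t)}\text{, where } \bar{f}(t)=\sum_j x_j(t) f_j,\\
P(h \mid d) =& P(h)\,\frac{P(d \mid h)}{P(d)}\text{, where } P(d)=\sum_{h'} P(h') P(d \mid h'),
\end{align*}
with strategy frequencies playing the role of prior probabilities and fitness playing the role of likelihood.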
In other words: a functionalist perspective allows you to see
that the particular mechanical implementation does not matter so much.
All of them achieve the desired goal of maximizing the information in the agent about the world.
\begin{quotation}
Adaptation is an optimization dynamics transferring information from the environment into the agent--reducing uncertainty about states of the world--\cite{rockmore2018cultural}
\end{quotation}
\subsection{Evolutionary Agents}
So why does an understanding of what an evolutionary agent is help us understand the origin of life, and in particular the multiple origins of life?
It's worth starting with a case study here.
This is Sol Spiegelman; he was a virologist and he was interested in the evolution of very simple viruses.
The virus he worked with was called Q Beta Phage, a very small RNA virus.
Spiegelman asked the following question: what is the minimum genome the Q Beta has to possess in order to successfully complete its life cycle? But he tricked the virus.
He put a population of viruses in a test tube, along with enzymes that normally it would have to encode itself.
He ensured that those enzymes were always present.
And what he observed, over multiple replication rounds of the virus in its new environment that he created, was that the virus became smaller and smaller and smaller,
until it eliminated all the traces of the enzyme that now existed with certainty in the environment in which it was evolving--Figure \ref{fig:SpiegelmanMonster}.
\begin{figure}[H]
\caption{The Spiegelman Monster}\label{fig:SpiegelmanMonster}
\includegraphics[width=0.9\textwidth]{SpiegelmanMonster}
\end{figure}
We can interpret this experiment in the following way: Figure \ref{fig:SpiegelmanMonsterVenn} depicts a Venn diagram with one set being the viral genome, V,
and another set being the environment, H, the host.
\begin{figure}[H]
\caption{Elimination of shared Information}\label{fig:SpiegelmanMonsterVenn}
\includegraphics[width=0.9\textwidth]{SpiegelmanMonsterVenn}
\end{figure}
The intersection of these two sets is a gene shared by the virus and the host
or the virus and its environment.
Spiegelman discovered that if you make the environment certain, the virus minimizes itself.
It throws away all the genes it doesn't need, because what they provide is already there in the environment. We call this minimality.
But he could have performed an alternative experiment, and made the environment very uncertain.
And when environments are very uncertain, that is, you can't throw things away because you know they'll be there, then you have to encode them intrinsically.
So a good example for us are vitamins.
With respect to vitamins, we're minimal, because we know they're always there and we don't have to synthesize them; but for many other gene products we can't be certain we can acquire them
from the environment, so we have to encode the genes and transmit them ourselves. And we call that autonomy. So these are two different configurations for an adaptive agent.
Now, many people call a virus "non-living"
because it depends on its host
to replicate, but of course,
we depend on the environment to replicate,
too, because we need vitamins.
So really there is a spectrum of
adaptive agency--Figure \ref{fig:SpectrumOfLife}.
\begin{figure}[H]
\caption{The Spectrum of Adaptive Agency}\label{fig:SpectrumOfLife}
\includegraphics[width=0.9\textwidth]{SpectrumOfLife}
\end{figure}
On the one hand,
there are organisms that live in very
certain environments. And they become
simple. There are other organisms
that live in very uncertain environments,
and they become complex. They encode
more and more in their genes. And life
spans this informational spectrum, from
organisms that encode very little about
the world because they don't need to,
to organisms that encode a great deal
about the world, because they need to,
to complete their life cycle.
And when you think about it in these
information-theoretic terms,
where what an organism, or life,
or an agent really is,
is a mechanism for acquiring adaptive
information about the world that it
propagates forward in time, then
computer viruses, the blockchain,
the Constitution, and many many other
cultural forms are essentially living.
There's nothing special about the biochemical example, a replicated cell, because a replicating cell is simply a somewhat autonomous informational entity that is able to propagate itself forward in time, just like a Constitution can.
But like a virus, the Constitution requires us. We are the vitamins of the Constitution.
And this leads to a very open question that's worth debating.
\begin{itemize}
\item On one hand, we could be \emph{fundamentalist}. We could say look, all of life depends on chemistry, and so finding the simplest chemistry that is capable of encoding adaptive information about the world is where life really started.
\item But another possibility is to say, well not really, because at any scale at which you can find this basic set of mechanisms, you're entitled to call it life. And you're even entitled to call it an independent origin of life. So by analogy, someone might say, ``To understand architecture, to understand Gothic and Renaissance, or Baroque, or Rococo architecture, you need to understand quantum mechanics''. And I think that would be foolish, because all of them ultimately depend on quantum mechanics, but it's not the differences in the physics that explain the differences in the architecture. That requires a higher level of understanding. And so the \emph{pluralist} approach to the multiple origins of life says that at every level, we need to find those unique mechanisms that can support propagation of information. And there isn't a "correct," most basic level. It depends on the question that you're asking and the variation that you're trying to explain.
\end{itemize}
Additional reading\footnote{Personal recommendations by Simon}:
\begin{itemize}
\item The arrow of time \cite{rovelli2017time,rovelli2019order,susskind2013time}
\item Information and Life \cite{friston2013life}
\end{itemize}
\section[Evolutionary Computation]{Evolutionary Computation--Stephanie Forrest}
Today we're going to talk about how these ideas about evolution look as computations and how we can use computations to understand and leverage
evolutionary ideas.
Let's first review the three basic elements of Darwinian evolution, which you already learned about.
\begin{itemize}
\item Random variation of individuals.
\item Selection of some of those individuals, based on differential fitness
\item Inheritance of those variations into individuals of the next generation.
\end{itemize}
We also need to think a little bit about
how those variations are represented
and that really gets us into the field of genetics
and we don't need to know very
much about genetics:
\begin{itemize}
\item we just need to know that the information, these variations, is represented as discrete units, which today are called genes.
\item And we need to know that the representation of the information in genes is separate from how it's expressed in the phenotype. So, we refer to this as the genotype versus phenotype mapping.
\item The final thing we need to know is that these genes are organized in a linear array, which today we call a chromosome.
\end{itemize}
And this understanding really started with Mendel in 1865, and the culmination of it was Watson and Crick's discovery of the structure of DNA.
So, we're going to take those very simple
elements and simple understandings and
translate them into a computer algorithm.
And the way we're going to do that is,
instead of having chromosomes with genes
on them, we're going to have strings with bits--Figure \ref{fig:GeneticAlgorithm}.
\begin{figure}[H]
\caption[Genetic Algorithm, exhibiting Selection, Crossover, and Mutation]{Genetic Algorithm, exhibiting Selection, Crossover, and Mutation\cite{holland1992adaptation}}\label{fig:GeneticAlgorithm}
\includegraphics[width=0.9\textwidth]{GeneticAlgorithm}
\end{figure}
Bits are numbers that are either
zero or one, they can only have two values,
and we're going to have those strings of bits
be our individuals in the population.
So, imagine that we start out, and this is
in the left-most panel of the figure.
Imagine that we start out with a population
of randomly generated individuals, or strings,
and here we only have three shown and they
each only have 5 bits, but in real genetic
algorithms we would have more bits and
larger populations.
We also need a way to evaluate fitness
and we do that by using a fitness function.
In our example fitness function, we will assume
that the values range from 0 to 1 and the
higher value (1) is better.
So, the first step is to generate an initial population,
then we need to evaluate each individual
in the population using our fitness function
and use those fitness values to select
the next generation.
And so, you see that in the center panel
where we have two copies of the highest
fitness individual, no copies of the lowest
fitness individual, and a single copy
of the average individual.
That's not very interesting because
those individuals look exactly like their parents.
And so, we take advantage of what are known as
genetic operators to introduce new variations.
And we do that using mutation, shown
in the top individual, where the first bit (a one)
is flipped to become a zero. We do this
not always in the first position; we do it
randomly at different places in the string,
and randomly throughout the strings
of the population.
Then we also use a process called
crossover, or recombination, where we take
two individuals and exchange pieces of their
DNA or pieces of their bit strings, and
you see that in the second two individuals.
So, now in the third panel, we have the
true, new generation, Generation T+1, and
we then need to repeat the cycle
and evaluate those using the fitness function,
do new selection, introduce new mutations
and crossovers, and that is how
the evolutionary process runs.
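As a minimal sketch of the cycle just described (fitness-proportional selection, single-point crossover, and bit-flip mutation on bit strings), here is an illustrative genetic algorithm; the population size, mutation rate and the toy fitness function (the fraction of ones in a string) are assumptions chosen to make the example self-contained, not values from the lecture.
\begin{verbatim}
import random

POP_SIZE, N_BITS, GENERATIONS = 20, 16, 100
MUTATION_RATE = 0.01

def fitness(individual):
    # Toy fitness in [0, 1]: the fraction of bits set to one.
    return sum(individual) / len(individual)

def select(population):
    # Fitness-proportional (roulette-wheel) selection of two parents.
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=2)

def crossover(a, b):
    # Single-point crossover: exchange tails of the two parent strings.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(individual):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in range(N_BITS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(*select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(ind) for ind in population))   # best fitness after evolving
\end{verbatim}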
This basic strategy, and basic idea of using
bit strings to represent chromosomes was
introduced by John Holland in the early 1960s.
John was very interested in the
population-level effects and in the
impact of crossover.
Coming back to the algorithm,
what does it look like when we actually
run this algorithm for multiple generations?
So, I just showed you one generation in the previous slide,
but we typically iterate these for hundreds
or sometimes even thousands of generations.
And the way we look at it is by plotting time
in terms of numbers of generations on the x-axis
and the fitness, typically the average fitness
of the population or the best fitness of
the population. Those are the two values
shown on the y-axis-- Figure \ref{fig:GAfitness}.
\begin{figure}[H]
\caption{Example Fitness Curve, with two plateaux}\label{fig:GAfitness}
\includegraphics[width=0.9\textwidth]{GAfitness}
\end{figure}
And so, this is a very typical kind of
performance curve that we see with genetic
algorithms, where the fitness of the population
starts out very low initially, very quickly
climbs up and improves as the really lousy
individuals get deleted and the somewhat
better individuals get copied, so the whole
average fitness moves up.
Then there's a little bit of searching around
that we see, and we get another innovation.
But eventually, the population gets stuck
on what is known as a plateau.
This is known as punctuated equilibrium.
And when that happens, the algorithm
is effectively having to explore the space
more widely to find a high-fitness
innovation, and that can take a varying
amount of time. We actually see two
plateaus in the figure: on the first plateau,
eventually a new innovation is found
and the population jumps up in fitness,
and this is very typical of these
genetic algorithms.
Let's now talk about some applications.
Genetic algorithms have been used widely
in engineering applications and they've
also been used for scientific modeling.
First, we'll talk about the engineering
applications, and of these by far the
most common is what's known as
multi-parameter function optimization--Figure \ref{fig:multi-parameter-optimization}.
\begin{figure}[H]
\caption[Multi Parameter Optimization]{Multi Parameter Optimization\cite{marshall2014evolution}}\label{fig:multi-parameter-optimization}
\begin{subfigure}[h]{0.5\textwidth}
\caption{2D fitness surface\cite{marshall2014evolution}}\label{fig:multi-parameter-optimization-2d}
\includegraphics[width=\textwidth]{multi-parameter-optimization}
\end{subfigure}
\begin{subfigure}[h]{0.45\textwidth}
\caption{Example function}\label{fig:multi-parameter-optimization-example}
\includegraphics[width=\textwidth]{multi-parameter-optimization-example}
\end{subfigure}
\end{figure}
Figure \ref{fig:multi-parameter-optimization-2d} depicts
a two-dimensional function,
that is a function in two variables.
And if a function is complex, a lot of times
we don't have analytical methods to find mathematically
what the maximum value of the function is,
and when that happens we have to resort to
sampling, and we can think of the genetic algorithm
as a kind of biased sampling algorithm.
And the goal, of course, would be to find
in this multi-peaked surface points that
are on the highest peak up at the top.
Let's just go into a little bit more detail
about how that works.
Figure \ref{fig:multi-parameter-optimization-example} depicts an example of such a function and it's not trivial to analyze this mathematically,
but suppose we want to find the x and y values
for this function that produce the
maximum F(x,y).
So to do that, we take our bit strings and
conceptually think of the first half of the
string as being a representation of the
value of x, and the second half of the string
as being a representation of the value of y.
Now, to evaluate fitness we then need to
take these ones and zeros and interpret
them as Base 2 numbers, translating them
into their decimal equivalents, which is
shown on the figure, and then take those
decimal values and plug them into our
fitness function, and in this case we get
the value 4 out.
So, we would have to do that for every
individual in the population.
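A hedged sketch of that decoding step: split the bit string in half, read each half as a base-2 integer, and plug the two values into the fitness function. The function f below is a placeholder of this write-up's own choosing; the lecture's actual example function appears only in the figure.
\begin{verbatim}
def decode(bits):
    # First half of the string encodes x, second half encodes y, both base 2.
    half = len(bits) // 2
    x = int("".join(str(b) for b in bits[:half]), 2)
    y = int("".join(str(b) for b in bits[half:]), 2)
    return x, y

def f(x, y):
    return x + y        # placeholder fitness function (assumption)

individual = [0, 1, 1, 0, 0, 0, 1, 1]
x, y = decode(individual)
print(x, y, f(x, y))    # x = 6, y = 3, fitness = 9
\end{verbatim}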
I just want to say a word about where
these algorithms came from. I've mentioned
John Holland. There were several other
groups that were interested in similar
kinds of algorithms and the first three main
groups I've listed were all independent
discoveries of similar and overlapping ideas.
And then in the early 90s, John Koza\cite{koza1992genetic} came along
and really blew open the field by showing
how we could use genetic algorithms to
evolve computer programs.
These separate streams of invention are now
lumped together, and the field is called
Evolutionary Computation.
And needless to say, there's been
a lot of recombination between these
different origins.
References
\begin{itemize}
\item John Holland \cite{holland1992adaptation}
\item Ingo Rechenberg \cite{rechenberg1965cybernetic}
\item David Fogel et al\cite{fogel1998artificial}
\item John Koza--evolving computer programs\cite{koza1992genetic}
\end{itemize}
See also \cite{mitchell1998introduction,eiben2003introduction,forrest1993genetic,ma2014novo}.
\section[Scaling]{Scaling--Pablo Marquet}
Why is scaling important and why do you need to know about this?
\begin{itemize}
\item Scaling provides a way to deal with the diversity of scales and also with the different types of organisms that exist and become part of ecological systems.
\item Scaling also makes apparent the fundamental similarity that underlies the diversity in nature and how this has been moulded by the action of natural selection. It's very important to realize that scaling points out that there are not many ways of actually having a real, functioning organism in nature, and it provides you with the right way of understanding the constraints that are operating on diversity. And, since most organisms obey these constraints, it points to the fundamental similarity that they share.
\item Scaling provides us with a benchmark against which we can compare different species, populations and ecosystems, and you can actually measure deviations from scaling relationships; those deviations mean that there are some processes--some important biological processes--operating on these systems.
\end{itemize}
So, let's start with the simplest way of characterizing scaling in ecology.
The characterization is a mathematical one - it's very simple.
Usually, scaling relationships, as you have already heard, can be summarized with
a relationship in which a variable $y$, a trait of an organism,
is proportional to a variable $M$ raised to a power.
In this case, I'm talking about $M$ as the mass of the organism or the size of the biological system. You can also represent this relationship as $y=c M^\alpha$, which actually allows you to take the logarithm of this relationship and transform something which is not linear into a linear equation--(\ref{eq:PowerLawScaling}) and Figure \ref{fig:PowerLawScaling}.
\begin{align*}
y \propto& M^{\alpha}\text{, where $M$ represents mass, size, etc.}\numberthis\label{eq:PowerLawScaling}\\
=& c M^{\alpha}\\
\implies&\\
\log(y)=&\log(c) + \alpha \log(M)
\end{align*}
\begin{figure}[H]
\caption[Many ecological attributes scale with size]{Many ecological attributes scale with size\cite{white2012methodological}}\label{fig:PowerLawScaling}
\includegraphics[width=0.9\textwidth]{PowerLawScaling}
\end{figure}
On the left you see scaling relationships in their non-linearized form. And then, to the right, you see the linear relationship that results after you take the logarithms
of both sides.
That makes it easier to analyze.
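As a hedged illustration of why the log transform is useful (the data below are synthetic, generated with an assumed exponent of 0.75, and are not from the lecture), the exponent $\alpha$ can be recovered by a straight-line fit in log-log space:
\begin{verbatim}
import numpy as np

# Synthetic data generated from y = c * M^alpha with multiplicative noise.
rng = np.random.default_rng(0)
M = np.logspace(0, 6, 60)                     # "masses" spanning six decades
y = 3.0 * M**0.75 * rng.lognormal(0.0, 0.1, M.size)

# log(y) = log(c) + alpha * log(M): a straight line, fit by least squares.
alpha, log_c = np.polyfit(np.log(M), np.log(y), 1)
print(alpha, np.exp(log_c))                   # roughly 0.75 and 3.0
\end{verbatim}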
So, examples of scaling abound in nature,
and they affect the way organisms
are put together and evolve
through the action of natural selection.
You can see, for example,
relationships between
the size of the organisms
and weaning time,
or also, the size of the organisms
and longevity.
And, you can use this relationship -
as I show you here in Figure \ref{fig:ScalingExamples} -
to compare different kinds of organisms.
\begin{figure}[H]
\caption[Scaling of life-history events]{Scaling of life-history events: mammals are in grey, marsupials in red.\cite{sibly2012metabolic}}\label{fig:ScalingExamples}
\includegraphics[width=0.9\textwidth]{ScalingExamples}
\end{figure}
For example, the gray dots represent
all mammalian species
and the red dots represent
one particular type of species,
which are the marsupials.
So, you can see that the marsupials,
in general, follow the trend line
for longevity, for example,
but they deviate in maturation time.
So, these scaling relationships
allow you to compare,
and you can use them as a benchmark
to try to understand what is going on
with a particular species
when it deviates.
Is this because of
their phylogenetic history?
Or, is it because of the environment
or the kind of habits they have -
what they eat and so on?
So, you can answer
meaningful questions
using the scaling relationships.
In ecology,
these relationships also affect,
for example, things very fundamental,
like the average prey mass
that a particular carnivore species
will eventually eat,
or affect, as you can see in Figure \ref{fig:Scaling:individuals:ecosystems},
the carbon turnover,
or the time it takes,
to replace one gram of carbon
in a particular ecosystem
and how that changes
with the average mass of the plants
in that ecosystem.
\begin{figure}[H]
\caption{Scaling in individuals and ecosystems}\label{fig:Scaling:individuals:ecosystems}
\begin{subfigure}[b]{0.45\textwidth}
\caption{Mean Carnivore Mass\cite{tucker2014evolutionary}}\label{fig:mcm}
\includegraphics[width=\textwidth]{Scaling1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\caption{Carbon Turnover\cite{anderson2013altered}}\label{fig:ct}
\includegraphics[width=\textwidth]{Scaling2}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\caption{Primary Production\cite{enquist2012land}}\label{fig:npp}
\includegraphics[width=\textwidth]{Scaling3}
\end{subfigure}
\end{figure}
Or also, you can see a scaling
relationship in terms of
the net primary productivity
of an ecosystem
and the total amount of
plant biomass in that ecosystem.
So, those are fundamental relationships
that tell you something very general
about the way organisms
and ecosystems work.
So how do we understand
the origin of these relationships?
Well, let me tell you, the most fundamental
relationship, I would say, is
the one that relates an organism
and its energy requirement to its size--Figure \ref{fig:Kleiber}.
\begin{figure}[H]
\caption[Kleiber's Law]{Kleiber's Law: $B \propto M^\frac{3}{4}$\cite{schmidt1984scaling}}\label{fig:Kleiber}
\includegraphics[width=0.9\textwidth]{Kleiber}
\end{figure}
This is called "Kleiber's Law,"
and it points out that
the requirements of energy
of an organism, its metabolism,
which is the sum of
all the biochemical reactions
happening within the organism,
scale with the size of the system -
or mass -
raised to a three-quarters power.
And, you can see in the graph that
this relationship describes very well
how metabolism changes
as you increase the mass of an organism
from a mouse, for example,
to an elephant.
And, it tells you that somehow
an elephant is just a variation
on the same theme as a mouse is.
So, it's this relationship that points out
that there is a fundamental similarity
among these different kinds of organisms.
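As a rough, back-of-the-envelope illustration of what the exponent implies (the masses are approximate and the worked numbers are not from the lecture): since $B \propto M^{3/4}$, the mass-specific metabolic rate scales as
\begin{align*}
\frac{B}{M} \propto& M^{-\frac{1}{4}},
\end{align*}
so for a 30 g mouse and a 3{,}000 kg elephant, whose masses differ by a factor of $10^{5}$, a gram of elephant tissue runs at roughly $(10^{5})^{-1/4} \approx 1/18$ of the rate of a gram of mouse tissue.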
So, natural selection
is not acting randomly -
it's following some constraints.
So, what are those constraints?
In 1997, Geoffrey West, Jim Brown and Brian Enquist proposed a very simple and elegant model that points to some fundamental principles which help us to understand why metabolism changes in the way it does with the size of a system.
And, I'm not going to explain to you
all the details of the model.
I'm just going to give you
two major insights into it.
\begin{itemize}
\item The properties of resource delivery networks determine
the properties of whole-organism metabolic rate.
\item Biological systems have evolved under natural
selection to optimize performance (delivery networks
minimize energy loss)
\end{itemize}
Well, the major thing is that
any organism that exists faces a problem
and the problem is that
you have to deliver energy -
the resources that you get
from the environment to live -
you have to deliver it to
all the different parts of your body.
In a multicellular organism,
this means that you have to deliver
this energy to all the cells in your body--Figure \ref{fig:capillaries}.
\begin{figure}[H]
\caption[The West, Brown, Enquist Model]{The West, Brown, Enquist Model; In a multicellular organism, you have to deliver energy to all the cells in your body.\cite{west1997general}}\label{fig:capillaries}
\includegraphics[width=0.8\textwidth]{capillaries}
\end{figure}
So, how do you do that?
Well, the way you do that is
you construct networks.
Or rather, natural selection has molded
the existence of networks
to deliver this energy to
all the parts of your body.
So, those networks
actually generate constraints
that manifest in this fundamental
three-quarter scaling for metabolism.
Now, the way natural selection
acts on this is
by minimizing energy loss.
So, it generates networks
that minimize this energy -
so, efficient networks.
When you take these
two major insights into account,
the authors show
in a mathematical model
that this creates the three-quarter
scaling relationship that we see.
So, let me give you some examples
of the implications of this relationship
in ecology.
One simple one is that you can ask
a very simple question like -
what is the maximum number
of individuals
that can be found in a given area?
So, to answer this question,
which I think is a very fundamental one,
you are just required to know
the amount of energy or resources
that are in a particular area,
and the requirements of those resources
by different kinds of individuals.
And we know that these requirements scale
with the size of the organisms
raised to three-quarters.
So, it's very simple then to compute
the maximum number of individuals
in a given area
just by dividing the amount of resources
over the requirements of each individual.
What that gives you is
another scaling relationship
that says that the maximum number
of individuals scales with mass
raised to minus three quarters.
\begin{align*}
R=& \text{Energy or resources per unit area}\\
B\propto& M^{\frac{3}{4}}\text{, Individual resource requirements}\\
N_{max}\propto& \frac{R}{B} \propto R M^{-\frac{3}{4}}\label{eq:max}\numberthis
\end{align*}
Now, let's look at the empirical evidence.
For this, I'm going to go back
to a paper published by
John Damuth in 1981,
where he made a compilation
of the density
of different mammal species
around the world.
And, these happened to be
essentially every major mammal;
and he computed the density
and size of each of these species,
and he plotted this relationship,
and, as you can see in Figure \ref{fig:MammalianHerbivores},
there's a negative slope and
this slope is minus three quarters.
\begin{figure}[H]
\caption[Mammalian Herbivores]{Mammalian Herbivores--$N_{max}\propto R M^{-\frac{3}{4}}$\cite{damuth1981population}}\label{fig:MammalianHerbivores}
\includegraphics[width=0.9\textwidth]{MammalianHerbivores}
\end{figure}
So, just as predicted,
the maximum number of individuals
follows a minus three-quarter
scaling law.
The implications of this
are very interesting,
because it means that mice
achieve larger densities
than elephants,
but somehow they are the same,
because, as I will show you,
they use the same amount of energy
at the population level.
And how can we calculate that?
Well, we call that
the "population energy use."
Population energy use - PEU -
is proportional to
the number of individuals times
the energy requirements per individual.
So, that will be the total amount
of energy required by a population.
And, if we replace the known
scaling relationships
for number and for energy requirements,
we know that population energy use
will be proportional to
mass raised to minus three quarters
times mass raised to three quarters.
As you can see from (\ref{eq:const}), these two exponents
essentially cancel each other
and give you a population energy use
that is proportional to
mass raised to zero,
meaning that population energy use
tends to be invariant
regarding the size of the organisms,
meaning that elephants use
the same amount of energy
that mice use--Figure \ref{fig:PopulationEnergyUse}.
\begin{align*}
E_{pop}\propto&N B\text{, Energy use for population}\\
\propto& M^{-\frac{3}{4}} M^{\frac{3}{4}}\\
\propto& 1\label{eq:const}\numberthis
\end{align*}
\begin{figure}[H]
\caption[Population Energy Use]{Population Energy Use\cite{enquist1998allometric} }\label{fig:PopulationEnergyUse}
\includegraphics[width=0.9\textwidth]{PopulationEnergyUse}
\end{figure}
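As a small sketch of this cancellation (the prefactors and masses below are arbitrary placeholders, not empirical values from the lecture), multiplying the density and the per-individual metabolic rate gives the same population energy use regardless of body mass:
\begin{verbatim}
def density(M, c_N=1.0):
    return c_N * M**(-0.75)        # N_max ~ M^(-3/4), individuals per unit area

def metabolic_rate(M, c_B=1.0):
    return c_B * M**0.75           # B ~ M^(3/4), energy use per individual

for mass in (0.02, 70.0, 5000.0):  # roughly mouse, human, elephant (kg)
    peu = density(mass) * metabolic_rate(mass)
    print(mass, peu)               # peu is 1.0 for every mass: invariance
\end{verbatim}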
So, this is a very fundamental
invariant relationship.
There are many more scaling relationships
that show this kind of property
of invariance,
but it shows you that
things can be different,
but at the same time
things can be equal
if you explore this relationship
between the scalings.
So, why is it fundamental?
Why is this very important?
Because it allows us to understand
something about ourselves too.
And here, I want to make the point that,
using this relationship,
you can understand that humans
are a hyper-dense species--Figure \ref{fig:Hyperdense}.
\begin{figure}[H]
\caption{Humans--the hyper-dense species}\label{fig:Hyperdense}
\includegraphics[width=0.9\textwidth]{Hyperdense}
\end{figure}
How do we know that?
Because, we know the scaling
relationship for all mammals.
We know how density changes as you
change the body size of the mammal.
We are mammals.
On average, you can consider that a human being
weighs around 70 kilos.
And, you can try to plot what
the density is that we should achieve
if we were to
follow this relationship.
Our expected density would be around
2.12 individuals per square kilometer.
The realized density, however, is
about $5.8 \times 10^{4}$ times larger -
that is, some 58,000 times the expected density.
So, this is a very significant deviation
from the expected scaling relationship.
And, this is a very meaningful one,
because
you can now ask how it came about
that humans
could shift to
such an extreme maximum density
as we see these days.
Well, let me tell you that
when we were in a different kind of
stage of development,
a social stage of development -
when we were hunter-gatherers -
we really fit into this relationship.
As we moved through time
and social complexity,
we were moving away
from this relationship
and now we are far away from it.
And, this has impacts in terms of
the amount of energy that we use
and the impact of that amount of energy
used by humans
on the rest of the ecosystem.
That is called, or is part of
what we call "global change."
We're not going to deal with this
in this class,
but it's important to keep it in mind.
That's why scaling relationships help you
to realize there are similarities,
there are fundamental principles
related to the way
organisms deliver energy
through networks
that explain the amount of energy
that different organisms require
and this constrains the density
and amount of energy
their populations use.
And you can use them
to understand deviations,
like the extreme density,
or hyper-density, of humans now.
See also \cite{white2012methodological,sibly2012metabolic,tucker2014evolutionary,anderson2013altered,enquist2012land,west1997general,damuth1981population,enquist1998allometric,marquet2005scaling}
\section[Energy]{Energy--Van Savage}
Today I'm going to talk to you
about energy in biology.
"In biology" I mean all of biology,
from evolution to ecology,
to physiology to cellular biology.
And, the reason I can talk to you about it
from such a wide swathe of biology
is because we use energy
for everything we do
and because we spend a lot
of our time and abilities
just trying to get energy
and make energy.
For those reasons,
there's infinitely many ways
in which I could talk about energy
and infinitely many ways
I could describe it to you,
but one of the most fun things to me
and one of the arts of science to me
is deciding how to draw the boxes
that we're gonna use.
And, to do that,
you often have to frame a question
or think about some objective
you're trying to meet.
And, the two I'm going to talk about
in this talk:
\begin{enumerate}
\item one is with relation to evolution--how do we think about energy in terms of evolution and what's needed for evolution?
\item And, the other is more in relation to physiology and ecology, which really has to do with how do we get energy and how do we make energy.
\end{enumerate}
So, from an evolutionary perspective,
the main thing we are concerned about
is fitness,
which is how many offspring
or individuals
we give to the next generation.
To do that, we first have to grow
and maintain ourselves to reproduce.
So, one of the main boxes for evolution
actually is development and growth.
So, Figure \ref{fig:development} shows a plant growing
from early stages to later,
and, in doing that,
it has to create many more cells,
it has to create new types of cells
and new types of structures -
so this takes a lot of energy
and is a very energy intensive process.
\begin{figure}[H]
\begin{center}
\caption{Plant growing
from early stages to later}\label{fig:development}
\includegraphics[width=0.6\textwidth]{development}
\end{center}
\end{figure}
And, that's also true
for animals and their growth.
Figure \ref{fig:animals} is a picture of these two birds
crying out to their parents for food -
to bring them food -
because they need lots more food
to grow and have energy
to get to be big enough to reproduce.
\begin{figure}[H]
\begin{center}
\caption{Two birds
crying out to their parents for food}\label{fig:animals}
\includegraphics[width=0.6\textwidth]{animals}
\end{center}
\end{figure}
The next box I'll talk to you about
for energy is maintenance,
and that's sort of a less obvious
visible one
because you're not seeing cells change
or structures change or reproducing -
you sort of see things
staying the same way they are visibly,
but actually that takes a lot of energy
just to replace cells that die
or to feed cells energy
just to keep living.
As extreme examples,
if you look at the redwood trees--Figure \ref{fig:maintenance}--
it takes an enormous amount of energy
just to keep water or sap
pumping up to the leaves at the top -
it's a huge distance they have to travel.
\begin{figure}[H]
\begin{center}
\caption{Redwood trees}\label{fig:maintenance}
\includegraphics[width=0.6\textwidth]{maintenance}
\end{center}
\end{figure}
And, you have to build structures
to maintain them to go up to these leaves.
So, you use a lot of energy
just to keep pushing water and sap
up to the tops of the trees.
And, similarly, we use energy
for all the structures in our body.
But, there are extreme examples here too:
if we're thinking about a peacock--Figure \ref{fig:peacock}--
which builds a whole array of feathers,
that takes a lot of energy to build
and to maintain.
\begin{figure}[H]
\begin{center}
\caption{It takes a lot of energy to build a peacock's feathers.}\label{fig:peacock}
\includegraphics[width=0.6\textwidth]{peacock}
\end{center}
\end{figure}
And, it's going to use that
to attract a mate,
which again
it needs for reproduction,
which is important for evolution.
As a brief aside,
before I get from maintenance
to reproduction,
one of the interesting things to me
about us as humans is that,
individually biologically,
we use about the same amount of power
or energy per time
that you would see in a light bulb,
but once you add in things -
how much energy we use for cars
or computers or light bulbs
or heating our houses -
we actually, each individual in the US,
uses about the same amount of energy
as a blue whale,
so we're really enormous energy users
in terms of our biological footprint.
The last evolutionary box I'll talk about
for energy is reproduction,
which is what evolution is sort of
ultimately aimed at most of the time.
And, that can be things
like oranges on a tree
that tempt us to eat them
because they're so pretty
and flavorful and taste good,
and then, we walk around
and distribute those seeds
to help our orange trees grow elsewhere
and increase their numbers.
Or also, an embryo
growing inside a mother,
which takes a huge amount of energy
and time to produce
and is necessary for reproduction.
So, growth, maintenance
and reproduction
are the main boxes that I think
you can think about for evolution
where energy is needed.
But now, I'm gonna shift gears
and talk a little bit more
about how it's needed
for physiology and ecology,
which has a lot to do with
how do you get energy
and how do you make energy,
which evolution still plays a big role in
because you need those things to survive,
but it's not as explicit as
if you do it this way.
So, in terms of obtaining energy,
Figure \ref{fig:owl-mouse} is a dramatic picture of an owl
chasing down a mouse to eat for food,
and that's one type of example
of getting resources or food
through what we call "active capture."
But, other ways include things
like grazing - like a cow in a pasture,
or "sit-and-wait,"
which would be like snakes or spiders
waiting for prey items to come to them.
\begin{figure}[H]
\caption{How to obtain energy}
\begin{subfigure}[t]{0.3\textwidth}
\caption{Owl chasing mouse}\label{fig:owl-mouse}
\includegraphics[width=\textwidth]{owl-mouse}
\end{subfigure}
\;\;\;
\begin{subfigure}[t]{0.3\textwidth}
\caption{A tree's branching canopy above ground}\label{fig:tree-above-ground}
\includegraphics[width=\textwidth]{tree-above-ground}
\end{subfigure}
\;\;\;
\begin{subfigure}[t]{0.3\textwidth}
\caption{A tree's root system below ground}\label{fig:tree-roots}
\includegraphics[width=\textwidth]{tree-roots}
\end{subfigure}
\end{figure}
And also, plants are a little bit
like that
in terms of getting energy.
They're more like sit-and-waits,
where they build structures
and wait for things to come to them.
So, as we see in Figure \ref{fig:tree-above-ground}, there's an extensive
branching system for this tree,
and when the leaves are all present
on the limbs,
it's using that to get light from the environment,
and get as much light in this little area
as it can - within that canopy.
And, that sort of branching system
is reflected below ground
in terms of root systems
that it uses to get water and nutrients
from the ground as well,
where it has to branch out
and get as much resources as it can--Figure \ref{fig:tree-roots}.
Once resources are obtained,
we have to process that to make energy,
and, the first step in that for animals
is the digestive system,
which, you know, involves
going through our stomachs
and things like that.
But, the part I want to highlight here--Figure \ref{fig:digestion}--
is that, in our guts,
there are these microbial systems,
often now called the "microbiome"
or the "gut microbiome,"
which we have to have
to process energy.
\begin{figure}[H]
\begin{center}
\caption[The gut microbiome]{In our guts, there are these microbial systems, often now called the "microbiome" or the "gut microbiome," which we have to have to process energy}\label{fig:digestion}
\includegraphics[width=0.6\textwidth]{digestion}
\end{center}
\end{figure}
It's its own little world -
an ecosystem - inside our bodies.
And, basically, based on how it processes
energy and the energy it needs,
it really affects which bacteria you see
and the diversity of bacteria you see,
and when that's off
it can really affect our digestive system.
After we process energy
and get it in a more usable form
from what we took into our bodies,
we still have to get it
to the rest of our bodies -
to our fingertip, our toe tip,
or our head to use,
and that's done by a branching system
inside our bodies--Figure \ref{fig:distribution1}.
It looks a lot like the branching system
in trees outside
or in their roots in the ground -
and that's the cardiovascular system,
where we use a heart to pump blood
out to our limbs and to our head.
And then, at the finer scale,
we have capillaries or capillary beds--Figure \ref{fig:distribution2},
which is where the transfer of oxygen
or other nutrients can take place.
\begin{figure}[H]
\caption{Distributing Energy}
\begin{subfigure}[t]{0.4\textwidth}
\caption{The cardiovascular system}\label{fig:distribution1}
\includegraphics[width=\textwidth]{distribution1}
\end{subfigure}
\;\;\;
\begin{subfigure}[t]{0.55\textwidth}
\caption{Capillaries}\label{fig:distribution2}
\includegraphics[width=\textwidth]{distribution2}
\end{subfigure}
\end{figure}
Once we distribute energy
and get it to each cell that needs it
to keep producing energy and living,
the main way we make energy -
at least in animals -
is through mitochondria,
and each mitochondria
is like a little engine
that takes oxygen and makes energy.
And, it's actually a really old bacteria
that we've brought in to -
not "we" humans -
but a long time ago
cells brought in to make energy for them.
So, it's a really ancient way
of making energy.
That leads to the question:
if it's really ancient,
is it really good at it -
is it very efficient?
And, I would think it would be
because, if it's used that broadly,
you would think it must be pretty good
or you would reinvent the wheel somehow.
But, what's interesting is
if you compare to something like solar panels,
and compare like the grass
and the trees in the background here
to the solar panels in the foreground,
the grass and the trees
use photosynthesis to make energy,
which is about three percent efficient,
but the solar panels can get up to
about 30 percent efficiency,
so about 10 times better,
which was kind of shocking to me
when I first learned about it -
that they can do so much better.
And, maybe this does suggest
that biology can still evolve
and do better.
But, the catch here really is that
solar panels use a lot of elements
that aren't easily accessible
to biological organisms -
they take money to either mine
or to construct them the right way.
When we think about
sort of being efficient or evolving,
it's always within constraints.
So, I'd argue that biology is optimised
really well within the constraints it has,
but we're able to get at things
biology has not been able to get to.
And, looking at this efficiency question
from a different perspective,
if you think back again to the networks,
either for trees or the cardiovascular
system within our bodies,
there's a million ways
you could build such a network.
We want those networks to span space
to be able to get blood or water
everywhere it needs to go,
and we want them to do so
in an efficient way
so we don't spend
a huge amount of energy
just pumping blood around
and losing energy pushing fluid around.
And, if you think about all the ways
in which networks could be built,
you can derive theory
and look at data
to see what's the most optimal.
And, it turns out that biology
has done a really good job
of optimizing networks to be efficient.
And, one consequence of that,
actually, as you look at metabolic rates
on the y-axis in Figure \ref{fig:MammalianBasalMetabolicRate}
versus mass on the x-axis,
you see a very clean systematic pattern
where, the bigger something is,
the more energy it uses -
which isn't surprising.
\begin{figure}[H]
\caption[Metabolic rate depends on optimized networks and body size]{Metabolic rate depends on optimized networks and body size\cite{savage2004predominance}}
\begin{subfigure}[p]{0.45\textwidth}
\caption{Mammalian Basal Metabolic Rate}\label{fig:MammalianBasalMetabolicRate}
\includegraphics[width=0.9\textwidth]{MammalianBasalMetabolicRate}
\end{subfigure}
\begin{subfigure}[p]{0.45\textwidth}
\caption{Whole Plant Xylem Flux}\label{fig:WholePlantXylumFlux}
\includegraphics[width=0.9\textwidth]{WholePlantXylumFlux}
\end{subfigure}
\end{figure}
But, the surprising piece here
is that it's nonlinear.
So, you think about an elephant
that's 10,000 times bigger than a mouse,
it only uses about
a thousand times more energy,
which means - per cell -
a cell from an elephant
uses about ten times less energy
than a cell from a mouse.
So, you're gaining efficiency
by getting bigger
in this way of looking at it.
And, just to make sure for people
paying close attention to the axes here,
they're logarithmic axes -
so a curved line becomes a straight line,
and, what would be the exponent
of a mathematical equation
becomes the slope.
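Just as a back-of-the-envelope check of that arithmetic (a sketch writing $B$ for metabolic rate and $M$ for body mass, and assuming the roughly three-quarter-power scaling described here rather than numbers read off the figure):
\[
B \propto M^{3/4}, \qquad
\frac{B_{\mathrm{elephant}}}{B_{\mathrm{mouse}}} \approx \left(10^{4}\right)^{3/4} = 10^{3}, \qquad
\frac{B/M\;(\mathrm{elephant})}{B/M\;(\mathrm{mouse})} \approx \frac{10^{3}}{10^{4}} = 10^{-1},
\]
so an animal ten thousand times heavier uses about a thousand times more energy in total, and about ten times less energy per unit mass.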
And, this pattern is true,
not just across this huge range of sizes
in mammals or animals in general,
but also for plants -
xylem flux is a similar sort of measure
of metabolic rate in plants.
We plot that versus body size again--Figure \ref{fig:WholePlantXylumFlux}--
and again, you see
a very clear straight line
across a huge range in size for plants,
and, again,
an exponent or a slope
that's close to 3/4.
So, the same sort of pattern
shows up again.
And, after body size,
the biggest effector of energy use
across individuals is temperature.
So, if you look at Figure \ref{fig:BodyTemperature},
basically the warmer something is--
if we think about a frog
or a turtle or a plant--
the warmer it is,
the faster it uses energy,
and that increases at exponential rates
that are faster and faster and faster
up to the point where
you get extreme temperatures
and things start to fall apart
and things just start to die.
\begin{figure}[H]
\caption[Metabolic rate depends on body temperature]{Metabolic rate depends on body temperature\cite{dell2011systematic}}\label{fig:BodyTemperature}
\includegraphics[width=0.9\textwidth]{BodyTemperature}
\end{figure}
But, up until that point or close to it,
the warmer you are
the faster you use that energy.
And because, as we started seeing
at the start of this talk,
that we use energy for everything we do,
understanding how mass
and temperature affect metabolic rate
or the power we produce
tells us a lot about all kinds of
other things in biology.
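A compact way to write this dependence (a sketch of the standard form used in the metabolic theory of ecology\cite{brown2004toward}, rather than an equation shown in the lecture) is
\[
B \approx b_{0}\, M^{3/4}\, e^{-E/(kT)},
\]
where $M$ is body mass, $T$ is absolute temperature, $E$ is an activation energy, $k$ is Boltzmann's constant, and $b_{0}$ is a normalisation constant.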
So, for example, if we look at
heart rates across mammals--Figure \ref{fig:MammalianRestingHeartRate},
another way to think about this
in terms of mouse versus elephant
is that an elephant's heart beats
about ten times slower than a mouse.
So, every time an elephant's heart
beats once,
the mouse will have beat
ten times really quickly.
\begin{figure}[H]
\caption[Mammalian Resting Heart Rate]{Mammalian Resting Heart Rate\cite{savage2004predominance}}\label{fig:MammalianRestingHeartRate}
\includegraphics[width=0.9\textwidth]{MammalianRestingHeartRate}
\end{figure}
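The heart-rate comparison follows the same arithmetic (a sketch writing $f$ for heart rate and assuming it scales as the inverse quarter power of mass, which is what the ten-fold difference quoted here implies):
\[
f \propto M^{-1/4}, \qquad
\frac{f_{\mathrm{elephant}}}{f_{\mathrm{mouse}}} \approx \left(10^{4}\right)^{-1/4} = 10^{-1}.
\]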
Or, if you look at ecology--Figure \ref{fig:AndEcology}--
when we correct for temperature
and plot it versus size
and we think about how much
each individual produces in a system,
that actually follows a very tight,
clean pattern here as well
and it's true across a huge,
diverse variety of taxa
that includes plants, mammals,
insects, fish -
almost everything you can think of.
\begin{figure}[H]
\caption{Ecology and Temperature Correction}
\begin{subfigure}[b]{0.5\textwidth}
\caption{Temperature Corrected Individual Production\cite{ernest2003thermodynamic}}\label{fig:AndEcology}
\includegraphics[width=\textwidth]{AndEcology1}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\caption{Temperature Corrected Population Density\cite{ernest2003thermodynamic}}\label{fig:AndEcology2}
\includegraphics[width=\textwidth]{AndEcology2}
\end{subfigure}
\end{figure}
And finally, as another ecological example--Figure \ref{fig:AndEcology2}--
this affects how many individuals per area
that we see; the bigger you are
or the warmer you are,
the more energy you need
and the fewer individuals you can have around.
And, you see this systematic pattern
for animals,
which are the red dots
and plants,
which are the green dots.
And, one of the interesting things here
is that animals are much lower
than the plants
and that's because of
conversion efficiency,
where plants have to convert
sunlight into energy,
and, basically, all animals
either directly or indirectly
get their energy from plants.
So, they get about ten percent
of the energy from plants
that they can use
to produce their numbers.
So they're lower down
because they lose a lot of efficiency
in going from plants to animals.
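One compact way to summarise this density pattern (a sketch writing $N$ for the number of individuals per unit area and assuming the same three-quarter-power scaling as above; the lecture itself only states the qualitative trend):
\[
N \propto M^{-3/4}, \qquad
N \times B \propto M^{-3/4} \times M^{3/4} = M^{0},
\]
so the total energy used per unit area by a population comes out roughly independent of body size, with animals sitting about a factor of ten below plants because only around ten percent of the plants' energy is passed up to them.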
References: \cite{sibly2012metabolic,battley1987energetics,odum1976energy,odum1983systems,schmidt1997animal,brown2004toward,kempes2017thermodynamic}.
\section[Nonequilibrium Physics]{Nonequilibrium Physics -- Eric Smith}
We will talk about how motion itself can freeze. Topics covered:
\begin{itemize}
\item The equilibrium concept of phase transition
\item How phase transitions explain robust patterns
\item Why equilibrium isn’t enough to understand life
\item Phase transitions in dynamical systems--``frozen motion''
\end{itemize}
\subsection{Phase Transitions}
To understand what makes phase transitions special we have to start with the ``ordinary'' response of thermodynamic systems to controls. E.g. lava has viscosity, which increases smoothly as temperature is lowered--Figure \ref{fig:lava}.
\begin{figure}[H]
\begin{center}
\caption[The ``ordinary'' response of thermodynamic systems to controls]{The ``ordinary'' response of thermodynamic systems to controls. Hot lava flows like water: viscosity increases smoothly (gradually!) as temperature is lowered.}\label{fig:lava}
\includegraphics[width=0.8\textwidth]{lava}
\end{center}
\end{figure}
Phase transitions are different:
\begin{enumerate}
\item Water does not become harder as it is cooled
\item It turns to ice suddenly at a ``critical temperature''--Figure \ref{fig:water-ice}.
\begin{figure}[H]
\begin{center}
\caption{Phase transitions are different}\label{fig:water-ice}
\includegraphics[width=0.8\textwidth]{water-ice}
\end{center}
\end{figure}
\item The average alignment of the ice increases sharply from zero at the freezing temperature--Figure \ref{fig:water-ice1}.
\begin{figure}[H]
\begin{center}
\caption[The suddenness of change matters]{The suddenness of change matters. At the left the water molecules point in no particular direction.}\label{fig:water-ice1}
\includegraphics[width=0.8\textwidth]{phase-transition1}
\end{center}
\end{figure}
\item The direction and strength of the crystal together make up the ``order parameter'' of the transition--Figure \ref{fig:water-ice2}. They give a sense of what kind of order the system has, and how much. The order parameter's direction is arbitrary, while its strength depends on how far we are below the freezing temperature (a minimal mean-field sketch of this dependence is given after this list). It gives an idea of how globally the ice crystals are lined up, and how rigidly.
\begin{figure}[H]
\begin{center}
\caption{Concept of an order parameter}\label{fig:water-ice2}
\includegraphics[width=0.8\textwidth]{water-ice2}
\end{center}
\end{figure}
\item Change is sudden because ``you can’t have half a symmetry''--Figure \ref{fig:water-ice3}.
\begin{figure}[H]
\begin{center}
\caption{You can’t have half a symmetry}\label{fig:water-ice3}
\includegraphics[width=0.8\textwidth]{water-ice3}
\end{center}
\end{figure}
\begin{itemize}
\item A direction either exists or it doesn’t
\item The water is symmetric, so all frozen directions are equivalent. In order to freeze, you must choose one. It is this choice that gives rise to the suddenness with which the transition sets in. This causes phase transitions to play a very important role in our material world. They are responsible for most of the robust patterns that we see.
\end{itemize}
\item Phase transitions, cooperatively-maintained states, and robustness--Figure \ref{fig:a-girls-best-friend}.
\begin{figure}[H]
\begin{center}
\caption{Phase transitions, cooperatively-maintained states, and robustness}\label{fig:a-girls-best-friend}
\includegraphics[width=0.8\textwidth]{a-girls-best-friend}
\end{center}
\end{figure}
\begin{itemize}
\item An individual carbon atom is easy to move, so why are diamonds hard?
\item Diamonds are hard because many atoms lock each other in place
\item The order of the crystal is a ``robust'' property of freezing
\end{itemize}
\end{enumerate}
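As a minimal sketch of how the strength of an order parameter grows below the transition (the standard Landau mean-field form, included here for illustration rather than taken from the lecture), write the free energy as
\[
F(m) \approx a\,(T - T_{c})\,m^{2} + b\,m^{4}, \qquad a, b > 0 .
\]
Minimising over $m$ gives $m = 0$ above the critical temperature $T_{c}$, while below it
\[
|m| = \sqrt{\frac{a\,(T_{c} - T)}{2b}} \propto (T_{c} - T)^{1/2},
\]
so the direction of $m$ is arbitrary, but its magnitude switches on sharply at $T_{c}$ and grows the further we go below the transition.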
\subsection{Evolution happens on a background of robust architectures}
\begin{figure}[H]
\caption{Evolution happens on a background of robust architectures}\label{fig:evolution-robust}
\includegraphics[width=0.9\textwidth]{evolution-robust}
\end{figure}
\begin{itemize}
\item Universal small metabolites
\item RNA and proteins
\item Cellular and genomic individuality
\end{itemize}
Equilibrium ideas are not enough to explain the robust order of life--Figure \ref{fig:chicken:soup}.
\begin{figure}[H]
\caption{Equilibrium ideas are not enough to explain the robust order of life}\label{fig:chicken:soup}
\begin{subfigure}[t]{0.45\textwidth}
\caption{We can turn a chicken into chicken soup}
\includegraphics[width=\textwidth]{chicken2soup}
\end{subfigure}
\;\;\;\;
\begin{subfigure}[t]{0.45\textwidth}
\caption{Nobody has figured out how to turn chicken soup into a chicken}
\includegraphics[width=\textwidth]{soup2chicken}
\end{subfigure}
\end{figure}
A classical example is Stanley Miller's 1953 experiment\cite{miller1959organic}, which made the origins of life a serious scientific topic for the first time in history. Miller took a variety of gases, excited them with an electric spark, and saw that over a period of time the transparent mixture of gases was converted to a variety of darker and darker tars which contained amino acids, which are some of the things out of which living systems are made. But these amino acids were not life--Figure \ref{fig:life-interlocking}--just as the chicken soup was not the chicken--because life is made of interlocking structures and processes.
\begin{figure}[H]
\caption{Life is made of interlocking structures and processes}\label{fig:life-interlocking}
\includegraphics[width=0.9\textwidth]{life-interlocking}
\end{figure}
Can phase transition ideas be applied to these interlocking structures and processes? In fact they can. Maybe the easiest system in which to understand this is fracture propagation--Figure \ref{fig:fracture}. We begin with a lattice of molecules, bonded in some way, and we stress it in some way. To relieve the stress it might separate. If we put a nick in that system we can watch the fracture grow: it is the propagation of a self-reproducing pattern of bond breakage and realignment of the stress field--Figure \ref{fig:stress:dist}. The sense in which fracture propagation is a cooperative effect is that the breaking of bonds occurs at single atomic diameters, but the stress field is distributed across the entire material. The macroscopic deformation from the part that has already ruptured allows weak energy distributed throughout the material to focus, as the bright region shows--Figure \ref{fig:stress-zoomed}--all the way down to a single bond diameter, and it is that focusing that causes the stress field and the bond breakage to repeat and propagate as a self-reproducing pattern.
\begin{figure}[H]
\caption[Fracture Propagation]{Fracture Propagation\cite{bitzek2015atomistic}}\label{fig:fracture}
\begin{subfigure}[t]{0.55\textwidth}
\caption{The stress field is distributed across the entire material (purple)}\label{fig:stress:dist}
\includegraphics[width=\textwidth]{fracture}
\end{subfigure}
\;\;\;\;
\begin{subfigure}[t]{0.4\textwidth}
\caption{Weak energy distributed throughout the material focuses at the tip, as the bright region shows}\label{fig:stress-zoomed}
\includegraphics[width=\textwidth]{stress-zoomed}
\end{subfigure}
\end{figure}
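A minimal way to see how a weak, distributed stress can focus at the tip is the classical Inglis estimate for an elliptical notch (a standard result quoted here for illustration, not a number from the lecture): for a notch of half-length $a$ and tip radius $\rho$ under a far-field stress $\sigma$,
\[
\sigma_{\mathrm{tip}} \approx \sigma \left( 1 + 2\sqrt{\frac{a}{\rho}} \right),
\]
so as the crack grows while its tip stays atomically sharp, the amplification keeps increasing, which is why the pattern of bond breakage can keep repeating and propagating.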
We can understand the space-time patterns that are formed in these kinds of transitions as states of order in the same way as we can understand a crystal orientation as a state of order. One way to do this is with diagrams. Suppose I take an elastic solid--a block--and bend it so that it is stressed. Figure \ref{fig:stressed-solid} depicts a slice through it. We can ask what patterns are displayed in the space-time diagram.
\begin{figure}[H]
\caption[Understanding space-time patterns
as states of order]{Understanding space-time patterns
as states of order. Reduce to relevant
dimensions in space and use extra freedom to show
time}\label{fig:stressed-solid}
\includegraphics[width=0.9\textwidth]{stressed-solid}
\end{figure}
Figure \ref{fig:stressed-solid-comparison} allows us to compare the unstressed and stressed crystals. Unstressed has a kind of melted state, which has all its symmetry; stressed has a uniform but dilute stress field that persists through time. If a nick is introduced and the fracture propagates we see the stress field and fracture tip, which move in a uniform way through space and time. On the left the order parameter has two components, a direction, which can be arbitrary (symmetry) and a strength (physics); on the right there are two components, a location (arbitrary--symmetry) and a speed (physics).
\begin{figure}[H]
\caption[Comparing the unstressed and stressed crystals]{Comparing the unstressed and stressed crystals: the crystal’s order parameter had a
direction and a strength; the fracture’s order parameter has a
location and a speed.}\label{fig:stressed-solid-comparison}
\includegraphics[width=0.9\textwidth]{stressed-solid-comparison}
\end{figure}
So the concept of phase transitions and the spontaneous creation of order exist in the dynamical realm. Frozen motion doesn't mean that it is caught and stops: that would be the end of motion. Frozen motion is motion that is made robust by cooperative effects and phase transitions.
\subsection{What might be the order parameters of life?}
\begin{itemize}
\item They would be chemical and energetic, since life is based on chemistry and energy;
\item They would involve interdependent structure and process, where the structure carries the process and the process builds the structure.
\end{itemize}
Some candidates would be the characteristic molecules of Figure \ref{fig:evolution-robust}:
\begin{itemize}
\item Unchanging universal roles for small metabolites
\item Key macromolecules such as RNA
\end{itemize}
but another property could be found at the scale of the entire planet, and this is the effect that our biosphere has on the great biogeochemical cycles--Figure \ref{fig:biogeochemical}:
\begin{itemize}
\item Life alters cycling of Carbon, Nitrogen, Sulfur, and more
\item New compounds are also formed of these elements
\end{itemize}
\begin{figure}[H]
\caption[The great biogeochemical cycles]{The great biogeochemical cycles\cite{falkowski2008microbial}}\label{fig:biogeochemical}
\includegraphics[width=0.9\textwidth]{biogeochemical}
\end{figure}
The regular patterns in these changes are eligible to be order parameters for life as a phenomenon on this planet.
Another possibility would be Earth's energy throughput--Figure \ref{fig:EnergyThroughput}. Earth is the only green planet in our solar system, because the biosphere and its pigments have been brought into existence on this planet. The biosphere has changed the composition of our atmosphere. In particular it has brought molecular oxygen into existence in the atmosphere, which could be seen from space as sunlight shines through. The way life changes atmospheric composition is one of the ways scientists look for the possibility of life on a planet that is too far away for us to see anything else.
\begin{figure}[H]
\caption[Earth’s energy throughput]{Earth’s energy throughput: the Biosphere changes the way a planet converts sunlight into heat.\cite{meadows2005modelling}}\label{fig:EnergyThroughput}
\includegraphics[width=0.9\textwidth]{EnergyThroughput}
\end{figure}
Another characteristic of life that could be an order parameter is the concept of individuality, which has emerged in many different ways--Figure \ref{fig:emergence-individuality}.
\begin{itemize}
\item Individuality takes many forms
\item The order parameters in individual-based systems are proper names
\end{itemize}
The order parameters that we have looked at before have one value throughout the bulk, but if we want to understand the kind of order that comes into existence with individuality we encounter something that we normally discuss in the humanities--the need
for proper names.
\begin{figure}[H]
\caption{Individuality takes many forms}\label{fig:emergence-individuality}
\includegraphics[width=0.9\textwidth]{emergence-individuality}
\end{figure}
Take-home messages from the lecture:
\begin{itemize}
\item Phase transitions are one way natural systems spontaneously form order
\item The order is robust due to mutual reinforcement
\item Phase transitions can also lead to spontaneous order in processes like fractures
\item Candidates for living order parameters include chemical cycles and individuality
\end{itemize}
References: \cite{smith2011large,goldenfeld2018lectures,smith2015new,smith2016origin}
% end of text
% glossary
\printglossaries
% bibliography go here
\bibliographystyle{unsrt}
\addcontentsline{toc}{section}{Bibliography}
\bibliography{origins,wikipedia}
\end{document}
\chapter{Background}
\label{chap:background}
\section{Experimental}
My role in this work was entirely on the analysis of the measurements and the development of the chemically-consistent model.\footnote{The experimental measurements were designed and conducted by Drs Tom Arnold, Andrew Jackson, Adrian Sanchez-Fernandez, and Prof. Karen Edler, with the assistance of Dr Richard Campbell.}
However, it is necessary to briefly discuss the materials and experimental methods used to enable a complete understanding of the context of the work.
\subsection{Materials}
%
\begin{marginfigure}
\centering
\includegraphics[width=\linewidth]{reflectometry1/head_groups}
\caption{The two phospholipid forms investigated in this work, where R indicates the hydrocarbon tail: (a) phosphatidylglycerol (PG), (b) phosphocholine (PC).}
\label{fig:heads}
\end{marginfigure}
%
Choline chloride (\SI{99}{\percent}, Sigma-Aldrich), glycerol (\SI{99}{\percent}, Sigma-Aldrich), d$_{9}$-choline chloride (\SI{99}{\percent}, \SI{98}{\percent} D, CK Isotopes), and d$_{8}$-glycerol (\SI{99}{\percent}, \SI{98}{\percent} D, CK Isotopes) were used in the preparation of the DES.
This is achieved by mixing a $1$:$2$ molar ratio of choline chloride and glycerol and heating at \SI{80}{\celsius} until a homogeneous, transparent liquid is formed.\autocite{smith_deep_2014}
This was then stored under a dry atmosphere to reduce the amount of water dissolved in the solvent.
The limited availability of deuterated precursors led to only a fully protonated and a partially deuterated solvent\footnote{Abbreviated to hDES and hdDES respectively.} being prepared and used in the NR measurements.
The partially deuterated subphase was prepared using the following mixtures of precursors: \SI{1}{\mole} of \SI{0.38}{\mole} fraction of h-choline-chloride/\SI{0.62}{\mole} fraction of d-choline-chloride; and \SI{2}{\mole} of \SI{0.56}{\mole} fraction of h-glycerol/\SI{0.44}{\mole} fraction of d-glycerol.
The water content of the DES was assessed before and after each experiment by Karl-Fischer titration (Mettler Toledo DL32 Karl-Fischer Coulometer, Aqualine Electrolyte A, Aqualine Catholyte CG A) and found to be always below \SI{0.3}{wt/\percent}.
This was taken to be a negligible amount and would not have a considerable impact on the DES characteristics.\autocite{hammond_liquid_2016,hammond_resilience_2017}
1,2-dipalmitoyl-\emph{sn}-glycero-3-phosphocholine (C$_{16}$ tails, \SI{>99}{\percent}), 1,2-dimyristoyl-\emph{sn}-glycero-3-phosphocholine (C$_{14}$ tails, \SI{>99}{\percent}), and the sodium salt of 1,2-dimyristoyl-\emph{sn}-glycero-3-phospho-(1'-rac-glycerol) (C$_{14}$ tails, \SI{>99}{\percent})\footnote{Abbreviated to DPPC, DMPC, and DMPG respectively.} were obtained from Avanti Polar Lipids and 1,2-dilauroyl-\emph{sn}-glycero-3-phosphocholine (C$_{12}$ tails, \SI{>99}{\percent})\footnote{Abbreviated to DLPC.} was obtained from Sigma Aldrich and all were used without further purification.
Deuterated versions of DPPC (d$_{62}$-DPPC, \SI{>99}{\percent}, deuterated tails-only) and DMPC (d$_{54}$-DMPC, \SI{>99}{\percent}, deuterated tails-only) were obtained from Avanti Polar Lipids and used without further purification.
These phospholipids were dissolved in chloroform solution (\SI{0.5}{\milli\gram\per\milli\liter}) at room temperature.\footnote{PC indicates where the phospholipid molecule contains a phosphocholine head group, while PG indicates a phosphatidylglycerol head group, the chemical structures of these can be seen in Figure~\ref{fig:heads}.}
In the XRR experiment, the sample was prepared in-situ using the standard method for the spreading of insoluble monolayers on water.
A small amount of the phospholipid solution was spread on the liquid surface.
Following the evaporation of the chloroform, it is assumed that the resulting system is a subsurface of solvent with a monolayer of the phospholipid at the interface.
The surface concentration is then controlled by opening and closing the polytetrafluoroethylene\footnote{Abbreviated to PTFE} barriers of a Langmuir trough.
To reduce the volume used in the NR experiments, a small Delrin adsorption trough was used that did not have controllable PTFE barriers.
Therefore, although the surface concentration was nominally the same as for the XRR, the lack of precise control meant that it was determined to be inappropriate to co-refine the XRR and NR contrasts together.
\subsection{Methods}
The XRR measurements were carried out at the I07 beamline at the Diamond Light Source, with a photon energy of \SI{12.5}{\kilo\electronvolt} using the double-crystal deflector system.\autocite{arnold_implementation_2012}
The reflected intensity was measured for a $q$ range of \SIrange{0.018}{0.7}{\per\angstrom}.
The data were normalised with respect to the incident beam and the background was measured from off-specular reflection and subsequently subtracted.
All of the samples were allowed at least one hour to equilibrate and preserved under an argon atmosphere.
XRR data were collected for each of the phospholipids, DLPC, DMPC, DPPC, and DMPG at four SPs each,\footnote{DLPC: \SIlist{20;25;30;35}{\milli\newton\per\meter}, DMPC: \SIlist{20;25;30;40}{\milli\newton\per\meter}, DPPC: \SIlist{15;20;25;30}{\milli\newton\per\meter}, DMPG: \SIlist{15;20;25;30}{\milli\newton\per\meter}.} as measured with an aluminium Wilhelmy plate; measurements were conducted at \SIlist{7;22}{\celsius}.
An aluminium Wilhelmy plate was used instead of a traditional paper plate due to the low wettability of paper by the DES.
The NR experiments were performed on the FIGARO instrument at the Institut Laue-Langevin using time-of-flight methods.\autocite{campbell_figaro_2011}
Data were collected at two incident angles, \SIlist{0.62;3.8}{\degree}, providing a $q$ range from \SIrange{0.005}{0.18}{\per\angstrom}.
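For reference, the momentum transfer probed at a given incident angle $\theta$ and wavelength $\lambda$ follows the standard expression for specular reflection,
\[
q = \frac{4 \pi \sin{\theta}}{\lambda},
\]
so the two incident angles, combined with the wavelength band available in time-of-flight mode, together cover the quoted $q$ range.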
Two SPs for each phospholipid and contrast were measured.\footnote{DMPC: \SIlist{20;25}{\milli\newton\per\meter}, DPPC: \SIlist{15;20}{\milli\newton\per\meter}.}
As with the XRR measurements, the samples were given at least one hour to equilibrate and were kept under an inert atmosphere.
All measurements were conducted at \SI{22}{\celsius}.
\documentclass{beamer}
\usepackage[british]{babel}
\usepackage{beamerthemeWarsaw}
\usepackage{pgf}
\usepackage{multirow}
\usepackage{listings}
\usepackage{comment}
\newcommand{\labno}{8}
\newcommand{\labtitle}{MongoDB}
\newcommand{\lab}{Tutorial \labno: \labtitle}
\newcommand{\todo}[1]{\textbf{TODO}\footnote{\textbf{TODO:} #1}}
\hypersetup{colorlinks = true, linkcolor = orange, urlcolor = orange}
\title{\labtitle}
\author{Marios Fragkoulis}
\date{25 May 2015}
\begin{document}
\frame{\titlepage}
\section{Introduction}
%% DEFINE MORE SECTIONS
%% Upsert, Data model, Data aggregation
%%
\begin{frame}
\frametitle{Contents}
\begin{itemize}
\item Characteristics and use cases
\item Architecture
\item Data model - database, collection, document, fields
\item Operations - create, read, update, delete (CRUD)
\item Data aggregation
\item GUIs - Humongous
\end{itemize}
\end{frame}
%%
%%
\begin{frame}
\frametitle{Key characteristics}
\begin{itemize}
\item innovative
\item fast time-to-market
% fast iterative development with dynamic schema
\item scalable using shards
% sharding: store data across multiple machines to achieve horizontal scaling
% pluggable storage architecture
\item reliability, fault-tolerance with replica sets
% replica sets: primary and secondaries
% if primary is down, a secondary will become primary
% an arbiter can coordinate the election for a new primary by collecting heartbeats from
% secondaries
\item inexpensive to operate
% commodity hardware, well supported operations management
\item tuneable consistency levels
% Strict consistency when reading from primary.
% If read preference is set to (nearest) secondary (to achieve higher throughput,
% then stale data may be returned because of asynchronous replication.
% Eventual consistency when reading from secondaries because eventually
% the secondary member's state will reflect the primary's state
% Shards can also be replica sets.
\end{itemize}
\end{frame}
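%%
\begin{frame}[containsverbatim]
\frametitle{Key characteristics: a quick look from the shell}
\begin{block}{A minimal sketch only: standard shell helpers, assuming a replica set has already been set up with rs.initiate()}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Which member is primary, which are secondaries?
rs.status()
// Show the current replica set configuration.
rs.conf()
// Let this connection read from the nearest member
// instead of only the primary (trades strict
// consistency for throughput).
db.getMongo().setReadPref("nearest")
\end{lstlisting}
\end{block}
% These commands only illustrate the replica set and
% read preference ideas listed on the previous slide.
\end{frame}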
%%
%%
\begin{frame}
\frametitle{Use cases}
\begin{itemize}
\item Good for short run or single processes, but not long run processes
% compare with long run processes as is the case with an ERP environment
% Orcale best suited for that.
\item Example use case: fits well CMS, but not ERP
\item Good for fluid data, which do not fit a rigid schema
\begin{itemize}
\item dynamic schema
\item nested data model
\end{itemize}
\item Supports complex operations on fluid data
\item Good for online transaction processing (OLTP)
% does ok in batch processing
\end{itemize}
\end{frame}
%%
%%
\begin{frame}
\frametitle{Architecture}
\begin{itemize}
\item Client - server
\item Distributed
\begin{itemize}
\item within data centers
\item across data centers
\end{itemize}
\end{itemize}
\end{frame}
%%
%%
\begin{frame}
\frametitle{Essentials: look around the Mongo Shell (1)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
\texttt{mongo -u \textless user\textgreater{} -p \textless password\textgreater{} \textless db\_name\textgreater} & Connect to local Mongo database \\
\texttt{show dbs} & Show the list of existing databases \\
\texttt{db} & Show the db I am currently connected to \\
\texttt{db.getCollectionNames()} & Show the list of collections in a database \\
\texttt{db.\textless collection-name\textgreater. find()} & Show the list of documents in a collection \\
\hline
\end{tabular}
\end{frame}
%%
\begin{frame}
\frametitle{Essentials: look around the Mongo Shell (2)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
\texttt{db.\textless collection-name\textgreater. find(\textless filter\textgreater)} & Show the list of documents that satisfy the filter \\
\texttt{db.\textless collection-name\textgreater. insert(\textless document\textgreater)} & Insert a document in the collection \\
\texttt{db.\textless collection-name\textgreater. remove(\textless filter\textgreater)} & Remove a document from the collection \\
\hline
\end{tabular}
\end{frame}
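%%
\begin{frame}[containsverbatim]
\frametitle{Essentials: a minimal example session}
\begin{block}{A sketch that strings the commands above together (the collection cars and the sign value are placeholders)}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Which databases exist, and which one am I using?
show dbs
db
// Which collections does this database hold?
db.getCollectionNames()
// Insert a document, list all documents, then
// remove only the document we just inserted.
db.cars.insert( { sign: "TEST123" } )
db.cars.find()
db.cars.remove( { sign: "TEST123" } )
\end{lstlisting}
\end{block}
\end{frame}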
\section{Data model}
%%
\begin{frame}
\frametitle{Nested data model (1)}
\begin{itemize}
\item Strings
\item Numbers
\item Booleans
\item Pairs - string: value
\item Objects - A collection of name/value pairs surrounded by curly braces.
\item Arrays - A collection of objects or values surrounded by square brackets.
\end{itemize}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Nested data model (2)}
\begin{block}{A complete example that presents the data model.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
{
owner: {
name: "Vin",
surname: "Diesel"
},
kilometers: 40000,
modified: true,
accessories: [ { nitro: "NOS" }, "exhaust" ]
}
\end{lstlisting}
\end{block}
% kilometers, modified, accessories without quotes:
% the parser tries hard to be nice and understands those as fields.
\end{frame}
%%
\begin{frame}
\frametitle{Useful bits}
\begin{itemize}
\item Use double quotes when referring to document fields
(although their use is dependent on the
\href{http://en.wikipedia.org/wiki/JavaScript\_syntax\#Variables}{Javascript variable naming rules})
\item In values, use double quotes for strings only.
% \item Insert vs upsert
% \item Update vs replace
\item The data model is very flexible (but hurts data consistency)
% Elements of different types in arrays
% Documents of different form in the same collection
\end{itemize}
\end{frame}
%%
\begin{frame}
\frametitle{Naming rules as per MongoDB's Javascript API}
\begin{itemize}
\item Name syntax that does not require quotes complies with:
\begin{itemize}
\item names that start with a letter, \_, or \$ sign, and,
% Javascript is case sensitive so letters are: a-z and A-Z.
\item in addition to the above, subsequent characters can also be a number
\end{itemize}
\end{itemize}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data model: when quotes are necessary (1)}
\begin{block}{The carefree way.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.update( {"owner.name": "Vin",
"owner.surname": "Diesel" },
{
$set: { "kilometers": 40000,
"modified": true,
"accessories": [
{ "nitro": "NOS" } ] }
} )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data model: when quotes are necessary (2)}
\begin{block}{The more readable way.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.update( {"owner.name": "Vin",
"owner.surname": "Diesel" },
{
$set: { kilometers: 40000,
modified: true,
accessories: [ { nitro: "NOS" } ] }
} )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
%%
\begin{frame}
\frametitle{Relationships between documents}
\begin{itemize}
\item embedded documents
\begin{itemize}
\item One-to-one relationships between documents
\item One-to-many relationships between documents
\end{itemize}
\item document references
\begin{itemize}
\item One-to-many relationships between documents
\end{itemize}
\end{itemize}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Referencing or embedding (1)}
\begin{block}{Referencing}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
{
_id: "574956232",
name: "Vin",
surname: "Diesel",
}
{
sign: "IMN5672",
kilometers: 0,
modified: false,
accessories: [],
tyres: "Bridgestone",
owner_id: "574956232"
}
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Referencing or embedding (2)}
\begin{block}{Embedding: the one-to-many relationship case}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
{
sign: "IMN5672",
kilometers: 0,
modified: false,
accessories: [],
tyres: "Bridgestone",
owners: [
{ name: "Vin",
surname: "Diesel" },
{ name: "Gin",
surname: "Diesel" } ]
}
\end{lstlisting}
\end{block}
% Save query time and effort
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Referencing or embedding (3)}
\begin{block}{Embedding: the unbounded many-to-many relationship case}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
{ ...
sign: "IMN5672",
manufacturer:
{ name: "Audi",
origin: "Germany" }
}
{ ...
sign: "NIA5352",
manufacturer:
{ name: "Audi",
origin: "Germany" }
}
\end{lstlisting}
\end{block}
% Save query time and effort.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Referencing or embedding (4)}
\begin{block}{Referencing: the unbounded many-to-many relationship case}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
{
name: "Audi",
origin: "Germany"
}
{...
sign: "IMN5672",
manufacturer_id: "Audi"
}
{ ...
sign: "NIA5352",
manufacturer_id: "Audi"
}
\end{lstlisting}
\end{block}
% Reduce replication, save storage space.
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Referencing or embedding (5)}
\begin{block}{Referencing: example query routine using Javascript in client}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
var cursor = db.cars.find();
while (cursor.hasNext()) {
c = cursor.next();
var cursor2 = db.cars.find(
{ name: c[ 'manufacturer_id' ] } );
while (cursor2.hasNext()) {
cr = cursor2.next();
printjson(c[ 'sign' ] + ", " +
cr[ 'name' ] + ", " + cr[ 'origin' ] );
}
}
\end{lstlisting}
\end{block}
% Reduce replication, save storage space.
\end{frame}
%%
\begin{frame}
\frametitle{Query the schema with the Mongo Shell}
\begin{tabular}{p{18em}p{6em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
% but no schema constraint applies so what is this?
\texttt{var schema = db.\textless collection-name\textgreater{}.findOne()} & Show the top-level schema of the first document in a collection \\
\texttt{for (var field in schema) \{ print (field) ; \} } & \\
% Works because schema is a document, i.e. a map, and you are iterating its keys.
\hline
\end{tabular}
\end{frame}
\section{Operations}
%%
%%
\begin{frame}
\frametitle{Create with the Mongo Shell (1)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
% but no schema constraint applies so what is this?
\texttt{use \textless db-name\textgreater{}} & Create and switch to database \\
\texttt{db.addUser(\textless user\textgreater{}, \textless password\textgreater{})} & Add user to database \\
\texttt{mongo --authenticationDatabase \textless db-name\textgreater{} -u \textless user\textgreater{} -p \textless password\textgreater{} \textless db-name-to-connect-to\textgreater} & Connect to Mongo database by authenticating in another database \\
\hline
\end{tabular}
\end{frame}
%%
\begin{frame}
\frametitle{Create with the Mongo Shell (2)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
\texttt{db.createCollection( \textless collection-name\textgreater, options)} & Create a collection; options are not necessary \\
\texttt{db.\textless collection-name\textgreater. insert( \{ field1: "value1", field2: "value2" \})} & Create a document in a collection \\
\hline
\end{tabular}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Insert embedded objects}
\begin{block}{Check out the nested data model}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.insert( { owner: { name: "Vin",
surname: "Diesel",
},
sign: "IMN5672",
kilometers: 0,
modified: false,
accessories: [],
tyres: "Bridgestone",
} )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
\begin{frame}
\frametitle{Update with the Mongo Shell}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
\texttt{db.\textless collection-name\textgreater. update( \textless filter\textgreater, \textless update\textgreater )} & Update a document in a collection \\
\texttt{db.\textless collection-name\textgreater. update( \textless filter\textgreater, \textless update\textgreater, \{ multi: true \} )} & Update multiple documents in a collection (those that match the filter) \\
\texttt{db.\textless collection-name\textgreater. remove( \{ \} )} & Remove all documents from a collection \\
\texttt{db.\textless collection-name\textgreater. drop()} & Drop a collection \\
\hline
\end{tabular}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Update documents (1)}
\begin{block}{Filter 1 field \\
Update 1 field \\
Apply to a single document}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Update the field tyres to the value "Michelin"
// of a single document, car, in the collection
// cars whose field sign is "IMN5672".
db.cars.update( { sign: "IMN5672" },
{
$set: { tyres: "Michelin" },
$currentDate: { lastModified: true }
} )
\end{lstlisting}
\end{block}
%$set operator: replace the value of a field with the provided value
% $currentDate operator: set the value of a field to current date, either as a Date or as a timestamp
% $currentDate updates the lastModified field with the current Date
% If lastModified was set to false, the date would, again be set.
% The rationale is to provide consistency with $unset: false which
% removes a field from a document despite the false value.
% $currentDate works for MongoDB version 2.6.0 and onwards
% Other update operators: $inc, $mul, $rename, $setOnInsert, $unset, $min, $max
%Specifically, for arrays: $addToSet, $pop, $pullAll, $pull, $push
% Array operator modifiers: $each, $slice, $sort, $position
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Update documents (2)}
\begin{block}{Filter 2 fields \\
Update 1 field \\
Apply to a single document}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Update the field tyres to the value "Michelin"
// of a single document, car, in the collection
// cars whose field sign is "IMN5672".
db.cars.update( { "owner.name": "Vin",
"owner.surname": "Diesel" },
{
$set: { tyres: "Michelin" },
$currentDate: { lastModified: true }
} )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Update documents (3)}
\begin{block}{Filter 2 fields \\
Update 2 fields \\
Apply to a single document}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Update the fields brand and model of object tyres
// of a single document, car, in the collection
// cars whose field sign is "IMN5672".
db.cars.update( { "owner.name": "Vin",
"owner.surname": "Diesel" },
{
$set: { "tyres.brand": "Michelin",
"tyres.model": "SC3" },
$currentDate: { lastModified: true }
} )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Update documents (4)}
\begin{block}{Filter 2 fields \\
Update 2 fields \\
Apply to all documents (that match the filter)}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Update the fields brand and model of object tyres
// of all documents, that model car instances,
// in the collection cars whose owner object field name
// is "Vin" and surname is "Diesel".
db.cars.update( { "owner.name": "Vin",
"owner.surname": "Diesel" },
{
$set: { "tyres.brand": "Michelin",
"tyres.model": "SC3" },
$currentDate: { lastModified: true }
},
{ multi: true } )
\end{lstlisting}
\end{block}
% $currentDate works for MongoDB version 2.6.0 and onwards
\end{frame}
\begin{frame}[containsverbatim]
\frametitle{Update documents (5)}
\begin{block}{Replace a document}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Provide the characteristic id (not the _id field)
// of the document to be replaced and the new
// document that will replace the old.
// The new document will not include any fields of
// the old document that are not present in the new one.
db.cars.update( { "car_id" = "3453453" },
{ "owner": { "name": "Gas",
"surname": "Diesel",
},
"kilometers": 0,
"modified": false
} )
\end{lstlisting}
\end{block}
% Explain auto-generated field _id.
% Pay attention to $set; none here.
\end{frame}
\begin{frame}[containsverbatim]
\frametitle{Update documents (6)}
\begin{block}{Replace a document with upsert}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Upsert: replace if found, insert if not found.
// The new document will not include any fields of
// the old document that are not present in the new one.
db.cars.update( { "car_id" = "3453454" },
{ "owner": { "name": "Gas",
"surname": "Diesel",
},
"kilometers": 0,
"modified": false
},
{ upsert: true } )
\end{lstlisting}
\end{block}
% Upsert: replace document or insert if not found
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Update document (7)}
\begin{block}{Add element to array field}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.update( {"owner.name": "Vin",
"owner.surname": "Diesel" },
{
$push: { accessories: "GPS" }
} )
\end{lstlisting}
\end{block}
\end{frame}
% This will update a single document that will match the constraint.
% To update multiple documents use { multi: true }
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Update document (8)}
\begin{block}{Add multiple elements to array field}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.update( {"owner.name": "Vin",
"owner.surname": "Diesel" },
{
$push { "accessories": { $each: [ "GPS",
"ESP" ] } }
} )
\end{lstlisting}
\end{block}
\end{frame}
% This will update a single document that will match the constraint.
% To update multiple documents use { multi: true }
%%
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Update document (9) - rank}
\begin{block}{Add multiple elements to array field}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following document:
// D1: { carId: "4567564" },
// { speedTests: [ { name: "VD", time: 5 },
// { name: "PW", time: 4.7 } ] }
db.cars.update( { carId: "4567564" },
{
$push: { speedTests: { $each: [
                  { name: "VD", time: 5.2 },
                  { name: "VD", time: 4.5 },
                  { name: "PW", time: 4 } ],
                $sort: { time: 1 },
                $slice: 3 }
} } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (1)}
\begin{block}{Query all documents}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.findOne()
db.cars.find()
db.cars.find( { } )
\end{lstlisting}
\end{block}
% findOne(): which one? according to the natural order of documents on disk.
% For capped collections, this is the same as the insertion order.
% Capped collections: fixed size, high throughput collections, similar to circular buffers (e.g. I/O).
% kilometers, modified, accessories without quotes:
% the parser tries hard to be nice and understands those as fields.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (2)}
\begin{block}{Query documents that satisfy a filter}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars whose sign is "IMN5672".
db.cars.find( { sign: "IMN5672" } )
\end{lstlisting}
\end{block}
% kilometers, modified, accessories without quotes:
% the parser tries hard to be nice and understands those as fields.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (3)}
\begin{block}{Query documents that satisfy a filter: inequality}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars with less than 10K kilometers and
// horsepower greater than 100.
db.cars.find( { kilometers: { $lt: 10000 },
                horsepower: { $gt: 100 } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (4)}
\begin{block}{Query documents that satisfy a filter: operators on the same field}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Note: repeating the same field in one filter does
// not mean or - combining operators on a single field
// means and. This finds the cars with more than 10K
// and less than 100K kilometers.
db.cars.find( { kilometers: { $gt: 10000,
                              $lt: 100000 } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (5)}
\begin{block}{Query documents that satisfy a filter: logical or}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars with less than 10K kilometers or
// more than 100 horsepower.
db.cars.find( {$or: [ { kilometers: { $lt: 10000 } },
{ horsepower: { $gt: 100 } } ]
} )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (6)}
\begin{block}{Query documents that satisfy a filter: logical and}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars with more than 10K kilometers and
// less than 100K kilometers.
db.cars.find( {$and: [ { kilometers: { $gt: 10000 } },
{ kilometers: { $lt: 100000 } } ]
} )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (7)}
\begin{block}{Query documents that satisfy a filter: and, or combined}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars with less than 10K kilometers or
// greater than 100K kilometers, fitted with
// Michelin tyres.
db.cars.find( { $or: [ { kilometers: { $lt: 10000 } },
                       { kilometers: { $gt: 100000 } } ],
                "tyres.brand": "Michelin"
} )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (8)}
\begin{block}{Query embedded documents that satisfy a filter - exact match}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars whose owner is named "Vin Diesel".
db.cars.find( { owner: { surname: "Diesel",
name: "Vin" } } )
\end{lstlisting}
\end{block}
% Filter order matters.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (9)}
\begin{block}{Query embedded documents that satisfy a filter - exact match}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars whose owner is named "Vin Diesel".
db.cars.find( { owner: { name: "Vin",
surname: "Diesel" } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (10)}
\begin{block}{Query fields in embedded documents that satisfy a filter: exact match}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Find the cars whose owner is named "Vin Diesel".
db.cars.find( { "owner.name": "Vin",
"owner.surname": "Diesel" } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (11)}
\begin{block}{Query embedded array that satisfies a filter - exact match}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: [ "exhaust",
{ nitro : "NOS" } ] } )
\end{lstlisting}
\end{block}
% Filter order matters.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (12)}
\begin{block}{Query embedded array that satisfies a filter - exact match}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: [ { nitro : "NOS" },
"exhaust" ] } )
\end{lstlisting}
\end{block}
% filter order matters.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (13)}
\begin{block}{Query embedded arrays that satisfy a filter}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: "exhaust" } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (14)}
\begin{block}{Query embedded arrays by position}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { "accessories.1": "exhaust" } )
\end{lstlisting}
\end{block}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (15)}
\begin{block}{Query embedded arrays - element precise matching}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., services: [ 10000, 100000 ], ... }
// D2: {..., services: [ 10000, 50000, 100000 ], ... }
db.cars.find( { services: { $elemMatch: { $gt: 10000,
$lt: 100000 } }
} )
\end{lstlisting}
\end{block}
% $elemMatch: at least one element of the array satisfies all the criteria.
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (16)}
\begin{block}{Query embedded arrays - match combination of elements}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., services: [ 10000, 100000 ], ... }
// D2: {..., services: [ 10000, 50000, 100000 ], ... }
db.cars.find( { services: { $gt: 10000, $lt: 100000 } }
)
\end{lstlisting}
\end{block}
% Some combination of elements satisfy the criteria
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (17)}
\begin{block}{Query embedded documents in embedded arrays that satisfy a filter}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: "exhaust",
"accessories.nitro": "NOS" } )
\end{lstlisting}
\end{block}
% Filter order does not matter
% + Array queries: exact match, element match
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (18)}
\begin{block}{Query embedded documents in embedded arrays by position}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS" },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS" },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: "exhaust",
"accessories.1.nitro": "NOS" } )
\end{lstlisting}
\end{block}
% Filter order does not matter
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Read documents (19)}
\begin{block}{Query embedded documents in embedded arrays by position}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: {..., accessories: [ { nitro: "NOS",
// bought: ISODate("2013-06-22T21:00:00Z") },
// "exhaust" ] ...}
// D2: {..., accessories: [ { nitro: "NOS",
// bought: ISODate("2010-03-12T11:00:00Z") },
// "exhaust", "GPS" ] ... }
db.cars.find( { accessories: { $elemMatch: {
nitro: "NOS",
bought: ISODate("2010-03-12T11:00:00Z")
} } } )
\end{lstlisting}
\end{block}
% Filter order does not matter
\end{frame}
\begin{frame}
\frametitle{Remove with the Mongo Shell}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Shell command} & \bfseries{Description} \\
\hline
\texttt{db.\textless collection-name\textgreater. remove( \{"field": "value" \} )} & Remove a document from a collection \\
\texttt{db.\textless collection-name\textgreater. remove( \{"field": "value", justOne: true \} )} & Remove only one document from a collection (multiple documents could match the condition) \\
\texttt{db.\textless collection-name\textgreater. remove( \{ \} )} & Remove all documents from a collection \\
\texttt{db.\textless collection-name\textgreater. drop()} & Drop a collection \\
\hline
\end{tabular}
\end{frame}
%%
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (1)}
\begin{block}{Remove all documents from a collection.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.remove( { } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (2)}
\begin{block}{Remove all documents that match a filter.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.remove( { kilometers: 200000 } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (3)}
\begin{block}{Remove a single document that matches a filter.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.remove( { kilometers: 200000 }, 1 )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (4)}
\begin{block}{Remove a specific document that matches a filter.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.findAndModify( { query: { kilometers: 200000 },
sort: { rating: -1 },
remove: true
} )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (5)}
\begin{block}{Remove the first or last element from an array.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: { kilometers: 3000, accessories: [
// { nitro: "NOS" }, "exhaust" ] ...}
db.cars.update( { kilometers: 3000 },
{ $pop: { accessories: -1 } } )
\end{lstlisting}
\end{block}
% Remove first element
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Remove documents (6)}
\begin{block}{Remove the first or last element from an array.}
%Elements can be of different types.
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine we have the following documents:
// D1: { kilometers: 3000, accessories: [
// { nitro: "NOS" }, "exhaust" ] ...}
db.cars.update( { kilometers: 3000 },
{ $pop: { accessories: 1 } } )
\end{lstlisting}
\end{block}
% You can also have examples for $pull and $pullAll
\end{frame}
\section{Aggregation}
%%
\begin{frame}
\frametitle{Data aggregation}
\begin{itemize}
\item Aggregation pipelines
\item Map-Reduce
\item Single purpose aggregation
\end{itemize}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - aggregation pipelines (1)}
\begin{block}{Group documents subject to a condition and aggregate fields}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Aggregate the amount of money services
// have cost for each car that has less than
// 30000 kilometers.
db.cars.aggregate( [
{ $match: { kilometers: {
$lt: 30000 } } },
{ $group: { _id: "$sign", total: {
$sum: "$amount" } } }
] )
\end{lstlisting}
\end{block}
% Aggregate: wrapper to the aggregate database command
% Alternative to MapReduce, operates on single collection.
% The preferred alternative where the complexity of map-reduce
% does not allow providing guarantees as to the computation task.
% Some pipeline stages take a pipeline expression as operand.
% Pipeline expressions specify the transformation to apply to the input documents.
% Expressions have a document structure and can contain other expression.
% Pipeline expression can only operate on the current document in the pipeline.
% Expressions are stateless and are evaluated when seen by the aggregation
% context except for aggregators.
% Early filtering: $match, $limit, $skip
% $match and $sort can take advantage of an index when they occur at the
% beginning of the pipeline.
% Interesting operations: $unwind (flatten-like)
% Interesting examples: user preference data; likes example
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - aggregation pipelines (2)}
\begin{block}{Group documents subject to a condition and aggregate fields}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Aggregate the total amount of money services
// have cost for each car if the total amount
// does not exceed 1000 Euros.
db.cars.aggregate( [
{ $group: { _id: "$sign", total: {
$sum: "$amount" } } },
{ $match: { total: {
$lt: 1000 } } }
] )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - aggregation pipelines (3)}
\begin{block}{Group documents and then group again}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.aggregate( [
{ $group: { _id: {
tyre_brand: "$tyres.brand",
tyre_model: "$tyres.model" },
amount: { $sum: "$amount" }
} },
{ $group: { _id: "$_id.tyre_brand",
totalAmount: { $sum: "$amount" }
} } ] )
\end{lstlisting}
\end{block}
\end{frame}
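%%
% The following slide is an illustrative sketch added to show the
% $unwind stage mentioned in the notes of slide (1). The pipeline and
% field names (speedTests, time, sign) reuse the cars examples above;
% it is one possible example, not a prescribed query for this data set.
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - aggregation pipelines (4)}
\begin{block}{Flatten an array with \$unwind and aggregate its elements}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Illustrative sketch: average speed test time
// per car. $unwind emits one document per element
// of the speedTests array.
db.cars.aggregate( [
{ $unwind: "$speedTests" },
{ $group: { _id: "$sign",
avgTime: { $avg: "$speedTests.time" } } }
] )
\end{lstlisting}
\end{block}
\end{frame}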
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Map-Reduce (1)}
\begin{block}{Group documents subject to a condition, aggregate a field, and project some fields}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.mapReduce(
function() { emit( this.sign, this.amount); },
function(key, values) {
return Array.sum(values) },
{ query: { kilometers: { $lt: 30000} },
out: "services_total_amount_per_car" }
)
\end{lstlisting}
\end{block}
% MapReduce functions: javascript code
% Input: document of a single collection
% Output: can write to collections.
% Input and output collections can be sharded.
% In addition, to map and reduce, there is also the finalize function,
% which allows final modifications to the output of the map and reduce operation,
% e.g. additional calculations.
% If output is written to a collection, then subsequent map-reduce operations
% may happen that merge, replace-merge, or reduce new results with previous
% results.
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Map-Reduce (2.a)}
\begin{block}{Map car signs to amount of money paid in each service}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
var mapFunction = function() {
for (var it = 0; it < this.items.length;
it++) {
var key = this.sign;
var value = {
count: 1,
amount: this.items[it].amount
};
emit(key, value);
} };
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Map-Reduce (2.b)}
\begin{block}{Reduce}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
var reduceFunction = function(sign, countObjVals) {
totalAmount = { count: 0, amount: 0 };
for (var it = 0; it < countObjVals.length;
it++) {
totalAmount.count +=
countObjVals[it].count;
totalAmount.amount +=
countObjVals[it].amount;
}
return totalAmount;
};
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Map-Reduce (2.c)}
\begin{block}{Finalize}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
var finalizeFunction = function(sign, visit_stats) {
visit_stats.avg =
visit_stats.amount / visit_stats.count;
return visit_stats;
};
\end{lstlisting}
\end{block}
\end{frame}
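%%
% The following slide is an illustrative sketch added to show how the
% mapFunction, reduceFunction and finalizeFunction of the previous
% slides can be wired together. The output collection name
% "service_stats_per_car" is an arbitrary choice, not taken from the
% original example.
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Map-Reduce (2.d)}
\begin{block}{Run map-reduce with a finalize step}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Illustrative sketch using the functions defined
// on the previous slides.
db.cars.mapReduce(
mapFunction,
reduceFunction,
{ out: "service_stats_per_car",
finalize: finalizeFunction }
)
\end{lstlisting}
\end{block}
\end{frame}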
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Single purpose aggregation (1)}
\begin{block}{Count matching documents}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Count documents in the collection.
db.cars.count()
// Count documents in the collection that match a filter.
db.cars.count( { kilometers: 50000 } )
\end{lstlisting}
% The simplest way to aggregate
% Context: single collection
% Not very powerful or flexible
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Single purpose aggregation (2)}
\begin{block}{Return distinct field values from a collection}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.distinct( "sign" )
\end{lstlisting}
% The simplest way to aggregate
% Not very powerful or flexible
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data aggregation - Single purpose aggregation (3)}
\begin{block}{Group data based on field values}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.group( {
key: { sign: 1 },
cond: { kilometers: { $lt: 50000 } },
reduce: function(cur, result) {
result.sum += cur.amount },
initial: { sum: 0 }
} )
\end{lstlisting}
% group does not support data in sharded collections.
\end{block}
\end{frame}
%%
\section{Indexes}
%%
\begin{frame}
\frametitle{Indexes}
\begin{itemize}
\item speed up queries that target a specific collection
\item are a primary performance optimization mechanism
\item limit the number of documents to examine
\item work by having documents in a sorted order for a specific (sub)field(s)
% actually indexes store references to fields
\end{itemize}
\end{frame}
%%
%%
\begin{frame}
\frametitle{Index types}
\begin{itemize}
\item \_id, default field, primary key, unique ascending index
% sorted by default on \_id?
\item single field
\item compound index
%user-defined index on multiple fields; order matters
\item multikey index
% array index
\item geospatial index
% index geospatial coordinate data
\item text index
% text search for string content in a collection using root words
\item hashed index
\end{itemize}
\end{frame}
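%%
% The following slide is an illustrative sketch added to show how some
% of the index types listed above are created. The location field used
% for the geospatial index is an assumption and does not appear in the
% cars examples elsewhere in these slides.
\begin{frame}[containsverbatim]
\frametitle{Index types - examples}
\begin{block}{Create multikey, hashed and geospatial indexes}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Multikey index: created automatically when the
// indexed field (accessories) holds an array.
db.cars.createIndex( { accessories: 1 } )
// Hashed index on the car sign.
db.cars.createIndex( { sign: "hashed" } )
// Geospatial index; assumes a location field
// holding GeoJSON data.
db.cars.createIndex( { location: "2dsphere" } )
\end{lstlisting}
\end{block}
\end{frame}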
%%
%%
\begin{frame}
\frametitle{Index properties}
\begin{itemize}
\item unique indexes
% ascending, reject duplicate values,
% otherwise functionally equal to other types of indexes
\item sparse indexes
% only index documents that contain the indexed field
% combinable with unique
\item TTL indexes
% automatically remove items after an amount of time
\end{itemize}
\end{frame}
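%%
% The following slide is an illustrative sketch added to show the index
% properties listed above. The owner.email field used for the sparse
% index is an assumption and does not appear in the cars examples
% elsewhere in these slides.
\begin{frame}[containsverbatim]
\frametitle{Index properties - examples}
\begin{block}{Create unique, sparse and TTL indexes}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Unique index: reject duplicate car signs.
db.cars.createIndex( { sign: 1 }, { unique: true } )
// Sparse index: only documents that contain
// owner.email are indexed.
db.cars.createIndex( { "owner.email": 1 },
{ sparse: true } )
// TTL index: remove documents one hour after
// their modified date.
db.cars.createIndex( { modified: 1 },
{ expireAfterSeconds: 3600 } )
\end{lstlisting}
\end{block}
\end{frame}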
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Single field indexes (1)}
\begin{block}{Create single field index and use in query}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "kilometers" : 1 } )
db.cars.find( {
kilometers : { $gt: 10000 }
} )
\end{lstlisting}
% Quotes needed?
\end{block}
\end{frame}
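%%
% The following slide is an illustrative sketch added to show how to
% check whether a query can use the index created on the previous
% slide; the exact explain() output format depends on the MongoDB
% version.
\begin{frame}[containsverbatim]
\frametitle{Single field indexes - checking index use}
\begin{block}{Use explain() to verify that a query uses an index}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Illustrative sketch: the query plan shows
// whether the kilometers index is used and how
// many documents were examined.
db.cars.find( {
kilometers : { $gt: 10000 }
} ).explain()
\end{lstlisting}
\end{block}
\end{frame}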
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Single field indexes (2)}
\begin{block}{Create single field index on embedded document and use in query}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// Imagine documents with an embedded speedTests array.
db.cars.createIndex( { "speedTests.time" : -1 } )
db.cars.find( {
"speedTests.time" : { $lt: 4 }
} )
\end{lstlisting}
% Quotes needed in createIndex?
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Compound indexes (1)}
\begin{block}{Create compound index and use in query}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "kilometers": 1,
"speedTests.time": 1 } )
db.cars.find( {
kilometers: { $gt: 10000 },
"speedTests.time": { $lt: 4 }
} )
\end{lstlisting}
\end{block}
% ability to support sort operation on compound index keys depends on
% - sorting order, which should be the same as the index
% - sort direction, which should be the same as the index or
% the reverse one for all indexes.
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Compound indexes (2)}
\begin{block}{Sort order}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "kilometers": 1,
"speedTests.time": 1 } )
// Good
db.cars.find().sort( {
kilometers : 1,
"speedTests.time" : 1
} )
// Good
db.cars.find().sort( {
kilometers : -1,
"speedTests.time" : -1
} )
// Cannot use the index for the sort
db.cars.find().sort( {
kilometers : 1,
"speedTests.time" : -1
} )
// Cannot use the index for the sort
db.cars.find().sort( {
kilometers : -1,
"speedTests.time" : 1
} )
\end{lstlisting}
\end{block}
% Index creation is motivated by application needs.
% Applications query for specific order;
% so create index with that sort order
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Compound indexes (3)}
\begin{block}{Index prefixes}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
// kilometers is a prefix.
db.cars.createIndex( { "kilometers": 1,
"speedTests.time": 1 } )
// kilometers, and kilometers with speedTests.time,
// are prefixes.
db.cars.createIndex( { "kilometers": 1,
"speedTests.time": 1,
"tyres.brand": 1 } )
\end{lstlisting}
\end{block}
% Index can be used to query kilometers.
% Index can be used to query kilometers and speedTests.time.
% Index can be used to query kilometers, speedTests.time, and tyres.brand.
% Index can be used to query kilometers and tyres.brand with less
% efficiency than an index on tyres.brand and owner.name
% If both an index on kilometers and an index on kilometers, speedTests.time exist
% the first index can be removed. MongoDB will always use the compound index.
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Text indexes (1)}
\begin{block}{Index strings and query index using text}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "owner.surname": text,
"owner.name": text } )
// Query the index for the string Richie.
db.cars.find( { $text: {
$search: "Richie" } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Text indexes (2)}
\begin{block}{Index strings and query index with multiple strings}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "owner.surname": text,
"owner.name": text } )
// Space is interpreted as logical OR.
db.cars.find( { $text: {
$search: "Richie Kenningham Aho" } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Text indexes (3)}
\begin{block}{Index strings and query index with exclusion criteria}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "owner.surname": text,
"owner.name": text } )
// Bar is interpreted as negation.
db.cars.find( { $text: {
$search: "Richie Kenningham -Aho" } } )
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Text indexes (4)}
\begin{block}{Index strings, query index, and get relevance score}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "owner.surname": text,
"owner.name": text } )
// Bar is interpreted as negation.
db.cars.find( {
$text: { $search: "Richie" },
score: { $meta: "textScore" }
} )
\end{lstlisting}
\end{block}
% score is a new field
% $meta is an operator to query search metada.
% textScore is one such kind of metadata that reflects the
% relevance of the document to the search criteria.
% Additional: sort and limit.
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Text indexes (5)}
\begin{block}{Create an ordinary index and query it with a regular expression}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
db.cars.createIndex( { "sign" : 1 } )
db.cars.find( { sign : { $regex: /^IMN..../ } } )
\end{lstlisting}
% More efficient than a collection scan
% Can be used to query fields without index
% Differences to $text operator?
% I think the text operator is text index specific (and possibly optimized?)
\end{block}
\end{frame}
%%
\section{Data import}
%%
\begin{frame}
\frametitle{Data import (1)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Mongo command} & \bfseries{Description} \\
\hline
\texttt{mongoimport -d \textless dbName\textgreater{} -c \textless collectionName\textgreater{} -t \textless json | csv | tsv\textgreater{} --file \textless filename\textgreater{}} & Import data with mongoimport \\
\texttt{--host, -h} & Specify remote host \\
\texttt{--port} & Specify port \\
\texttt{--username, -u} & Provide username \\
\texttt{--password, -p} & Provide password \\
\texttt{--db, -d} & Specify the database to import data \\
\hline
\end{tabular}
\end{frame}
%%
\begin{frame}
\frametitle{Data import (2)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Mongoimport option} & \bfseries{Description} \\
\hline
\texttt{--fields, -f} & Specify a comma-separated list of field names to use as header
when importing CSV or TSV files \\
\texttt{--type, -t} & Specify the type of file that contains the data to be imported; this can be
JSON, CSV, or TSV files \\
\texttt{--file} & Specify the file that contains the data to be imported \\
\hline
\end{tabular}
\end{frame}
%%
\begin{frame}
\frametitle{Data import (3)}
\begin{tabular}{p{12em}p{12em}}
\hline
\bfseries{Mongoimport option} & \bfseries{Description} \\
\hline
\texttt{--headerline} & In case of CSV or TSV files treat the first line as a header \\
\texttt{--upsert} & Update existing objects if matched \\
% comparison base is the _id field.
\texttt{--jsonArray} & Import data as multiple MongoDB documents within a single JSON array \\
\texttt{--stopOnError} & Stop import if error happens \\
\hline
\end{tabular}
\end{frame}
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (4)}
\begin{block}{Use mongoimport to import JSON data}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -d CarDB -c cars --file cars.json
mongoimport -d CarDB -c cars -t json --file cars.json
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (5)}
\begin{block}{Use mongoimport to import JSON data expressed in the form of an array}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -d CarDB -c cars --file cars.json --jsonArray
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (6)}
\begin{block}{Use mongoimport to import CSV data}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -d CarDB -c cars -t csv
--file /srv/data/cars.csv
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (7)}
\begin{block}{Use mongoimport to import TSV data}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -d CarDB -c cars -t tsv
--file /srv/data/cars.tsv
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (8)}
\begin{block}{Use mongoimport to import incoming data}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -d CarDB -c cars --stopOnError
--dbpath /srv/mongodb
\end{lstlisting}
\end{block}
\end{frame}
%%
%%
\begin{frame}[containsverbatim]
\frametitle{Data import (9)}
\begin{block}{Use mongoimport to import data to remote host}
\lstset{language=SQL,basicstyle=\footnotesize,commentstyle=\color{blue}\textit,
stringstyle=\color{red}\ttfamily,labelstyle=\tiny}
\begin{lstlisting}
mongoimport -h stereo.dmst.aueb.gr --port 28017
-u mfg -p mfgpass -d CarDB -c cars
--file /srv/data/data.json
\end{lstlisting}
\end{block}
\end{frame}
%%
\section{GUI}
%%
\begin{frame}
\frametitle{Humongous (1)}
\begin{itemize}
\item Cloud-based GUI
\item Connect to remote MongoDB instance for free
\item Perform CRUD operations
\end{itemize}
\end{frame}
%%
\end{document}
\chapter{Glossary of technical terms}
% @@@@@ incomplete
\chapter{System Analysis, Design \& Implementation}\label{chap3}
%##########################################
\vspace*{50 ex}
%##########################################
\paragraph*{Outline:} This chapter presents the following:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item Introduction
\item System Analysis
\item Design
\item Implementation
\end{enumerate}
\newpage
\section{Introduction}\label{chap3:intro}
In my proposed project, the \textbf{Social Media Android Application}, a user can create an account, post images with descriptions, send messages, check the status of their friends, see popular news, and share that news.
\section{System Analysis}
The system analysis consists of the following parts.
\subsection{System Objective}
Communication over a network is one field where this tool finds wide-ranging application. The chat application establishes a connection between two or more systems connected over an intranet or an ad-hoc network. It can be used for large-scale communication and conferencing in an organization or a large campus, thus improving co-operation. In addition, it wraps the complex concept of sockets in a user-friendly environment. The software has further potential, such as file transfer, video calling and
voice chat, which can be worked on later.
\subsection{Relation to External Environment}
This tool helps in two major aspects -
\bigskip
\begin{itemize}
\item Resolving the names of all the system connected in a network and enlisting them.
\item Used for communication between multiple systems enlisted in the resolved list.
\end{itemize}
\subsection{ Design Considerations}
\textbf{Approach :}\\
\hfil
\hfil
The tool has been designed using XML \& Android's in-built interface.\\
\noindent
\bigskip
\textbf{Methodology :}\\
The user interacts with the tool using a GUI
\begin{itemize}
\item The GUI operates in two forms, the List form \& the chat form.
\item The List form contains the names of all the systems connected to a network.
\item The chat form makes the actual communication possible in the form of text and images
\end{itemize}
\subsection{ System Architecture}
The chat application works in two forms:
\paragraph{List Form :}
In this form, the names of all the systems connected to the network are listed. These
names can later be selected by touch to start communication, that is, to exchange
texts or images.
\paragraph{Chat form :}
This form is called only when an element is selected from the List form. In this form, a
connection is created between the host system and the selected system with the help of a
socket.
\paragraph{Flow Chart :}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{flow-chart.png}
\caption{\label{img1} Flow Chart}
\end{figure}
\vspace{70ex}
\subsection{ Operational Concepts and Scenarios}
Operation of the application is based on the inputs given by the user:
\paragraph{List Form :}
\begin{itemize}
\item When initialized, it returns a list containing the names of all the systems connected to the network.
\item Contains two sections: Refresh and Connect.
\item The Refresh section automatically refreshes the list of names.
\item When the Connect section is used, or a name is clicked, the chat form is
initialized with a connection between the host and the client machine.
\item Note: if no name is selected and the Connect section is clicked, an error box is displayed.
\end{itemize}
\paragraph{Chat form :}
\begin{itemize}
\item Contains a rich text-view which cannot be edited and only displays the messages exchanged between users, including the user's own messages, as in any chat application.
\item Contains a text-view in which messages are written before being sent across the network.
\item Contains Send buttons for messages and images.
\item When the Send button is clicked, the text in the text-view is encoded in the background
and sent as a packet over the network to the client machine, where the message is
decoded and shown in the rich text-view.
\item To make it more realistic, the user's own message is shown in the rich text-view as well. The
two kinds of messages are distinguished by the identifier name at the beginning of each
message in the rich text-view.
\end{itemize}
\paragraph{Exit :}
The user exits the software in two scenarios:
\begin{itemize}
\item Exiting the chat form, in which case the list form remains intact.
\item Exiting the list form, which happens when the application is closed or the user has logged out.
\end{itemize}
\section{Design}
My design phase includes :
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item Flow Chart
\item Data Flow Diagram
\item Unified Modeling Language(UML)
\item Database Design
\item Implementation
\end{enumerate}
\subsection{Flow Chart}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3]{flow-diagram.png}
\caption{\label{img2} Flow Diagram}
\end{figure}
\noindent
The above diagram is the flow diagram of this thesis. The steps are explained below:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item In the above diagram, the user first enters their details, which are then checked
for correctness. If the details are found to be correct, the user enters his or
her home page activity and performs the required operations.\\
\item Some of the features available to the user are: search news, send friend requests, accept friend requests, chat with friends, see friends' profiles and all users' statuses, find a user in the list of all users, see blog posts, like and comment on posts, and add new blog posts.
\end{enumerate}
\subsection{Data Flow Diagram}
\subsubsection{Use Case Table:}
\noindent
The below diagram is the use case table of this thesis. It is explained as follows:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item In the below diagram, level 1 lists the features of my proposed application.\\
\item Level 2 explains what each section of level 1 does.\\
\item Level 3 explains who manages each section.\\
\item In the diagram, Admin means the database administrator.
\end{enumerate}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{user-table.png}
\caption{\label{img3} Use Case Table}
\end{figure}
\subsubsection{Authentication System :}
The below diagram is the authentication diagram of this thesis. It is explained as follows:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item In my proposed application, a user can register with a unique, valid email address, a name, and a secure password.\\
\item A user can log in with their unique email address and password.\\
\item A user can also log out.
\end{enumerate}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{auth.png}
\caption{\label{img4} Use Case Diagram of Authentication System}
\end{figure}
\noindent
\subsubsection{Contacts Form :}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.9]{friends.png}
\caption{\label{img5} Use Case Diagram of Contacts Form}
\end{figure}
\vspace{50ex}
\subsubsection{Chat Form :}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{chat-diagram.png}
\caption{\label{img6} Use Case Diagram of Chat Form}
\end{figure}
\subsubsection{Maintenance :}
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{maintenance.png}
\caption{\label{img7} Use Case Diagram of Maintenance}
\end{figure}
\vspace{50ex}
\subsection{UML Class Diagram}
Here is a Unified Modeling Language (UML) class diagram for this thesis, which consists of four classes: the User class, the Account class, the Blog\_Post class, and the Individual\_chat class. It shows the composition relationships between the User class and the other classes.
\begin{figure}[!ht]
\centering
\includegraphics[scale=1]{project-uml.png}
\caption{\label{img8} UML Class diagram}
\end{figure}
\vspace{10ex}
\subsection{Database Design}
Here is the screenshot of my existing database...
\bigskip
\noindent
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3]{databaseone.png}
\caption{\label{img8} Firebase users database text messages list}
\end{figure}
\vspace{10ex}
The above diagram shows the real-time database storage, which contains the following information:\\
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item Users: user details such as the device token, user name, profile image URL, thumb-image URL, user status and online status.\\
\item FRIENDS: contains each user's friends list.\\
\item FRIENDS\_REQUEST: contains the type of each friend request, sent or received.\\
\item message: contains each user's message history with their friends.\\
\item notifications: when a user sends a friend request, information about the request is saved here.\\
\item Chat: contains the online or offline status of users.
\end{enumerate}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3]{databasetwo.png}
\caption{\label{img9} Firebase users authentication list}
\end{figure}
\noindent
The above diagram is the authentication diagram, which contains the following information:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item The user's valid email address.\\
\item The user's password, which must be longer than 5 characters.\\
\end{enumerate}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3]{databasethree.png}
\caption{\label{img10} Firebase storage folders list}
\end{figure}
\noindent
The below diagram is the storage diagram, which contains the following information:
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item message\_Images contains all chat-related images.\\
\item Profile\_Images contains thumb-images of all users.\\
\item Picture\_images contains images of all users.\\
\item post\_images contains blog post images.
\end{enumerate}
\section{Implementation}
\subsection{Functional Requirements}
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item User Registration: \\The user must be able to register for the application with a valid email address and password. On installing the application, the user must be prompted to register with an email address, a password and their name. If the user skips this step, they will not be able to use the app. The user's email address will be the unique identifier of their account in this application.\\
\item Send Message: \\The user should be able to send instant messages to any of their friends, and the user is notified that a friend is online by a green indicator image.\\
\item Broadcast Message: \\ The user should be able to post images with a description and to broadcast
the post to all users.\\
\item Message Status: \\ The user must be able to know when a message arrived and how old it is.
\end{enumerate}
\subsection{Non Functional Requirements}
\begin{enumerate}
\setlength{\itemsep}{-0.3em}
\item Privacy: \\ Messages shared between users should be encrypted to maintain privacy.\\
\item Robustness: \\ In case the user's app crashes, a backup of their chat history must be
stored on remote database servers to enable recoverability.
\\
\item Performance: \\ The application must be lightweight and must send messages instantly.\\
\end{enumerate}
%\section{Summary}
%In this chapter...
\subsection{Choosing Ewald Sum Variables}
\label{ewaldoptim}
\subsubsection{Ewald sum and SPME}
This section outlines how to optimise the accuracy of the Ewald sum
parameters for a given simulation. In what follows the directive {\bf
spme} may be used anywhere in place of the directive {\bf ewald} if
the user wishes to use the Smoothed Particle Mesh Ewald
\index{Ewald!SPME} method.
As a guide\index{Ewald!optimisation} to beginners \D{} will calculate
reasonable parameters if the {\bf ewald precision} directive is used
in the CONTROL file (see section \ref{controlfile}). A relative error
(see below) of 10$^{-6}$ is normally sufficient so the directive
\vskip 1em
\noindent
{\bf ewald precision 1d-6}
\vskip 1em
\noindent
will cause \D{} to evaluate its best guess at the Ewald parameters
$\alpha$, {\tt kmax1}, {\tt kmax2} and {\tt kmax3}.
(The user should note that this represents an {\em estimate}, and there
are sometimes circumstances where the estimate can be improved
upon. This is especially the case when the system contains a strong
directional anisotropy, such as a surface.) These four parameters
may also be set explicitly by the {\bf ewald sum } directive in the
CONTROL file. For example the directive
\vskip 1em
\noindent
{\bf ewald sum 0.35 6 6 8}
\vskip 1em
\noindent
would set $\alpha= 0.35$ \AA$^{-1}$, {\tt kmax1} = 6, {\tt kmax2 = 6}
and {\tt kmax3 } = 8. The quickest check on the accuracy of the Ewald
sum\index{Ewald!summation} is to compare the Coulombic energy ($U$) and the coulombic virial
($\cal W$) in a short simulation. Adherence to the relationship $U =
-{\cal W}$ shows the extent to which the Ewald sum\index{Ewald!summation} is correctly
converged. These variables can be found under the columns headed {\tt
eng\_cou} and {\tt vir\_cou} in the OUTPUT file (see section
\ref{outputfile}).
The remainder of this section explains the meanings of these
parameters and how they can be chosen. The Ewald
sum\index{Ewald!summation} can only be used in a three dimensional
periodic system. There are three variables that control the accuracy:
$\alpha$, the Ewald convergence parameter; $r_{\rm cut}$ the real
space forces cutoff; and the {\tt kmax1,2,3} integers \footnote{{\bf
Important note:} For the SPME method the values of {\tt kmax1,2,3}
should be double those obtained in this prescription, since they
specify the sides of a cube, not a radius of convergence.} that
effectively define the range of the reciprocal space sum (one integer
for each of the three axis directions). These variables are not
independent, and it is usual to regard one of them as pre-determined
and adjust the other two accordingly. In this treatment we assume that
$r_{\rm cut}$ (defined by the {\bf cutoff} directive in the CONTROL
file) is fixed for the given system.
The Ewald sum splits the (electrostatic) sum for the infinite,
periodic, system into a damped real space sum and a reciprocal space
sum. The rate of convergence of both sums is governed by $\alpha$.
Evaluation of the real space sum is truncated at $r=r_{\rm cut}$ so it
is important that $\alpha$ be chosen so that contributions to the real
space sum are negligible for terms with $r>r_{\rm cut}$. The relative
error ($\epsilon$) in the real space sum truncated at $r_{\rm cut}$ is
given approximately by\index{Ewald!optimisation}
\begin{equation}
\epsilon \approx {\rm erfc}(\alpha r_{\rm cut})/r_{\rm cut}
\approx \exp[-(\alpha.r_{\rm cut})^2]/r_{\rm cut} \label{relerr}
\end{equation}
The recommended value for $\alpha$ is 3.2/$r_{\rm cut}$ or greater
(too large a value will make the reciprocal space sum very slowly
convergent). This gives a relative error in the energy of no greater
than $\epsilon = 4\times 10^{-5}$ in the real space sum. When using
the directive {\bf ewald precision} \D{} makes use of a more sophisticated
approximation:
\begin{equation}
{\rm erfc}(x) \approx 0.56 \exp(-x^2)/x
\end{equation}
to solve recursively for $\alpha$, using equation \ref{relerr} to give
the first guess.
The relative error in the reciprocal space term is approximately
\begin{equation}
\epsilon \approx \exp(- k_{max}^2/4\alpha^2)/k_{max}^2
\end{equation}
where
\begin{equation}
k_{max} = \frac{2\pi}{L}~{\tt kmax}
\end{equation}
is the largest $k$-vector considered in reciprocal space, $L$ is the
width of the cell in the specified direction and {\tt kmax} is an integer.
For a relative error of $4\times 10^{-5}$ this means using $k_{max}
\approx 6.2 \alpha$. {\tt kmax} is then
\begin{equation}
{\tt kmax} > 3.2~L/r_{\rm cut}
\end{equation}
In a cubic system, $r_{\rm cut}~=~L/2$ implies ${\tt kmax}~=~7$. In
practice the above equation slightly overestimates the value of {\tt
kmax} required, so optimal values need to be found experimentally. In
the above example {\tt kmax}~=~5 or 6 would be adequate.
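As a purely illustrative example (the numbers are not taken from any
particular system), consider a cubic cell with $L = 18$~\AA\ and
$r_{\rm cut} = 9$~\AA. The rules above give
\begin{equation}
\alpha \approx 3.2/9 \approx 0.36~\mbox{\AA}^{-1}, \hspace{2em}
{\tt kmax} > 3.2 \times 18/9 = 6.4
\end{equation}
so one would start from ${\tt kmax1}={\tt kmax2}={\tt kmax3}=7$ and then
check experimentally whether smaller values still reproduce the correct
Coulombic energy and virial.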
If your simulation cell is a truncated octahedron or a rhombic
dodecahedron then the estimates for the {\tt kmax} need to be
multiplied by $2^{1/3}$. This arises because twice the normal number
of $k$-vectors are required (half of which are redundant by symmetry)
for these boundary contributions \cite{smith-93b}.
If you wish to set the Ewald parameters manually (via the {\bf ewald
sum} or {\em spme sum} directives) the recommended approach is as follows\index{Ewald!optimisation}. Preselect the
value of $r_{\rm cut}$, choose a working value of $\alpha$ of about
$3.2/r_{\rm cut}$ and a large value for the {\tt kmax} (say 10 10 10
or more). Then do a series of ten or so {\em single} step simulations
with your initial configuration and with $\alpha$ ranging over the
value you have chosen plus and minus 20\%. Plot the Coulombic energy
(and $-{\cal W}$) versus $\alpha$. If the Ewald sum\index{Ewald!summation} is correctly
converged you will see a plateau in the plot. Divergence from the
plateau at small $\alpha$ is due to non-convergence in the real space
sum. Divergence from the plateau at large $\alpha$ is due to
non-convergence of the reciprocal space sum. Redo the series of
calculations using smaller {\tt kmax} values. The optimum values for
{\tt kmax} are the smallest values that reproduce the correct
Coulombic energy (the plateau value) and virial at the value of
$\alpha$ to be used in the simulation.
Note that one needs to specify the three integers ({\tt kmax1, kmax2,
kmax3}) referring to the three spatial directions, to ensure the
reciprocal space sum is equally accurate in all directions. The values
of {\tt kmax1}, {\tt kmax2} and {\tt kmax3} must be commensurate with
the cell geometry to ensure the same minimum wavelength is used in all
directions. For a cubic cell set {\tt kmax1} = {\tt kmax2} = {\tt
kmax3}. However, for example, in a cell with dimensions $2A = 2B = C$
(ie. a tetragonal cell, longer in the c direction than the a and b
directions) use 2{\tt kmax}1 = 2{\tt kmax}2 = ({\tt kmax}3).
If the values for the {\tt kmax} used are too small, the Ewald sum\index{Ewald!summation} will
produce spurious results. If values that are too large are used, the
results will be correct but the calculation will consume unnecessary
amounts of {\em cpu} time. The amount of {\em cpu} time increases with
${\tt kmax1}\times{\tt kmax2} \times {\tt kmax3}$.
\subsubsection{Hautman Klein Ewald Optimisation}
Setting the HKE \index{Ewald!Hautman Klein} parameters can also be
achieved rather simply, by the use of a {\bf hke precision}
directive in the CONTROL file e.g.
\vskip 1em
\noindent
{\bf hke precision 1d-6 1 1}
\vskip 1em
\noindent
which specifies the required accuracy of the HKE convergence
functions, plus two additional integers; the first specifying the
order of the HKE expansion ({\tt nhko}) and the second the maximum
lattice parameter ({\tt nlatt}). \D{} will permit values of {\tt nhko}
from 1-3, meaning the HKE Taylor series expansion may range from
zeroth to third order. Also {\tt nlatt} may range from 1-2, meaning
that (1) the nearest neighbour, and (2) and next nearest neighbour,
cells are explicitly treated in the real space part of the Ewald
sum. Increasing either of these parameters will increase the accuracy,
but also substantially increase the cpu time of a simulation. The
recommended value for both these parameters is 1 and if {\em both} these
integers are left out, the default values will be adopted.
As with the standard Ewald and SPME methods, the user may set alternative
control parameters with the CONTROL file {\bf hke sum} directive e.g.
\vskip 1em
\noindent
{\bf hke sum 0.05 6 6 1 1}
\vskip 1em
\noindent
which would set $\alpha=0.05~$\AA$^{-1}$, {\tt kmax1 = 6}, {\tt kmax2 =
6}. Once again one may check the accuracy by comparing the Coulombic
energy with the virial, as described above. The last two integers
specify, once again, the values of {\tt nhko} and {\tt nlatt}
respectively. (Note it is possible to set either of these to zero in
this case.)
Estimating the parameters required for a given simulation follows a
similar procedure as for the standard Ewald method (above), but is
complicated by the occurrence of higher orders of the convergence
functions. Firstly a suitable value for $\alpha$ may be obtained when
{\tt nlatt}=0 from the rule: $\alpha=\beta/r_{cut}$, where $r_{cut}$
is the largest real space cutoff compatible with a single MD cell and
$\beta$=(3.46,4.37,5.01,5.55) when {\tt nhko}=(0,1,2,3)
respectively. Thus in the usual case where {\tt nhko}=1, $\beta$=4.37.
When {\tt nlatt}$\ne$0, this $\beta$ value is multiplied by a factor
$1/(2*nlatt+1)$.
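As a purely illustrative example, for $r_{cut}=9~$\AA\ with {\tt nhko}=1
and {\tt nlatt}=1 this prescription gives
\begin{equation}
\alpha = \frac{4.37}{(2 \times 1 + 1) \times 9} \approx 0.16~\mbox{\AA}^{-1}
\end{equation}
which, as before, should be checked by comparing the Coulombic energy
and virial.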
The estimation of {\tt kmax1,2} is the same as that for the standard
Ewald method above. Note that if any of these parameters prove to be
insufficiently accurate, \D{} will issue an error in the OUTPUT file,
and indicate whether it is the real or reciprocal space sums that is
questionable.
\documentclass[12pt]{article}
\usepackage{url}
\begin{document}
\begin{enumerate}
\item
Replace the modules for managing resources by threading a
resource structure through the code.
\item
Use your own functions to write error messages and debug messages.
Use gcc to check the correctness of the format string; see also
the e-mail from rhomann, 5.10.2004.
\item
Use basic object oriented style as suggested by Gordon.
\item
Always wrap up the data structures, even in cases where an error occurs.
\item
Never use printf as an output, but a callback function.
\item
Implement a streaming approach to generate the output.
\item
Use variable number of arguments for debug, see
\url{http://gcc.gnu.org/onlinedocs/gcc-4.0.0/gcc/Variadic-Macros.html#Variadic-Macros}
\item
Use the \texttt{typeof} operator to declare variables, for example
\begin{verbatim}
typeof (*x) y;
\end{verbatim}
declares \texttt{y} with the type of the object that \texttt{x} points to.
See \url{http://gcc.gnu.org/onlinedocs/gcc-4.0.0/gcc/Typeof.html#Typeof}.
\item
The part of the program managing resources should push references to all
resources onto a stack. If an error occurs, all resources on the stack
should be freed. This would lead to an appropriate behaviour in the error case.
\item
Maybe it is better to use Gordon's style of programming.
\end{enumerate}
\section{The following useful classes exist in Gordon's implementation}
\begin{enumerate}
\item
\texttt{Alpha} as replacement for \texttt{Alphabet}. However, mapping
of arbitrary alphabets is not supported. One should perhaps add functions
to output an entire sequence of symbols. It would also be nice to
have functions which encode a stream of symbols.
\item
\texttt{Bioseq} as replacement for \texttt{Multiseq}. An example is needed to
see how it works. What about the performance? Reimplement this on the
basis of encodedsequences.
\item
\texttt{Fasta-reader}: Reading fasta files.
\item
\texttt{Dlist}: Doubly linked list.
\item
\texttt{error}: Error functions.
\item
\texttt{hashtable}: Hash table
\item
\texttt{IO}: input/output
\item
\texttt{Mergesort}: with buffer
\item
\texttt{option}: Option parsing
\item
\texttt{progressbar}: progress bar
\item
\texttt{scorefunction}
\item
\texttt{scorematrix}
\end{enumerate}
\end{document}
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{csquotes}
\usepackage{array,multirow,graphicx}
\usepackage{float}
\usepackage{multicol}
\usepackage[hidelinks]{hyperref}
\graphicspath{{./assets/}}
\def\checkmark{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;}
\title{An Ethical Exploration of Email-based Intelligent Virtual Assistants for CreaTe}
\date{Reflection, June 2020}
\author{Anand Chowdhary, Creative Technology BSc, University of Twente}
\begin{document}
\pagenumbering{gobble}
\maketitle
\tableofcontents
\newpage
\pagenumbering{arabic}
\section{Introduction}
Last year, over 100 billion emails were sent every day, according to the Email Statistics Report. A very common use case of sending emails in a work environment is to set up appointments for in-person or virtual meetings. However, several email exchanges are required in order to find a suitable time and place where all parties are available, and this back-and-forth calendar conflict resolution wastes 17 minutes per meeting on average, according to X.AI's Dennis Mortensen. This can add up to a significant waste of time (over 100 hours per year), which can be otherwise used for productive work.
For some professionals, scheduling these meetings manually can feel like a ``frustrating distraction from the things that matter" \cite{cranshaw_calendar.help:_2017}, so much so that they hire assistants to help with the task. However, not everyone can afford full-time assistants and will therefore turn to software solutions. Intelligent virtual assistants over email built using machine learning can help by automating these scheduling messages. An AI assistant can access a user's calendar, find empty meeting slots based on the user's location and scheduling preferences, and then send and respond to emails on their behalf.
In this graduation project, I aim to design and develop an AI-powered intelligent virtual assistant that automates the email scheduling process. Professionals will be able to use the web interface of the assistant service to solve the ``time waste" problem when it comes to appointment scheduling. In the future, the capabilities of the assistant can be extended to essentially everything a human assistant can do, from sending outbound marketing emails and following up with coworkers on tasks, to automating other parts of the professional's life.
Of course, the product also has several ethical implications and possible social disruptions, such as job loss for secretaries by encouraging automation and ensuring data and privacy protection, apart from answering the main ethical conundrum — whether end users who receive emails from the service are informed that the email is written by an AI assistant, not a real human.
\section{Ethical Toolkit}
The Ethical Toolkit published by Santa Clara University's Markkula Center for Applied Ethics is used as a framework for ethical analysis in this section. Although it is an opinionated implementation, the core principles underlying this framework are essentially universal. For example, the ACTWith Model, an information-processing model of moral cognition as cognitive agency, offers a similar framework with its 2-by-2 matrix of open and closed states, highlighting the same important focus areas \cite{white_understanding_2010}. The core idea is to gain empathy towards stakeholders and foster a positive feedback loop.
The Toolkit is also very useful during the product development phase, as showcased in the Design and Development stages of the following integration flowchart. It is not only useful for ethical analysis: laying it on top of the product development cycle shows my workflow, which aims to combine the product development lifecycle with this Toolkit.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{cycle.jpg}
\caption{Application of the Ethical Toolkit}
\label{fig:checkbox}
\end{figure}
\subsection{Ethical Risk Sweeping}
The first tool, ethical ``risk-sweeping", focuses on seeking to understand the moral risks that can be created or exacerbated by the choices in building and deploying this thesis project. Learning from history, there have been countless times when failure to identify such risks has resulted in real harm.
In this case, there can be ethical risks that arise, such as:
\begin{enumerate}
\item Authorship: Should recipients be made aware that the email is sent by an automated agent, not a human assistant?
\item Job Loss: Using digital agents that can perform a human assistant's job more accurately, faster, and more inexpensively will lead to job loss
\item Gender Bias: Assistant names like Alexa and Siri reinforce and spread gender bias and imply that only women should do such roles
\item Technical Risks: Since personal information, including addresses and phone numbers, may be stored by users, data protection must be taken very seriously, and bank-grade security measures must be implemented
\end{enumerate}
These themes are discussed in detail in the following sections.
\subsubsection{Authorship: ``Sent by an AI"}
There are also several ethical questions that arise about whether recipients should know that an email was sent to them by a virtual agent, not a person. This section discusses the importance of authorship in ethical interaction design, and how the product in this thesis tackles those challenges.
\paragraph{Automated Journalism}
If you read major American newspapers such as \emph{Forbes} or the \emph{Los Angeles Times}, chances are you've already read a story entirely written by an AI-powered software system. This process is known as automated journalism, and highlights an important question about authorship: \emph{Who is the author of an article written by a virtual agent?} A 2005 study found that research participants attribute story credit to the programmers who developed the AI or the news organization publishing the story \cite{montal_i_2017}.
However, this is not the full picture. Since machine learning models require large amounts of training data that the agent ``learns" from (and, in some cases depending on the implementation, the output is largely inspired by the training data), the human authors of said data are also worthy contenders for the credit.
As another example, \emph{Summly}, a London-based startup founded in 2011, was acquired by \emph{Yahoo!} for \$30 million in 2013. As the name suggests, \emph{Summly} specialized in summarizing news sources using AI-powered software. Using ``thousands of sources", users could read algorithmically generated summaries of news stories in their app. If a human was summarizing a news story, say from 20 paragraphs to 2 paragraphs, we wouldn't give them credit over the news story. In this case, the AI is essentially just a summarizing tool, except that it uses multiple, sometimes hundreds of, different sources for each story. I would argue that the published summary should still be credited to the hundreds of authors of the data the AI used to generate the summaries.
\paragraph{Crowdsourced Authorship}
Google's \emph{reCAPTCHA} is a popular challenge-response test for websites based on CAPTCHA (completely automated public Turing test to tell computers and humans apart), and is used by millions of websites because it's a simple and effective way to prevent spam or DDoS attacks \cite{poongodi_intrusion_2019}. Unbeknownst to most users, one of the two words entered by users was used to transcribe old books in the public domain \cite{january_2018_captcha_nodate}. Once all books in Google's collection were accurately transcribed by the public, the focus was shifted to transcribing text in photos from Google Street View in 2012. By 2014, almost all challenges were used to train Google's AI for use cases such as better Google Image Search results, more accurate Google Maps navigation, and object detection in Google Photos \cite{noauthor_im_nodate}.
Although Section 3(d) of \emph{reCAPTCHA}'s Terms of Service ensures that users allow ``sending that data to Google for analysis", an argument similar to the automated-journalism one can be presented: Should end users, i.e., the public, receive partial credit for the thousands of human hours that have been spent on training Google's AI? Furthermore, should people be compensated? From a legal standpoint, Google has its tracks covered, but this does showcase the ethical dilemma of virtual agents generating content and what happens when there is no clear authorship.
\paragraph{Authorship and Credibility}
The website \emph{ThisPersonDoesNotExist.com} shows you a headshot of an AI-generated person using a generative adversarial network (GAN). To my eyes, this is completely indistinguishable from the profile picture of an actual person. That particular GAN, called StyleGAN, was written by Nvidia and published in \emph{arXiv} \cite{karras_style-based_2019}. Although this is a proof of concept, it opens the door to applications like realistic 3D modeling. On the flip side, AI-generated content can be harmful too. ``While that is exciting, others may fear for the more sinister uses for the technology such as contributing to DeepFakes, computer-generated images superimposed on existing pictures or videos, that can be used to push fake news" \cite{noauthor_this_2019-1}.
So, why is authorship important? In one word, credibility. If you know the name of the author of an article in a major newspaper or magazine, you can find out more information about them, perhaps by visiting their social media handles. If you have any questions about their work, or found a mistake in their article, you can contact them directly. In the case of an article written by an unnamed virtual agent, the only option is to find the contact information of the publication.
It may be hard to answer the authorship question, but people are almost certain that the quality of articles generated by an AI does not match that of a trained journalist. In a 2020 paper published by the Spain-based \emph{University of Castilla–La Mancha}, a large sample (N=465) of media personnel, professors, students, and journalists concluded that the ``quality of automated news presents some important shortcomings" \cite{calvo-rubio_percepcion_2019}. They did, however, highlight the ``need to bet on a solid training of journalists that integrates the use of emerging technologies".
\paragraph{Authorship Clarity}
In news articles written by virtual agents, there is ``no visible indicator for readers to verify whether an article was written by a robot or human", which raises issues of transparency \cite{dorr_ethical_2017}. Both \emph{Summly} and related software claim the created work as their own, or strongly imply it by not citing individual source authors.
In the case of EIVA, if an AI assistant is impersonating a human assistant to send emails on the professional's behalf, it raises the same ethical question of whether the end user receiving the email should know that it was not written by a human. In my personal opinion, I am completely fine with deceiving recipients on such a trivial authorship question, but I understand that this sets a powerful and potentially harmful precedent for AI authorship.
This is why the ethical toolkit (see Section 3) recommends complete customizability and ownership from the user. Using the EIVA website's settings page, consumers can set preferences about whether they want their assistant to inform end users about the fact that they are virtual agents, not human assistants. By default, this is preselected for all users (see Section 3.1), and is a simple checkbox user interface with the message ``Inform recipients that your assistant is a virtual assistant". This clarity highlights the commitment to personalization that such a product should bring.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{checkbox.png}
\caption{Checkbox user interface}
\label{fig:checkbox}
\end{figure}
\subsubsection{Job Loss}
Just like in other applications of automation software, loss of employment for assistants is an important social disruption that this product unfortunately encourages. In the interest of saving both time and money, companies may choose to deploy AI-powered assistants on an organization-wide level and terminate the employment of all their secretaries in the future.
In a survey of administrative assistants, all but one reported that most of their work was scheduling-related \cite{erickson_assistance:_2008}. Though in the current state, EIVA only focuses on appointment scheduling, this is expected to change in the future as more features are added. Slowly, software agents like EIVA will be able to do more and more of the day-to-day job of an assistant, and will prove to be both faster and more inexpensive.
EIVA cannot currently perform more complex actions like writing emails from scratch (it uses fixed templates), and even when automated agents can, creative work is much harder to automate, though definitely not impossible. This might suggest that other professions, like the journalists discussed in the previous section, are safe. In that same 2020 paper from Spain, the group of journalists and academics debated inconclusively whether technology will have a negative impact on the journalistic labor market, but agreed that journalists should be trained to use modern technologies such as this.
\paragraph{Mass Unemployment}
Taking a more big picture perspective, ``widespread job loss" is considered an effect of smart software bots such as EIVA entering the workforce. According to \emph{McKinsey}, ``depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely" \cite{noauthor_impact_2020}.
From a purely ethical perspective, it may be argued that we should do everything we can to prevent mass unemployment, since it leads to several effects on the physical and mental health of people. A study from the US National Library of Medicine concluded that ``symptoms of somatization, depression, and anxiety were significantly greater in the unemployed than employed" \cite{linn_effects_1985}. However, I firmly believe that this is not an optional evolutionary step that can be delayed; it's a fundamental shift that market forces will determine, regardless of whether it leaves large sections of the population unemployed.
\emph{CGP Grey}'s short film \emph{Humans Need Not Apply} goes a step further and defines the analogy between humans and horses as follows: just like ``mechanical muscles" such as automobiles replaced horses, ``mechanical minds" like software bots (much like EIVA) will replace humans. He formulates that just like horses may have argued that previous technological advancements only made their job easier (like horse wagons), humans currently argue that AI in the workplace will help humans transition to more creative work \cite{noauthor_youtube_nodate}. However, this argument does not work because a ``poetry-driven economy" cannot be sustained; creative work has a popularity effect, which is why only a small percentage of the population can make a living from such jobs. The question policymakers have to answer is: \emph{What are we going to do when we have significant portions of the population unemployed, through no fault of their own?} Perhaps we have to look at answers such as promoting tourism, investing in retraining, exploring financial solutions such as Universal Basic Income, and so on.
\paragraph{Human Assistants}
For the purposes of this thesis project, there is a strong conclusion that as EIVAs become smarter and market adoption increases, there will definitely be a cause-effect relationship with human assistant unemployment. I'd further say that someone who has used an EIVA (and perhaps paid around \$10 per month for it) will not ever want to go back to a human assistant that is slower, makes more mistakes, and costs over 100x more.
Personally, I built a prototype version of EIVA (then called \emph{Ara}) and used it for several months to confirm appointments. In fact, in a 2017 interview titled ``UT student among the top 50 young entrepreneurs" about my listing in the Dutch business newspaper \emph{Het Financieele Dagblad}'s annual list of the 50 most-innovative entrepreneurs and professionals in the Netherlands with \emph{UToday}, \emph{University of Twente}'s independent publication, Jelle Posthuma stated the following in the beginning of the article \cite{noauthor_ut_nodate}:
\begin{displayquote}
It was easy to schedule an interview with the first-year student of Creative Technology. His ``personal assistant" Ara sent a friendly email replying: ``You’ll be welcome to come next Wednesday." Chowdhary is a co-owner of the company Oswald Labs, which develops products for people with disabilities. His office is in Roombeek. ``Anand, you’re working hard: you even have a personal assistant…" A big grin appears on the young student’s face. ``Yes, I built her myself. Ara is a computer. Her AI recognizes emails and schedules my appointments."
\end{displayquote}
Since I have already used a preliminary version of EIVA, it is clear to me how useful it is and how much impact it has on recipients, and I am certain that I would not want to go back to a human assistant for scheduling. Though this is validation for the product, it is also a small proof of the power of automated agents doing human jobs. Therefore, the ethical implications of job loss due to AI are a large consideration when building such tools. Though currently in its infancy, the software bot industry is highly disruptive to the status quo, especially in terms of general population employment.
\subsection{Ethical Pre- and Post-mortems}
While the first tool focuses on individual risks, the second is about avoiding systemic ethical failures. This is very important because, as discussed above, historical precedent suggests that cascade effects are common: multiple failures together add up to catastrophic ethical problems, even though any one of them individually would not be significant cause for concern.
The idea behind a pre-mortem is to exercise moral imagination to analyze potential ethical disasters without waiting for them to occur.
The first question to be answered is \emph{How could EIVA fail for ethical reasons?} To answer this, we must take a closer look at some of the underlying principles, like the protection of personal data. Similarly, \emph{What systems can we put in place to reduce failure risk?} can be answered with the perspective of protecting sensitive information using standards like encryption. The issue of privacy is further discussed in the following sections.
\subsection{Expanding the Ethical Circle}
As discussed in the previous section, the scope of harm has not been well-understood previously, during the ethical and product development stage. This is in part because of the fact that designers and engineers may ignore the interests of certain stakeholders, or at least don't consider them sufficiently.
According to Vallor, Green, and Raicu, ``To mitigate these common errors, design teams need a tool that requires them to ‘expand the ethical circle’ and invite stakeholder input and perspectives beyond their own."
\subsubsection{Identifying Key Stakeholders}
The basic stakeholders are the users who will interact with EIVA -- the consumers of the assistant service, and the recipients of emails sent by the assistant. However, these two highly interested groups have very different powers: one controls the behavior of the assistant and the other merely interacts with it.
Therefore, although it's important to consider the input of each stakeholder, it can be useful to prioritize stakeholders based on how much interest they have in the product, and how much power they have over the designers and engineers building the product. The Power-Interest Grid (Figure 3) can be a useful tool to prioritize these groups into four categories \cite{noauthor_stakeholder_nodate}.
\begin{enumerate}
\item \textbf{High power, highly interested people} (Manage Closely): we must fully engage these people, and make the greatest efforts to satisfy them:
\begin{enumerate}
\item Users who are using EIVA and have their own assistant that schedules appointments
\item External client who is funding and invested in this research project
\end{enumerate}
\item \textbf{High power, less interested people} (Keep Satisfied): put enough work in with these people to keep them satisfied, but not so much that they become bored with your message
\begin{enumerate}
\item Companies that might not directly care about EIVA, but do care about the difference between employing human assistants and using EIVA
\end{enumerate}
\item \textbf{Low power, highly interested people} (Keep Informed): adequately inform these people, and talk to them to ensure that no major issues are arising. People in this category can often be very helpful with the detail of your project
\begin{enumerate}
\item Users who are recipients in emails sent by EIVA; they don't have the power to control or customize the assistant, but are directly communicating with it
\item Human personal assistants whose employment may feel threatened
\item People who have signed up to start using the EIVA service on launch
\end{enumerate}
\item \textbf{Low power, less interested people} (Monitor): again, monitor these people, but don’t bore them with excessive communication
\begin{enumerate}
\item People who have participated in the research and are unsure whether they would want to use the service in the future
\end{enumerate}
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{power-interest.jpg}
\caption{Power-Interest Grid}
\label{fig:checkbox}
\end{figure}
These groups of stakeholders can also be divided based on the factor they are interested in and impacted by \cite{mendelow_environmental_1981}. The table below highlights these key stakeholder groups and their impact interest areas:
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& \multicolumn{4}{|c|}{\textbf{Impact Interest Areas}} \\
\hline
\textbf{Stakeholder Group} & \rotatebox[origin=c]{90}{\textbf{ Job Loss }} & \rotatebox[origin=c]{90}{\textbf{ EIVA Sales }} & \rotatebox[origin=c]{90}{\textbf{ EIVA Features }} & \rotatebox[origin=c]{90}{\textbf{ EIVA Reliability }} \\
\hline
Consumers using EIVA & & & \checkmark & \checkmark \\
\hline
Users interacting with EIVA & & & & \checkmark \\
\hline
External client & & \checkmark & \checkmark & \checkmark \\
\hline
Human assistants & \checkmark & & \checkmark & \\
\hline
Companies & \checkmark & & & \checkmark \\
\hline
\end{tabular}
\end{center}
\subsection{Case-based Analysis}
EIVA is perhaps best-described as a combination of several creative and powerful solutions for scheduling from the past -- sending emails, calendar invitations, and AI assistants. As such, it also inherits the ethical risk and lessons from these products.
\subsubsection{Privacy}
The privacy precedent that most virtual assistants have set is not highly positive. Personal assistants in the form of smart speakers like Amazon Echo (running Alexa) and Google Home (running Assistant) have ``begun to significantly alter people’s everyday experiences with technology" \cite{pridmore_personal_2020}. These virtual assistants can continually improve with increased usage using deep learning \cite{kepuska_next-generation_2018}. Although this makes the assistants more useful over time, they also ``amplify the overall debate about privacy issues" \cite{zeng_end_2017}.
\paragraph{Amazon Echo}
For example, an Echo device unintentionally recorded a Portland family's private conversations and shared it with a random person from their contact list \cite{noauthor_this_nodate}. A prime example of postmortems is Amazon's response, ``We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future," which suggests that this may have been prevented if the device was tested better.
For EIVA, the learning is simple: make sure no personal information is recorded if it is not strictly necessary, and users should be able to configure what they want to share. All privacy-compromising applications, regardless of how important they may be, should be opt-in, not opt-out.
\subsubsection{Gender Bias}
There is often visible gender bias when it comes to personal assistants or secretaries \cite{noauthor_why_2018}. According to \emph{CNN}, the rise of the secretary began with the increased paperwork during the Industrial Revolution; ``the job became popular in the 1950s, when 1.7 million women were `stenographers, typists or secretaries'" \cite{noauthor_its_2013}. This sexist ``tradition" has become so widespread that there have been a large number of cases of executives requesting female blonde assistants. ``There is definitely a problem when an employer expects their new hire to look a certain way or assumes that everyone working in support is female. `No, I definitely wouldn’t consider a male PA' -- that comment is ubiquitous" \cite{williams_secretaries_2016}.
Historically, these were executive assistants, but modern technologies have translated them to virtual assistants as well. When asked about why Cortana is female, a \emph{Microsoft} spokesperson said Cortana can technically be genderless, but the company did immerse itself in gender research when choosing a voice and weighed the benefits of a male and female voice \cite{pcmag_real_2018}, ``for our objectives — building a helpful, supportive, trustworthy assistant — a female voice was the stronger choice".
Apple's Siri and Google Assistant offer the option of changing the voice to male, but Alexa and Cortana don't have male counterparts. The first United Nations' examination of gendering of AI technology found that ``gender imbalances in the digital sector can be `hard-coded' into technology products" \cite{noauthor_id_nodate}. In that report, the following problems are highlighted:
\begin{itemize}
\item Digital assistants reflect, reinforce and spread gender bias
\item They model acceptance and tolerance of sexual harassment and verbal abuse
\item They send explicit and implicit messages about how women and girls should respond to requests and express themselves
\item They make women the ‘face’ of glitches and errors that result from the limitations of hardware and software designed predominately by men
\item They force synthetic ‘female’ voices and personality to defer questions and commands to higher (and often male) authorities
\end{itemize}
The outdated argument that ``this has always been the way things are" was the same when defending unacceptable historic precedents like slavery. It is unfortunate that this sexist default has slipped into digital assistants as well, but it is not very hard to explore how this can be fixed. The UN report makes 15 recommendations, including ``[ending] the practice of making digital assistants female by default."
For EIVA, this is an important lesson because I've personally made this mistake before. With the intention of sounding friendly, the original name for EIVA was \emph{Ara}, chosen because it was short and memorable. However, the gender role that accompanies this is a big problem, therefore the name was changed to EIVA, which is an acronym for Email-based Intelligent Virtual Assistant, and yet sounds friendly and human-like. However, though the name of the service is EIVA, there is no reason why users cannot select their own names for their assistant.
Unisex names like \emph{Alex} and \emph{Jesse} can be used by end users; the idea is to make the assistant completely customizable. To make this easy, the first setting in the user interface is to select the assistant's name and signature, which users can freely pick.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{name.png}
\caption{Assistant name and signature user interface}
\label{fig:checkbox}
\end{figure}
\subsection{Ethical Benefits of Creative Work}
The previous sections have been focused on discovering ethical errors and trying to mitigate them, but ethical design is not limited to identifying risks; it's about leading to a positive outcome. In the case of EIVA, saving time is in itself a major help for most people: if you have several meetings every week, you can save almost 100 hours a year by switching to EIVA.
Therefore, a workflow must be established to formalize the process of understanding the ethical benefits of this product. The first question to answer is \emph{Why are we doing this, and for what good ends?} In this case, work-life balance is an important factor, and that saved time---although just 15 minutes per meeting---adds up to being more productive at work, and having saved time that can be better used at home or the gym. This is kept in mind during the prototyping and development phase, to make sure people have all the options to customize and personalize the assistant based on their preferences and to maximize time saved.
The next, and perhaps most important, question is \emph{Will our customers truly be better off with an EIVA than without it? Are we trying to generate inauthentic needs or manufactured desires, simply to justify a new thing to sell?} The answer to the second question is clear: people spend a lot of valuable time in scheduling appointments. As discussed in Section 2.1.2, in a survey of administrative assistants, all but one reported that most of their work was scheduling-related \cite{erickson_assistance:_2008}. This strongly suggests that a non-trivial amount of human hours are spent on scheduling, and customers would definitely be better off saving that time. Ideally, an EIVA will not replace the human assistant, but help them do their job better and faster.
Other important questions to keep in mind are \emph{Has the ethical benefit of this technology remained at the center of our work and thinking?} and \emph{What are we willing to sacrifice to do this right?} and these require a continuous evaluation of the ethical values embedded in the product. Though the ethical benefit has been motivated by the financial benefit of saving billable time, and therefore money, and perhaps selling EIVA as a service that does justice to the unit economics of time saved, this does not mean that the ethical benefit is lost.
Furthermore, as discussed in the following section, a significant amount of thought has been given to taking security precautions, such as implementing bank-grade measures to keep sensitive information safe. Values such as these have been a core part of the product, as witnessed by initial testers. Although only 6 people have tested EIVA so far (the research is still in an early testing stage), the average response to the question \emph{How would you rate the privacy features available in the app?} has been 4.85 out of 5 \cite{chowdhary_anandchowdhary/thesis_2020}. This is arguably a higher degree of privacy and security than you would get if you managed appointments using a human assistant.
\subsection{Think About the Terrible People}
Although positive thinking can be a powerful tool to explore the ideal scenarios for a product, ``sometimes what can be a virtue becomes a vice", and in reality, there will always be those who try and use technology for exploitation or unfair personal gain.
\subsubsection{Explorations}
\emph{Who will want to abuse, steal, misinterpret, hack, destroy, or weaponize what we built?} Since the EIVA has access to personal information such as calendar URLs and meeting locations (addresses, email IDs, IM usernames, etc.) of the user, people may be incentivized to sell this information.
\emph{Who will use it with alarming stupidity/irrationality?} People who are not very technically literate and sign up for the service may confuse the intention, and make their personal information available to the public, for example by adding their home address as a location the assistant can recommend for incoming meeting requests.
\emph{What rewards/incentives/openings has our design inadvertently created for those people, and how can we \emph{remove} those rewards/incentives?} Implementing strict security protocols can help make sure unauthorized people don't gain access to personal information. The following section discusses some of the measures taken in building and testing the application presented in this paper.
\subsubsection{Implementations}
In terms of building the webapp and backend APIs, steps have been taken to ensure the highest standards of security. For example, all communication between the frontend app and the backend is encrypted with 256-bit TLS/SSL. All requests made using the insecure HTTP protocol are automatically redirected to the secure HTTPS standard. Furthermore, all collected logs for data analytics are encrypted with the industry-standard AES-256, the same security used by banks to protect your information.
Other common methods of exploitation, such as port scanning, are prevented by only opening up the required ports. In this case, only ports 80/443 (for web traffic), 22 (for SSH), and 19999 (for NetData) are open. All other ports are closed to any incoming traffic by default. Steps are also taken to prevent DDoS attacks like installing Cloudflare's DDoS Protection that shows mandatory CAPTCHA challenges to bad apples.
For API protection, strict rate limiting has been implemented. Web APIs use rate limits to ``prevent unauthorized users to compute service levels with an high confidence while still allowing the creation of useful value-added services" \cite{firmani_computing_2019}. A rate limit of 1,000 requests per minute for authenticated requests, and 60 requests per minute for unauthenticated requests is enforced.
For authorization, the industry-standard access/refresh token methodology is used, with an access token and refresh token expiry duration of 15 minutes and 30 days respectively. This means that if someone gets access to a user's access token (which is interchangeable with the session ID in legacy systems), that token will automatically be invalidated after 15 minutes. Furthermore, even if they get access to the refresh token, users have the option to remotely logout and disable sessions from the web app interface.
To store sensitive information such as passwords, a one-way, undecryptable cryptographically secure hash function is used, \emph{bcrypt}. This means that even if someone gains access to the database through a breach in security, passwords are not stored in plain text and therefore useless to them.
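As a minimal sketch of this approach (illustrative only, not the actual EIVA code; it assumes the Python \emph{bcrypt} package), note that verification never requires recovering the original password:
\begin{verbatim}
import bcrypt

# Hash the password with a per-password salt before storing it (one-way).
password = b"correct horse battery staple"
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# At login, compare the supplied password against the stored hash;
# the plain-text password is never stored or recovered.
assert bcrypt.checkpw(password, hashed)
\end{verbatim}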
Overall, since users may have personal information available to the virtual assistant, many industry-standard security compliance steps have been taken in the development and testing of the product.
\subsection{Ethical Feedback and Iteration}
Technological ethics is an ongoing process, and does not stop when the product is launched or in the hands of consumers. Furthermore, as more and more intelligent systems like EIVA continue to shape society, the ethical impact is always going to be a moving target.
For gathering feedback from stakeholders, several measures are taken. For example, users can log in to the webapp and access the feedback form by clicking on the `Help' icon on the bottom-right corner of each page. This ensures that the system is set up for encouraging feedback and to identify where the feedback is coming from.
\subsubsection{Continuous Integration (CI)}
From a technical perspective, unit tests, end-to-end tests, and a CI process have been set up to ensure that no bad code gets deployed to users. Whenever a new commit is pushed to the central repository, the code is only deployed to the production app when there is a successful build \cite{noauthor_what_2019}. When there is an error in the build (for example, if a test fails), the developers get a notification for mitigation.
Furthermore, error tracking has been enabled using Sentry. Using Sentry, developers can get notifications, including the specific lines of code, when an error is thrown, and users can be prompted for feedback \cite{anser_sentry.io_2017}. Therefore, even if there is an error in the webapp while a user is using it, the Sentry feedback loop ensures that swift code-fixing action is taken.
\subsubsection{Sensible Defaults}
Using the feedback from users, especially for customizable options related to individual privacy, the less secure (more open) options should always be opt-in, not opt-out. The results of the initial survey with users and feedback from stakeholders, combined with the empathic design approach of trying to understand user interests by stepping into their shoes, ensure that we set sensible defaults.
It has been estimated that 95\% of all users stick to the default settings in an application and don't bother to change them \cite{catalanotto_95_2019}. This places an important responsibility in the hands of designers and engineers to ensure that the default options are the most secure. In EIVA, options such as email tracking, directly connecting a calendar, etc., are all optional and opt-in, so the default state is always the most secure.
Other features, like the default email language and error emails, are also based on the results of the survey and are therefore directly a result of the feedback loop. Instead of selecting opinionated defaults, this empathetic approach helps keep users safe most of the time, while adding to the product's utility.
\subsubsection{Feedback Loop}
The core principle in this Toolkit, the same as in the ACTWith Model, is the positive feedback loop that ensures new information takes us back to the drawing board. The particular implementation of this model was explored earlier (see Figure 1), and this feedback loop has been tightly integrated into the product development process.
\bibliography{citations-reflection.bib}
\bibliographystyle{apalike}
\listoffigures
\paragraph{License}
This work is licensed under a \emph{Creative Commons} Attribution 4.0 International License (CC BY 4.0), © 2020 Anand Chowdhary. The full text of the license and the work is available at \url{https://github.com/AnandChowdhary/thesis}.
\end{document}
\section{CIFAR10 example} % Jonathan / b-j8501 / Jul 11 & % b-j8505 / Jul 12
In this section, we will give more details about training a neural network with PyTorch. We will start with the simple CIFAR10 example introduced before, and explain how to modify each component of it.
First of all, the first six lines are just the \emph{import} statements. All they do is bring code from these PyTorch libraries into your program, which allows you to use the classes and functions implemented in each of them.
\begin{python}
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
\end{python}
\subsection{The \emph{`Net'} class}
Every neural network we implement should contain a class like the following. What it does is define a new class called Net, which is a subclass of nn.Module. nn.Module is a class implemented in the package torch.nn. Basically, you don't have to know exactly what the Module class does; the point here is that Net is going to be a subclass of it. Let us go through it step by step.
Any class defined in Python can have an \emph{\_\_init\_\_} function, which runs when an instance of the class is created. Later on in the code, when you write \emph{`net = Net()'}, an instance of the Net class is created, and at that moment the \emph{\_\_init\_\_} function is run. We can see that the \emph{\_\_init\_\_} function creates each of the pieces of the neural network, which are the operators we need: the linear (\emph{nn.Linear}), convolution (\emph{nn.Conv2d}) and max pool (\emph{nn.MaxPool2d}) layers are implemented elsewhere. Roughly speaking, each of them simply contains a set of parameters and a forward function. For example, the \emph{`nn.Linear'} class:
\begin{itemize}
\item contains parameters which consist of a matrix and a vector $(W,b)$,
\item and the `forward' function is
$${\rm forward}(x) = Wx+b.$$
\end{itemize}
Later on, we will talk about what \emph{nn.Conv2d} and \emph{nn.MaxPool2d} do.
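As a minimal sketch (not part of the original example), we can check that an \emph{nn.Linear} layer really does just store $(W,b)$ and compute $Wx+b$:
\begin{python}
import torch
import torch.nn as nn

linear = nn.Linear(3, 2)   # stores a 2x3 weight matrix W and a length-2 bias vector b
x = torch.randn(3)

y_manual = linear.weight @ x + linear.bias   # compute Wx + b by hand
y_forward = linear(x)                        # the same result via the forward function
print(torch.allclose(y_manual, y_forward))   # True
\end{python}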
Then, to make it useful, we need to define the forward function. Basically, it tells you, given this neural network and some input, what the output is. In this particular case, the network first applies the first convolution and a ReLU function. The ReLU function is
$${\rm ReLU}(x) = \max\{0,x\}.$$
It then applies a max pooling layer, followed by the second convolution, another ReLU function and another pooling layer.
Then, what \emph{`x.view'} does is take $x$, which is still two-dimensional (an image), and flatten it out into a vector. The network then applies the first linear layer, a ReLU function, the second linear layer, another ReLU function, and finally the third linear layer, and returns $x$. What the class contains is all the parameters for each of these layers and the forward function, which tells you in which order to apply them. That is what the network consists of.
\begin{python}
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
\end{python}
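As a small side sketch (with tensor sizes assumed to match the network above), this is what \emph{x.view} does to the batch of feature maps before the fully connected layers:
\begin{python}
import torch

x = torch.randn(4, 16, 5, 5)    # a batch of 4 samples with 16 feature maps of size 5x5
x = x.view(-1, 16 * 5 * 5)      # flatten each sample into a vector of length 400
print(x.shape)                  # torch.Size([4, 400])
\end{python}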
\subsection{The `main' function}
What we want to do is load a bunch of data and train this network (all its parameters) on these data, and the \emph{main} function does exactly that. The two lines \emph{trainset} and \emph{trainloader} load the training set, and the two lines \emph{testset} and \emph{testloader} load the test set. The \emph{trainloader} is a class which specifies a particular way of giving you the data. In our case, when we construct the \emph{trainloader}, we pass it the \emph{trainset} (the images) and also a bunch of parameters, like the batch size, shuffle, and number of workers. When you calculate the gradient, you don't use the whole dataset at once, only some of it: the number of data points used to calculate the gradient is the batch size. Because we pass \emph{`batch\_size'} to the \emph{DataLoader}, when we later loop over everything in the \emph{trainloader} (see the for loop in the main function), it returns the images four at a time, since we set `batch\_size=4'. Setting shuffle to true means that the data are returned in a random order without replacement. The same applies to the test set.
The \emph{criterion} is a class containing the loss function: given the output of the neural network and the true label, it applies the loss function. We will talk about the optimizer class in more detail later on. When you construct the \emph{`optim'} class, you have to pass it \emph{`net.parameters'}, which tells the optimizer which parameters need to be optimized, plus other hyperparameters. It has a function called \emph{`step'}: whenever the \emph{`step'} function is called, it modifies the parameters in some way based on whatever the gradients are. The optimizer assumes you have already calculated the gradients and then takes some sort of step based on them. This explains why, when you actually run the training, we have to call \emph{`optimizer.zero\_grad'} and \emph{`loss.backward'} independently of \emph{`optimizer.step'}: \emph{`optimizer.step'} doesn't handle taking the gradient, it assumes that all the gradients have already been computed properly.
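To make this division of labour concrete, here is a rough sketch (not the actual PyTorch implementation, and ignoring momentum) of what a plain SGD \emph{step} and \emph{zero\_grad} amount to, assuming \emph{loss.backward} has already filled in the \emph{.grad} fields:
\begin{python}
import torch

def sgd_step(parameters, lr=0.001):
    # 'step' only uses the gradients that backward() has already stored in p.grad
    with torch.no_grad():
        for p in parameters:
            if p.grad is not None:
                p -= lr * p.grad

def zero_grad(parameters):
    # gradients accumulate across backward() calls, so clear them before the next pass
    for p in parameters:
        if p.grad is not None:
            p.grad.zero_()
\end{python}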
\begin{python}
def main():
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
    net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
\end{python}
\subsection{\emph{`DataLoader'}}
Next, we want to talk about how we can change the DataLoader and how we can make it provide the data samples in a different way. In particular, we have already talked about the difference between sampling with and without replacement. Let us recall that briefly.
Recall the loss function
$$L(\theta) = \sum_{i=1}^n l(x_i,\theta),$$
whose gradient we want to calculate:
$$\nabla L(\theta) = \sum_{i=1}^n \nabla_\theta l(x_i,\theta).$$
Here $n$ can generally be very large, $n \sim 10^4 - 10^7$. In this situation, computing the gradient over all data points is computationally infeasible, so we approximate it by sampling some of the data points.
To reduce this computational cost, stochastic gradient descent does the following:
\begin{itemize}
\item approximates the gradient by considering a sample of data points
$$\{x_{i_1}, \ldots, x_{i_k}\}$$
where $\{i_1, \ldots, i_k\}$ is randomly chosen in each iteration, then
$$\nabla L(\theta) \approx \sum_{l=1}^k \nabla_\theta l(x_{i_l},\theta)$$
\item $k$ is called the mini-batch size (it controls the accuracy, i.e.\ the ``noise", of the sampled gradient).
This is a sampled gradient, containing some noise; a larger mini-batch size gives a more accurate gradient, so the mini-batch size $k$ is an important hyper-parameter.
\item $i_1, \ldots, i_k$ sampled with/without replacement.
\begin{itemize}
\item with replacement: there can be repetitions
\item without replacement: there can't be repetitions
\end{itemize}
\end{itemize}
What we implement is sampling without replacement across the whole epoch: when we randomly choose data points, we cannot choose the same data point again until we have gone through the whole data set.
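As a small sketch of the difference (indices only, with assumed values for $n$ and $k$):
\begin{python}
import torch

n, k = 10, 4                                    # data set size and mini-batch size
with_replacement = torch.randint(0, n, (k,))    # repetitions are possible
without_replacement = torch.randperm(n)[:k]     # indices from a random permutation, no repetitions
print(with_replacement, without_replacement)
\end{python}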
Now, let's see how we can change it. First, go to the PyTorch documentation page\footnote{\url{https://pytorch.org/docs/stable/data.html}}. There you find an explanation of what exactly this class does, and the important thing for us is the list of possible inputs. When we construct a DataLoader, we need to pass it some variables. We always have to pass it a dataset, because there is no default value; in our program, we passed the training set, CIFAR10, but you can also pass something else if you want. All the other variables have default values, and there are options to keep the defaults or to change them to what you want. We can see that some of the defaults were already changed in our program: instead of the default value 1, we set the batch size to 4, and shuffle to true instead of false. Now we want to change the sampling strategy, which means changing some of these other inputs. Looking at the descriptions, you will see a sampler argument, with a default value of none, which defines the sampling strategy. Most likely, if we construct a sampler and pass it to the DataLoader, we can switch between the with- and without-replacement strategies.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{./figures/497Proj_DataLoader}
%\caption{DataLoader}
\end{figure}
Now, we need to look at the Sampler class. If we want to change the sampling strategy, we need to define a subclass of Sampler that contains that strategy (a minimal sketch of such a subclass is given at the end of this subsection).
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{./figures/497Proj_Sampler}
%\caption{}
\end{figure}
In our case, if we want to switch between sampling with and without replacement, somebody has already written the class we need: it turns out to be the RandomSampler.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{./figures/497Proj_RandomSampler}
%\caption{}
\end{figure}
Let's use our program as an example to show how to use a sampler class. First, we need to create an instance of the RandomSampler class and pass it to the DataLoader as the sampler variable. Note that the \emph{sampler} option is mutually exclusive with \emph{shuffle}, so we no longer pass \emph{shuffle=True}.
\begin{python}
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,download=True, transform=transform)
with_replacement_sampler = torch.utils.data.RandomSampler(trainset, replacement=True)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, sampler=with_replacement_sampler, num_workers=2)
\end{python}
Now, if we run this code, the data will be sampled with replacement. This is just to illustrate how you go about changing something about the model or the training process: you look up the classes in the documentation and see what they do. Often, the thing you want to do has already been programmed by someone in a simple way, and you can just turn it on, like we just did; you don't have to implement it yourself. You only need to find out what these classes do and figure out which tool to use. So, just for fun, we recommend trying the CIFAR10 example with and without replacement to see the difference.
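If no ready-made class fit our needs, we could also write the sampler ourselves. Below is a minimal sketch of a with-replacement sampler (roughly what \emph{RandomSampler} with \emph{replacement=True} already provides), just to show what a Sampler subclass needs:
\begin{python}
import torch
from torch.utils.data import Sampler

class WithReplacementSampler(Sampler):
    def __init__(self, data_source):
        self.data_source = data_source

    def __iter__(self):
        n = len(self.data_source)
        # draw n indices uniformly at random; repetitions are allowed
        return iter(torch.randint(0, n, (n,)).tolist())

    def __len__(self):
        return len(self.data_source)

# usage: pass an instance to the DataLoader via the 'sampler' argument
# trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
#     sampler=WithReplacementSampler(trainset), num_workers=2)
\end{python}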
\subsubsection{Codes for the CIFAR10 example}
\begin{python}
# cifar10_example.py
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def main():
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
    net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
\end{python}
\subsection{The training part of CIFAR10 example}
In this section, we will analyze the training part of the code in the CIFAR10 example.
\begin{python}
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')
\end{python}
In particular, we will see what \emph{loss.backward} and \emph{optimizer.step} do.
\subsubsection{\emph{loss.backward}}
Let's start with PyTorch's automatic differentiation. The call \emph{loss.backward} hides some pretty complicated code that automatically figures out how to calculate the gradients of the loss with respect to everything it depends on.
Let's start with some simple example.
\begin{python}
import torch
from torch.autograd import Variable
x = Variable(torch.randn(3,3), requires_grad = True)
\end{python}
\emph{Variable} is a fundamental data type in PyTorch: it wraps a tensor together with some extra information, such as the flag \emph{requires\_grad} used in our example. Setting \emph{requires\_grad} to true tells PyTorch that, whatever computation \emph{x} is later involved in (the computational graph), it should be able to calculate gradients with respect to \emph{x}. A Variable also stores the gradient value itself, which right now is empty; you can check this by running \emph{x.grad}.
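As an aside (not part of the original example): in recent PyTorch versions, \emph{Variable} has been merged into the ordinary tensor type, so the same thing can be written without the wrapper.
\begin{python}
import torch

# A tensor can carry requires_grad directly in modern PyTorch
x = torch.randn(3, 3, requires_grad=True)
print(x.requires_grad)   # True
print(x.grad)            # None: nothing has been backpropagated yet
\end{python}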
Now let's see what happens when we use this Variable x in some calculations.
\begin{python}
y = torch.sum(x)
\end{python}
Here, \emph{torch.sum} sums all the values of x: we create a new variable y that holds the sum of the entries of x, and PyTorch allows us to automatically calculate the gradient of this output with respect to any of the inputs. In symbols, for this extremely simple example,
\begin{equation}
\begin{split}
x &= \left(\begin{array}{ccc}
* &\cdots &* \\
\vdots &\ddots &\vdots \\
* &\cdots &*
\end{array}\right) \\
y &= \left(\begin{array}{c}
1 \\ \vdots \\ 1
\end{array}\right)^\top
x\
\left(\begin{array}{c}
1 \\ \vdots \\ 1
\end{array}\right)
=\sum_{i=1}^3 \sum_{j=1}^3 x_{ij} \\
\frac{{\rm d} y}{{\rm d} x} &= \left(\begin{array}{ccc}
1 &1 &1 \\
1 &1 &1 \\
1 &1 &1
\end{array}\right)
\end{split}
\end{equation}
How do we let PyTorch compute this? Simply call
\begin{python}
torch.autograd.grad(y,x)
\end{python}
This function automatically calculates the derivative of its first argument with respect to the second. Another way, which is more commonly used, is to call
\begin{python}
y.backward()
\end{python}
This function calculates the derivative of y with respect to everything that requires gradients (see the flag \emph{requires\_grad}). In our case, it calculates the gradient of y with respect to x and stores the value in \emph{x.grad}.
These are the two common ways of letting PyTorch calculate gradients. We only gave a simple example with one input and one output, but the same machinery works for much more complicated computations with multiple input variables, and PyTorch can automatically calculate the gradients with respect to all of them.
There is one important thing to note: if we call \emph{y.backward} again, what happens to \emph{x.grad}? Does it stay the same, or does it change? The answer is that, perhaps surprisingly, the new gradient is \emph{added} to whatever is already stored in \emph{x.grad}. This explains why we have to zero the gradients of all of the parameters (see \emph{optimizer.zero\_grad}) before we call \emph{loss.backward} in our code.
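A small standalone experiment (same setup as above) makes this accumulation visible; the exact values of x are random, but the gradient of a plain sum is always a matrix of ones.
\begin{python}
import torch

x = torch.randn(3, 3, requires_grad=True)

y = torch.sum(x)
y.backward()
print(x.grad)        # a 3x3 matrix of ones

y = torch.sum(x)     # rebuild the graph before calling backward again
y.backward()
print(x.grad)        # now a matrix of twos: the gradients were accumulated

x.grad.zero_()       # this is what optimizer.zero_grad() does for each parameter
\end{python}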
\subsubsection{\emph{optimizer.step}}
The \emph{loss.backward} step calculates the gradients of all of our parameters and stores them in their \emph{.grad} fields. Now, let's look at the \emph{optimizer}. This is a class you might want to write yourself if you want to try a new algorithm, so it helps to know how it works.
Notice that the function we are calling here is \emph{optimizer.step}. Suppose, for example, that the optimizer is SGD. After the gradients of all of the parameters have been calculated, calling the step function adds the learning rate times the negative gradient to the current value of each parameter. The \emph{optimizer} class assumes that the gradients have already been calculated and stored in the \emph{.grad} fields; it only decides what to do with them.
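Conceptually, for plain SGD (without momentum), the step function does something like the following rough sketch; this is not PyTorch's actual implementation, just the idea.
\begin{python}
import torch

def sgd_step(parameters, lr):
    # add -lr * grad to every parameter, outside of the computational graph
    with torch.no_grad():
        for p in parameters:
            if p.grad is not None:
                p -= lr * p.grad
\end{python}
After \emph{loss.backward} has filled the \emph{.grad} fields, calling \emph{sgd\_step(net.parameters(), 0.001)} would perform one update of all parameters.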
Now, let us use logistic regression as an example.
\begin{python}
# lr_example.py
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class LogisticRegression(nn.Module):
    def __init__(self):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)
\end{python}
We have just written a logistic regression class with 10 input features and 5 output features. We will use this example to see what a network class actually contains.
Run the following code in a Python terminal:
\begin{python}
import torch
from lr_example import LogisticRegression
net = LogisticRegression()
x = torch.randn(10)
y = net(x)
\end{python}
Here we create an instance of the LogisticRegression class and set x as input and y as output. The parameters this class contains are a 5-by-10 weight matrix $W$ and a bias vector $b$ of length 5. Taking a vector x as input and applying the forward step (see \emph{net(x)}; the explicit call to `forward' can be dropped) runs the forward function and calculates $W x+b$.
One more thing to know: the object you call \emph{backward} on has to be a scalar; it cannot be, for example, a tensor with several entries. If we try to call \emph{y.backward()} here, we get an error, because backward without arguments only works for scalar outputs: it will not compute the full Jacobian matrix of a vector output with respect to a vector input. However, if you pass it the derivatives with respect to each of the components of y, then it can continue the backward computation. For example, if we want to calculate the derivative of the sum of $y$, we can do it as follows:
\begin{python}
y.backward(torch.ones(5))
\end{python}
If you have a vector that you want to call \emph{.backward} on, you have to pass it a vector of the same size, containing the gradients of the final output with respect to each component of that vector; what is actually computed is the Jacobian multiplied by this vector. In general, PyTorch will not compute the Jacobian of a non-scalar output, but if you know what you want to multiply the Jacobian with, you can pass that vector to \emph{.backward}.
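As a quick, self-contained check of this behaviour (a separate toy example, not the network above), take a purely linear map whose Jacobian is known exactly:
\begin{python}
import torch

x = torch.randn(3, requires_grad=True)
W = torch.randn(5, 3)
y = W @ x                 # the Jacobian of y w.r.t. x is W
v = torch.randn(5)

y.backward(v)             # computes the Jacobian transposed times v
print(torch.allclose(x.grad, W.t() @ v))   # True
\end{python}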
Then we can inspect the parameters of the network:
\begin{python}
params = list(net.parameters())
\end{python}
Now \emph{params} contains the matrix $W$ and the vector $b$. We can also inspect their gradient values:
\begin{python}
params[0].grad
params[1].grad
\end{python}
Here is an explanation of what happens. Let $x\in\mathbb{R}^n$ and $y\in\mathbb{R}^d$, where $y$ is some function of $x$:
\begin{equation}
\begin{split}
y &= f(x) \\
\frac{{\rm d} y}{{\rm d} x} &= \left(\begin{array}{ccc}
\frac{{\rm d} y_1}{{\rm d} x_1} &\cdots &\frac{{\rm d} y_d}{{\rm d} x_1} \\
\vdots &\ddots &\vdots \\
\frac{{\rm d} y_1}{{\rm d} x_n} &\cdots &\frac{{\rm d} y_d}{{\rm d} x_n}
\end{array}\right)
\end{split}
\end{equation}
In this case, \emph{y.backward()} will raise an error, but we can call \emph{y.backward(v)} for some vector $v\in\mathbb{R}^d$. It will then calculate
\begin{equation}
\frac{{\rm d} y}{{\rm d} x}\cdot v = \left(\begin{array}{ccc}
\frac{{\rm d} y_1}{{\rm d} x_1} &\cdots &\frac{{\rm d} y_d}{{\rm d} x_1} \\
\vdots &\ddots &\vdots \\
\frac{{\rm d} y_1}{{\rm d} x_n} &\cdots &\frac{{\rm d} y_d}{{\rm d} x_n}
\end{array}\right)
\left(\begin{array}{c}
v_1 \\ \vdots \\v_d
\end{array}\right)
= \nabla(y\cdot v)
\end{equation}
This design exists basically because computing the whole Jacobian matrix would require a lot of time and space, and it is often not necessary.
Maybe this will make more sense if we introduce another variable $z$ defined as the sum of $y$:
\begin{python}
net.zero_grad()
y = net(x)
z = torch.sum(y)
\end{python}
In our example, $z$ is a function of $y$, namely $z = g(y) = y \cdot v$ with $v$ the all-ones vector.
If we call \emph{z.backward}, this performs the same computation as before, because the derivative of $z$ with respect to $y$ is $\frac{{\rm d} z}{{\rm d}y} = (1,1,1,1,1)^\top$. This is exactly what \emph{.backward} does: it takes the incoming gradients, multiplies them by the Jacobian of the previous layer to obtain that layer's gradients, and so on.
You can call
\begin{python}
z.backward()
params[0].grad
params[1].grad
\end{python}
and compare the output with the previous one.
In practice, we construct some loss function and call \emph{backward} on it. In particular, in our CIFAR10 example the network consists of a bunch of parameters (each layer has its own), and when you call forward, it performs some fairly complicated calculations and builds a graph recording all the dependencies between them. After you apply the network and then a loss function to its output, calling \emph{loss.backward} gives you the gradients of all the parameters, and the separate optimizer class takes those stored gradients and uses them to perform an update step.
The most important thing to remember is not to forget \emph{optimizer.zero\_grad}, because the \emph{backward} step adds the newly calculated gradients to whatever is already stored in the \emph{grad} fields. Of course, if you have two loss functions, called \emph{loss1} and \emph{loss2}, you can call
\begin{python}
loss1.backward()
loss2.backward()
\end{python}
and the stored gradients will then be the sum of the two.
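A small standalone check of this accumulation (our own toy example, not part of the CIFAR10 code):
\begin{python}
import torch

x = torch.randn(4, requires_grad=True)
loss1 = (x ** 2).sum()     # gradient: 2*x
loss2 = (3 * x).sum()      # gradient: 3
loss1.backward()
loss2.backward()
print(torch.allclose(x.grad, 2 * x + 3))   # True: the two gradients were summed
\end{python}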
\chapter*{\centering Acknowledgements}
\quad Write acknowledgements, if you want to.
% !TeX spellcheck = en_US
% arara: pdflatex
% arara: bibtex
% arara: pdflatex
% arara: pdflatex
\documentclass[journal,10pt,twoside]{IEEEtran}
\usepackage[utf8]{inputenc}
\usepackage{times,textcomp,amssymb}
\usepackage[cmex10]{amsmath}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage{breqn,cite}
%\usepackage{epstopdf}
\usepackage[dvipsnames]{xcolor}
\usepackage[pdftex]{graphicx}
\usepackage{subfig}
\usepackage[justification=centering]{caption}
\usepackage[section]{placeins} % floats never go into next section
%\let\labelindent\relax % Compact lists
\usepackage{array,booktabs,enumitem,microtype,balance} % nice rules in tables
% amsmath sets \interdisplaylinepenalty = 10000
% preventing page breaks from occurring within multiline equations
\interdisplaylinepenalty=2500
%tikz figures
\usepackage{tikz}
\usetikzlibrary{automata,positioning,chains,shapes,arrows}
\usepackage{pgfplots}
\usetikzlibrary{plotmarks}
\newlength\fheight
\newlength\fwidth
\pgfplotsset{compat=newest}
\pgfplotsset{plot coordinates/math parser=false}
\newcommand{\EB}[1]{\textit{\color{blue}EB says: #1}}
\newcommand{\FR}[1]{\textit{\color{ForestGreen}FR says: #1}}
\newcommand{\LA}[1]{\textit{\color{orange}LA says: #1}}
\newcommand{\FS}[1]{\textit{\color{red}FS says: #1}}
\usepackage{hyperref}
\definecolor{dkpowder}{rgb}{0,0.2,0.7}
\hypersetup{%
pdfpagemode = {UseOutlines},
bookmarksopen,
pdfstartview = {FitH},
colorlinks,
linkcolor = {dkpowder},
citecolor = {dkpowder},
urlcolor = {dkpowder},
}
\addto\extrasenglish{%
\renewcommand{\sectionautorefname}{Section}%
\renewcommand{\subsectionautorefname}{Subsection}%
}
\pdfminorversion=7 % fixes warnings of eps to pdf included images
\clubpenalty=10000
\widowpenalty=10000
%%%%%%%%%%%%%%%%
\begin{document}
\bstctlcite{etalControl} % use et al. after 5 authors in bib
\title{A study on the Iterated Prisoner's Dilemma}
\author{%
\IEEEauthorblockN{Elia Bonetto, Filippo Rigotto, Luca Attanasio and Francesco Savio}
\IEEEauthorblockA{Department of Information Engineering, University of Padova -- Via Gradenigo, 6/b, 35131 Padova, Italy\\ %Email:
{\tt\{eliabntt94,rigotto.filippo,blackwiz4rd,francesco.savio196\}@gmail.com}}
}
\markboth{High Level Programming -- Computational Physics Lab, Fall 2018}%
{Bonetto \MakeLowercase{\textit{et al.}}: A study on the Iterated Prisoner's Dilemma}
\IEEEpubid{\raisebox{-1.1pt}{\includegraphics[height=7.5pt]{by-sa}} \copyright~2019 The authors. Licensed under \href{https://creativecommons.org/licenses/by-sa/4.0/deed.en}{Creative Commons Attribution -- ShareAlike 4.0}}
\maketitle
%%%%%%%%%%
\begin{abstract}
In this work, the popular Iterated Prisoner's Dilemma game is analyzed in different matching scenarios: (i) the classical version between two players, (ii) a generalization of the classical version between multiple players, (iii) an extension of (ii) allowing the population of players to evolve or (iv) allowing players' strategies to randomly change between rounds, according to a gene representing the grade of cooperation. % (a Nature choice, in Game Theory terms).
Rounds' statistics are collected to have an insight on which is the best strategy, if there is an absolute winner, which is the evolution and which is the players' score of each scenario.
\end{abstract}
\section{Introduction} \label{s:intro}
\IEEEPARstart{T}{he} Prisoner's Dilemma (PD) is a classical game analyzed in Game Theory which attempts to model social and economical interactions. It is a \textit{dilemma} because, if exploited to explain the emergence of altruism in human society or, generally, in animal society, it fails badly at a first glance. The game is based on a couple of players who have to make a decision on whether to cooperate or not with their opponent. As we will see shortly, even if the intuition tells us that the best choice is to \textit{cooperate}, the only win-ever strategy in a one-shot game is \textit{not} to cooperate (\textit{defect}).
More insights on this aspect can be found in \autoref{s:game} which gives a theoretical and mathematical introduction on the Prisoner's Dilemma problem and on its iterated version.
In \autoref{s:str}, we illustrate the strategies (the definitions of the players' ways of acting) we have implemented among all the possible ones.
Furthermore, in Sections [\ref{s:IPD2P},\ref{s:IPDMP},\ref{s:rIPDMP},\ref{s:crIPDMP}] we explore the results of the simulations for each case study.
\autoref{s:ml} is about a brief introduction and review of related works approaching the problem by using machine learning and artificial intelligence procedures, such as reinforcement learning and evolutionary algorithms.
Eventually, in \autoref{s:conc}, some final considerations summarizing the analysis are presented.
All the tables of tournament results, statistics and additional figures can be found in the \hyperref[s:appendix]{Appendix}.
All the code, developed in \textit{Python 3}, is available on \href{https://github.com/eliabntt/iterative_prisoner_dilemma}{GitHub}.
\section{The dilemma explained} \label{s:game}
The classical formulation of the PD implies that given two prisoners in a scenario where their conviction depends on their mutual cooperation, they can either stay silent or fink, respectively cooperate or defect.
Another possible formulation is by means of a trade-off game, the \textit{closed bag exchange}:
\begin{quote}
\textit{Two people meet and exchange closed bags, with the understanding that one of them contains money and the other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or she has agreed, or he or she can defect by handing over an empty bag.}
\end{quote}
Mathematically, the PD can be expressed with linear algebra. The key component is the \textit{Payoff matrix} $M$, which quantifies the reward of each player depending on whether he\footnote{For the rest of the paper, the player is considered a male, for the sake of simplicity in writing. Of course, nothing changes assuming a female player.} cooperated or defected:
$$
M =
\begin{pmatrix}
R & S \\
T & P
\end{pmatrix}
$$
where $T$ (Temptation), $R$ (Reward), $S$ (``Sucker's''), $P$ (Punishment) are integer numbers that satisfy the following conditions, as proven by Rapoport~\cite{rapoport}:
$$
T>R>P>S; \quad 2R > T+S
$$
For example, $T=3$, $R=2$, $P=1$ and $S=0$, or $T=5$, $R=3$, $P=1$, $S=0$, the default for all our experiments.
\IEEEpubidadjcol
$R$ is returned for both players if they both cooperate, $P$ if they both defect; if the two players' actions differ, $S$ is for the player who cooperated and $T$ is for the one who defected.
Similarly, each player's choice (move or action) for a single round can be represented by one of the two axis in $\mathbb{R}^2$, i.e. $u_C=\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ or $u_D=\begin{pmatrix} 0 \\ 1 \end{pmatrix}$, where the first axis stands for \textit{Cooperate} and the second for \textit{Defect}. Being $u_1$ and $u_2$ the moves of the first and second player respectively, their rewards $r_1$ and $r_2$ can then be computed as:
$$
r_1 = u_1^T M u_2
\quad
\quad
r_2 = u_2^T M u_1
$$
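For instance, with the default payoffs ($T=5$, $R=3$, $P=1$, $S=0$), if the first player cooperates while the second defects, $u_1=u_C$ and $u_2=u_D$, so that
$$
r_1 = u_C^T M u_D = S = 0 \qquad r_2 = u_D^T M u_C = T = 5,
$$
while mutual cooperation yields $r_1=r_2=R=3$ and mutual defection $r_1=r_2=P=1$.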
For a single-shot game, namely a game which is played only once, the best strategy (choice of action) may seem for both players to cooperate. If both players cooperate, this leads to a good payoff which maximizes the global outcome, evaluated as the sum of the payoffs for each of them. This is indeed the \textit{Pareto dominating strategy}. Here lies the dilemma: when a player is facing the decision to cooperate or not, if he chooses to cooperate, he also realizes that the best choice of action would have been instead to betray the other player as this leads, \textit{irrespective of the opponent's move}, to a better payoff for himself.
Both players are rational, which also implies some degree of selfishness, and they are fully aware of the rules of the game (they have \textit{common knowledge}, using Game Theory terms). In addition, they move simultaneously (so no one knows the opponent's move beforehand). Given these assumptions and by doing the simple reasoning explained above, the two players will conclude that the best way of acting is to defect: this would lead to a slightly lower payoff if the opponent defects (minor punishment), but to a higher one if the other player chooses to cooperate, allowing them both to gain something in any case, instead of risking losing everything. As a result, the only reasonable conclusion is that the only \textit{Nash equilibrium}, or the only way to always win this game in a single-shot scenario, is to defect. Thus, we have just observed that the \textit{Nash equilibrium} is not \textit{Pareto optimal}: playing cooperate is not feasible since the player is in danger against a defecting opponent. Hence, the only strategy from which nobody wants to deviate is to defect, as also noted by Fogel in~\cite{fogelEvolvingBehaviors}.
If repeated games are taken into consideration, this reasoning could lose some meaning. In particular, this is the case of the multiplayer Iterated Prisoner's Dilemma (IPD), since time and memory (history) must be considered and the combination of the players may produce some unexpected outcomes.
Colman supported this concept by indicating that the 2-player IPD is different from the generalized N-player version, and that strategies that work well in the first scenario may fail in large groups~\cite[p.142]{colman2016game,yao1994experimental}.
In addition, Grim Triggers, Tit For (Two) Tat, random or even more articulated strategies can be introduced, changing the balance of the game and of the previously defined winning cases.
Moreover, in this game, or more generally in iterated games, the notions of Nash equilibrium, Pareto optimal or evolutionary stable strategies\footnote{A strategy is said to be \textit{evolutionary stable} if it cannot be overwhelmed by the joint effect of two or more competing strategies. As a matter of fact, Lorberbaum, Boyd, Farrell and Ware proved that no pure or mixed strategy is ev. stable in the long run, if future moves are discounted (see~\cite{lorb94}).} do not suggest new, efficient and interesting strategies since they inherently lose some meaning due to the intrinsic nature of iterated games~\cite{mathieu2017}.
Winning a game in this setup simply means achieving a better payoff with respect to the opponents and this could be carried out even without playing such strategies.
\section{Strategies} \label{s:str}
The strategy is represented as a function which outputs either $u_C$ or $u_D$. Based on the strategy, such function might depend on one or both players' history of moves, or on the number of moves played up to that moment and so on.
A strategy may be based on a probability distribution or be fully deterministic; in this project both probabilistic and deterministic strategies are used.
The strategies based on probability are:
\begin{description}
\item[Nice guy] always cooperate (function's output is always $u_C$).
\item[Bad guy] always defect (function's output is always $u_D$).
\item[Mainly nice] randomly defect $k\%$ of the times, $k<50$.
\item[Mainly bad] randomly defect $k\%$ of the times, $k>50$.
\item[Indifferent] randomly defect half $(k=50\%)$ of the times.
\end{description}
The deterministic strategies are:
\begin{description}
\item[Tit-for-Tat (TfT)] start by cooperating the first time, then repeat opponent's previous move.
\item[Tit-for-Two-Tat (Tf2T)] cooperate the first two times, then defect only if the opponent defected last two times.
\item[Grim-Trigger (GrT)] always cooperate until the opponent's first defect move, then always defect.
\end{description}
The strategies are considered static in case they apply the same move at each iteration as in \textit{Nice guy} or \textit{Bad guy}, and dynamic elsewhere.
These players' strategies are generally fixed in time, i.e. a player cannot change its strategy between rounds, unless specifically requested by the case study rules.
Many more and much more complicated strategies could be analyzed; our implementation is open and structured so that a new strategy can be added by changing only a few lines of code.
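As an illustration of the ``strategy as a function'' view, a minimal sketch in Python~3 is shown below; the function names and the representation of histories as lists of `C'/`D' moves are hypothetical and do not necessarily match the classes of our repository.
\begin{verbatim}
# Hypothetical sketches of two deterministic
# strategies; histories are lists of 'C'/'D'
# moves, newest last.
def tit_for_tat(my_history, opp_history):
    # cooperate first, then repeat the
    # opponent's last move
    return 'C' if not opp_history else opp_history[-1]

def grim_trigger(my_history, opp_history):
    # cooperate until the opponent's first
    # defection, then defect forever
    return 'D' if 'D' in opp_history else 'C'
\end{verbatim}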
\section{Two players IPD} \label{s:IPD2P}
In this section, the IPD between two players is evaluated: each player has an assigned strategy, he is unaware of the opponent's way of acting, and he plays according to his strategy definition without the possibility of changing it. Both players know only their respective histories of choices. Each game is repeated for a fixed number of rounds, unknown to the two players. The main metric evaluated as the output of this game is the winner, i.e. the player who achieves the higher payoff at the end of the round.
The number of iterations is set to 50 and can be modified by means of a simple option when launching the program.
This could also be seen as the number of moves during the match, and is a factor unknown to the players; if otherwise, a smart player could adopt a supposedly optimal strategy: for example, against a \textit{Nice} (or \textit{TfT}) strategy the best choice of action would be to cooperate in all but the last round, gaining advantage from knowing the number of runs, as inferred from a \textit{backward induction} reasoning.
Results do not depend on this value if only deterministic strategies are employed; conversely, it will slightly influence the random ones.
Note that in every simulation an optional \textit{seed} value can be fixed, so as to have reproducible results from the pseudo-random number generator.
All possible combinations of players, each representing one of the strategies presented in \autoref{s:str}, are evaluated, including the case in which a player plays against himself (or, similarly, against a player with the same strategy).
This is a simple repetition of the single-shot game with the addition of memory and the possibility to add probabilistic and more elaborated strategies, as it has already been highlighted. Since the population is not a concern in this particular context (which is a single A vs B game), the winning strategy in all cases is to \textit{not} cooperate, or, in other terms, the \textit{Always Bad guy} strategy, as expected.
As a matter of fact, in all scenarios \textit{Bad guy} reaches at least the same reward as the opponent, but more often it gets a higher one, as in Figures~[\ref{fig:badvstft},\ref{fig:badvsnice},\ref{fig:badvsmainlybad}].
Furthermore, a game facing a \textit{Bad guy} against another \textit{Bad guy} as in \autoref{fig:badvsbad}, or similarly against a \textit{Mainly bad} (\autoref{fig:badvsmainlybad}), leads to the same cumulative reward for both players in the former case or almost the same in the latter. The obtained reward is not as good as if players were playing against (mainly) nice strategies.
This is a first important insight that verifies and points out what has been seen from a theoretical point of view earlier: defecting is always a winning strategy, but it may be non-optimal; on the other hand, if both are cooperating, so playing the Pareto optimal combination, each of them gains more in terms of payoff and would get an advantage if they choose to defect against the other. In the latter case, memory is important as it allows revenge and reactive strategies to exist, keeping in mind that the total number of rounds is unknown.
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Bad-Bad}
\caption{Score evolution, Bad vs Bad}
\label{fig:badvsbad}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Bad-TitForTat}
\caption{Bad vs TfT}
\label{fig:badvstft}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Bad-Nice}
\caption{Bad vs Nice}
\label{fig:badvsnice}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Bad-MainlyBad(k=72)}
\caption{Bad vs Mainly bad}
\label{fig:badvsmainlybad}
\end{figure}
Taking a closer look, the combination of \textit{Nice} with either \textit{TfT} or another \textit{Nice} leads to better payoffs at the end of the runs, as in Figures~[\ref{fig:tftvsindiff},\ref{fig:nicevsnice},\ref{fig:nicevstft}]. The underlying idea is that both players are getting the highest reward, not just one of them, and these choices are cumulatively better, compared to the \textit{Bad}-\textit{Nice} combination.
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Indifferent-TitForTat}
\caption{TfT vs Indifferent}
\label{fig:tftvsindiff}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-Nice-Nice}
\caption{Nice vs Nice}
\label{fig:nicevsnice}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-TitForTat-Nice}
\caption{Nice vs TfT}
\label{fig:nicevstft}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipd2p/ipd2p-rewards-TitForTat-TitForTat}
\caption{TfT vs TfT}
\label{fig:tftvstft}
\end{figure}
The \textit{TfT} strategy is interesting, because \textit{TfT} leads to almost the same cumulative reward as the opponent, and it is highly adaptive, even if it is fast-forgiving when it plays against a mainly bad strategy. In other words, \textit{TfT} is robust because it never defects first and never takes advantage for more than one iteration at a time~\cite{fogelEvolvingBehaviors}.
In addition to these considerations, simulations were performed multiple times to get insights into the mean and variance of the rewards ruling these games. It is obvious that the static strategies (as the \textit{Nice-Nice}, \autoref{fig:boxnn}), or the non-triggering ones, or the ones without variations have constant mean and 0 standard deviation. On the other hand, it is interesting to notice that random strategies have a non-null variance, as shown in \textit{Mainly Bad-TfT}, \autoref{fig:boxmbvtft}. However, this does not imply that \textit{TfT} could ever win against such a strategy; it only points out that there is variation in subsequent runs due to the randomness of at least one of the two players: the \textit{TfT} strategy is a reactive strategy, so it will always be ``late'', meaning that a player applying it will always have at most the same points as the opponent at the end of the game. There may be particular cases where a \textit{Mainly Nice} player defeats a \textit{Mainly Bad} opponent, but these are just outliers in the overall simulation.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{../img/ipd2p/ipd2p-boxplot-Nice-Nice}
\caption{Nice vs Nice}
\label{fig:boxnn}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{../img/ipd2p/ipd2p-boxplot-TitForTat-MainlyBad(k=72)}
\caption{TfT vs Mainly Bad}
\label{fig:boxmbvtft}
\end{figure}
Moreover, it can be seen that the only strategies that reach 0 as a final payoff are the \textit{Nice} ones, while the \textit{TfT, Tf2T, GrT, Bad} have a higher minimum value.
It is impossible to make the optimal score against all strategies. The most intuitive reason is a consequence of the first move: to play optimally against a \textit{Bad} guy, it is necessary to defect at the first round, and, as already discussed, to play optimally against \textit{GrT} (or equivalently \textit{TfT}), it is necessary to cooperate until the last round where you should defect~\cite{mathieu2017}.
But the number of rounds is unknown to the players and they should know in advance the type of opponent: this would enable them to adapt their strategy, but is not allowed by the rules of the game.
Finally, two additional metrics have been introduced: \textit{yield} and \textit{achieve}.
Being $p$ and $q$ two players, each with a given strategy, the metrics can be expressed as
$$
\mathrm{yield}(p) = \frac{\mathrm{points}(p)}{\mathrm{optimal\_pts}(p)} \quad
\mathrm{achieve}(p) = \frac{\mathrm{points}(p)}{\mathrm{hoped\_pts}(p,q)}
$$
where $\mathrm{points}(p)$ is the number of points at the end of the round,
$\mathrm{optimal\_pts}(p)$ is the maximum that $p$ could achieve if he knew $q$'s moves in advance, and
$\mathrm{hoped\_pts}(p,q)$ is the best result player $p$ can achieve in the optimal scenario or, in other words, supposing that the opponent $q$ would respond in a way such that $p$ could maximize his payoff.
\textit{Yield} represents how well the player has performed against its opponent with respect to the maximum that he could get if he knew its opponent's moves in advance, while \textit{achieve} represents how far the player is with respect to its best expectation.
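As a toy illustration of these metrics (our own reading of the definitions above), consider a 3-round match with the default payoffs $T=5$, $R=3$, $P=1$, $S=0$. A \textit{Bad guy} $p$ facing a \textit{Nice guy} $q$ scores $3T=15$ points; this is also the best he could do knowing $q$'s moves in advance and the best he could hope for, so $\mathrm{yield}(p)=\mathrm{achieve}(p)=100\%$. Conversely, a \textit{Nice guy} facing a \textit{Bad guy} scores $3S=0$ points, against $\mathrm{optimal\_pts}=3P=3$ (defecting every round) and a hoped-for score of at most $3T=15$, so both metrics are $0\%$.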
The following considerations arise on the grounds of \autoref{tab:ipd2p}.
The \textit{yield} metric backs up our claim that \textit{Bad} is the only win-always strategy as it is the only one that gives a stunning $100\%$ for all the matches playing a perfect move against every opponent, a result that can also be seen in \autoref{tab:ipd2pavg}. In other words, a player using this strategy does not need to know in advance which strategy the opponent is adopting. Moreover, this metric points out how \textit{TfT, Tf2T, GrT} strategies are more resilient, namely, they respond well to strategies in that a player does not reach its maximum achievable points, but performs well irrespective of the opponents' strategy. In particular, \textit{GrT} reaches scores over $90\%$ even against \textit{(Mainly) Bad} players.
The \textit{achieve} column is a coupled metric that takes into account both players. We notice once again that, ruling out the same-strategy couples, \textit{TfT, Tf2T} and \textit{GrT} strategies achieve results (almost) always comparable with the opponents, meaning that they are at least as good as them.
On the contrary, taking the averages of these two metrics with respect to all the subsequent matches, it can be seen from \autoref{tab:ipd2pavg} how the only strategies that achieve high performance on both are the \textit{Tf(2)T} and \textit{GrT}, meaning that even if they do not win every time, they achieve pretty high payoffs.
In the long run, on the basis of these numbers, the conclusion is that these strategies would emerge, since yielding the maximum possible payoff does not imply achieving high overall results.
More insights about this part, including the complete collection of the generated pictures, can be found in the repository, in the supplementary material and in the \hyperref[s:appendix]{Appendix}, where collected statistics are presented.
\section{Multiple players IPD - Round-robin scheme} \label{s:IPDMP}
The IPD with a \textit{round-robin} (RR) scheme, used to match up the opponents, consists of a number of players with multiple, not necessarily different, strategies, where each player plays once against every other player for a fixed number \texttt{NUM\_ITER} of iterations. This value is set by default to 50 in simulations but can be changed with a parameter when launching the program.
Each player chooses its fixed strategy at the beginning of the tournament and holds it throughout the course of the match without knowing the strategies of the other players.
In short, it is a variation of the previous case, in which multiple players, with possibly different strategies, play in a RR way. The variation consists in the fact that a single player will win the tournament if, at the end, he has the highest cumulative payoff.
Since there are $C=N\cdot (N-1)/2$ possible couples of players and each match consists of $I$ iterations of the game, the total number of played rounds at the end is $C\cdot I$. From the perspective of each single player, the total number of rounds to play is simply $I\cdot(N-1)$.
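As a concrete instance, with the default $N=50$ players and $I=50$ iterations per match, there are $C = 50\cdot 49/2 = 1225$ couples, hence $1225\cdot 50 = 61250$ rounds in total, while each single player takes part in $50\cdot 49 = 2450$ of them.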
Tournament statistics like points and counts of cooperation and defection moves, along with the percentage of cooperation, are shown in \autoref{tab:ipdmp50}.
As a validation proof, our results have been compared to the ones obtained from the Axelrod Tournament Demo software~\cite{demosw}, but this software does not implement all the strategies considered in this work. For example, \textit{GrT} is named \textit{Spiteful} there, and the software cannot set \textit{Mainly Bad/Good} strategies with a given probability of cooperating, for which a \textit{Random} agent is used as a substitute. Thus, slightly different outcomes were foreseen due to this constraint; nevertheless, the evolution of the tournament is quite similar between the two simulations.
Doing a special simulation with only deterministic strategies leads to the same results, as it can be seen comparing \autoref{tab:ipdmp10stat} and \autoref{fig:ipdmp10statsw} in the \hyperref[s:appendix]{Appendix}.
Throughout our tests, we noticed how the results of the tournament, and of the following case studies, depend on the initial population and the balance between the amount of ``good'' and ``bad'' players. Changing the population could lead to different results: an insight that is rarely pointed out in the literature.
Analyzing the 50-players game as in Figures~[\ref{fig:ipdmp50evo},\ref{fig:ipdmp50evosw},\ref{fig:ipdmp50boxsingle},\ref{fig:ipdmp50boxfinal}], where a random strategy is assigned to each player, the winning strategy is \textit{GrT}. Just behind it, there is \textit{TfT}, followed by a tight set of \textit{Tf2T}, \textit{(Mainly) Bad}, \textit{Bad} and \textit{Indifferent} strategies. Lastly, \textit{(Mainly) Nice} strategies achieve the lowest scores.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ipdmp/ipdmp-evolution-of-game-50}
\caption{50 players, evolution of the game}
\label{fig:ipdmp50evo}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipdmp/ipdmp50-plot-det}
\caption{50 players, evolution -- software results \cite{demosw}}
\label{fig:ipdmp50evosw}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ipdmp/ipdmp-boxplot-single-match-50}
\caption{50 players, boxplot of a single match}
\label{fig:ipdmp50boxsingle}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ipdmp/ipdmp-boxplot-final-points-50}
\caption{50 players, boxplot of the final points}
\label{fig:ipdmp50boxfinal}
\end{figure}
In a 10-players game, as presented in Figures~[\ref{fig:ipdmp10evo},\ref{fig:ipdmp10evosw}], the best overall strategy is \textit{TfT}. As pointed out previously, \textit{TfT} is a reactive strategy that leads in most of the cases to almost the same reward as the opponent. After several tries, we found that a ``good'' setup to obtain this outcome includes more ``nice'' strategies than ``bad'' ones: in order to have a \textit{TfT} winner, or to defeat the ``bad'' players, there should be enough players with strategies that have limited power against ``good'' players (that is, spiteful or reactive ones). This consideration is not common in the literature but in our opinion it is important and worth noticing, although it can be explained by the game's insights. This statement helps to interpret results and assign them the right meaning.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ipdmp/ipdmp-evolution-of-game-10}
\caption{10 players, evolution of the game}
\label{fig:ipdmp10evo}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ipdmp/ipdmp10-plot-det}
\caption{10 players, evolution -- software results \cite{demosw}}
\label{fig:ipdmp10evosw}
\end{figure}
Historically \textit{TfT} was considered the best strategy to win the tournament since it is simple, and one of the best strategies for maximizing the player's score.
This fact was demonstrated in the extensive tests done by Axelrod, thoroughly described in \cite{axelrod1981evolution,axelrod1984evolution} and taken up as a starting point in \cite{mathieu2017}: \textit{TfT}, proposed by Anatol Rapoport, was the winner over all strategies.
However, here it is proven that \textit{TfT} wins only in specific tournaments cases depending on the initial population. Moreover, spiteful strategies like \textit{GrT} seem to get the maximum in heterogeneous and more mixed populations. In any case, both have an extreme but effective behavior in our testbed. Possible advances can be introduced by taking into account more than the last one or two moves, allowing for more intricate and complex strategies~\cite{mathieu2017}.
In each tournament, some variations of the results can be seen by repeating the simulation multiple times with the same initial population and generating boxplots; since random strategies have been introduced, the result of a one-shot complete game may differ with respect to the average results. These are rare cases that ought to be considered as outliers. Moreover, running simulations with different strategies or with a different initial population (i.e. 20 or 30 players, or by changing the seed value), obtained results can be different, especially since the balance between the number of \textit{(Mainly) Bad} and \textit{(Mainly) Nice} guys changes from the previously analyzed scenarios. The results of the tournaments are not predictable without knowing the initial population but this is an information available only to external observers and \textit{not} to the actual players.
The results are backed also by \textit{achieve} and \textit{yield} metrics that do not change much with respect to ``A vs B'' games.
It can be easily noticed how it is the combination of the two that ``matters'', rather than either of them alone, although obviously players who have a higher \textit{achieve} value are usually in the ``winner'' part of the chart.
\section{Repeated multiple players IPD} \label{s:rIPDMP}
The previously defined MPIPD tournament is now iterated many times and the population changes based on the results obtained in the previous round: the scheme is denoted as a \textit{Repeated MPIPD}~(rMPIPD).
Two main separate scenarios have been developed to study, via simulation, the behavior, the evolution of the populations and the convergence speed: static and increasing populations (with three separate sub-cases). In our simulations, a population is said to have converged if more than $3/4$ of its players share the same strategy type at the end of a complete round. The basic rules are the same as pointed out in the previous sections (\textit{common knowledge}, etc.).
\subsection{Static Population} \label{ss:rIPDMPc}
In this case, the number of players is fixed. Each player implements a strategy choosing it with equal probability from the strategies set. At the end of each round, the population is sorted with respect to the cumulative payoff and a fixed percentage $x$ ($30\%$ is the default in our simulations) of it, starting from the beginning of the list, is ``doubled'', so that for each player in this subset, another player with the same strategy is added to the population. Likewise, the players in the last $x\%$ of the chart are then removed from the game, regardless of their strategies. In this way, the total number of players is ensured to be static and then the convergence of the population through consecutive rounds can be studied. After this step, the scores are zeroed and the tournament can restart.
If convergence is not reached after a maximum number of repetitions, execution of the program is stopped.
This method is similar to that used by Axelrod in his tests~\cite[\S 2.6]{mathieu2017}~\cite{axelrod1984evolution}.
Figures~[\ref{fig:constR},\ref{fig:constFI},\ref{fig:constLI}] show the evolution of a population of 50 players over some iterations.
Details on the evolution of the population, grouped by strategy type, are presented in \autoref{tab:ripdmp-const}.
\begin{figure}[!ht]
\centering
\includegraphics[width=.95\columnwidth]{../img/ripdmp-const/ripdmp-evolution-const-pop-50}
\caption{Evolution of a constant population of 50 players}
\label{fig:constR}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.95\columnwidth]{../img/ripdmp-const/ripdmp-scores-const-pop-50-r0}
\caption{First iteration scores ($it=0$)}
\label{fig:constFI}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ripdmp-const/ripdmp-scores-const-pop-50-r3}
\caption{Last iteration scores ($it=4$)}
\label{fig:constLI}
\end{figure}
\pagebreak
It can be easily observed how the \textit{GrT} and \textit{TfT} strategies very quickly outpace the others: in just two iterations, they represent almost half of the population, with a predominance of \textit{TfT} players. At the fifth iteration, we can see that \textit{GrT} takes the lead but, as previously stated, results depend on the initial population: for example, by fixing the seed to $24$ as in \autoref{fig:constRseed24} and \autoref{tab:ripdmp-const-24}, it can be pointed out how \textit{TfT} players dominate, while using $1209$ as seed (\autoref{fig:constRseed1209} and \autoref{tab:ripdmp-const-1209}) leads to a final population formed by mostly \textit{Bad} players.
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ripdmp-const/seed24/ripdmp-evolution-const-pop-50}
\caption{Evolution of a 50-players pop., $seed = 24$}
\label{fig:constRseed24}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ripdmp-const/seed1209/ripdmp-evolution-const-pop-50}
\caption{Evolution of a 50-players pop., $seed = 1209$}
\label{fig:constRseed1209}
\end{figure}
These results add some new insights to the previous results obtained by the simulation of the iterated Prisoner's Dilemma: \textit{TfT} overwhelms its brother \textit{Tf2T}. Taking scores into account, we can notice how \textit{GrT} and \textit{TfT} are pretty similar since they do not trigger each other.
\subsection{Increasing Population} \label{ss:rIPDMPi}
In this case, the number of players (population) is increased at each iteration. Three different ways of adding population between rounds have been implemented; after each round, a player has a certain probability based on his ranking to have a child of the same type:
\begin{enumerate}
\item The probability is $p(i)=1- i\ /\ num\_players$ where $i$ is the position reached by the player. The winner of the round is always doubled, because $p(0)=1$, while the loser is not, as $p(last)=0$.
For each player, a random number $d$ is drawn, according to a uniform probability distribution, and compared with $p(i)$. If $p(i)$ is greater than $d$ the player is effectively doubled, otherwise not.
\item The ordered population is split into three sets of equal size $A,B,C$. For each player in the population, a random number $d$ is drawn and its strategy is doubled if:
\begin{itemize}
\item $d>0.2$ if the player belongs to $A$
\item $d>0.5$ if the player belongs to $B$
\item $d>0.8$ if the player belongs to $C$
\end{itemize}
This is an alternative way to promote best strategies, due to the higher probability of being doubled, and obstruct less performing players, whose total number does not increase significantly.
\item A player's score is defined as its obtained points divided by the maximum obtained score in the whole population. The player's strategy is doubled if its score is greater than a drawn random number.
\end{enumerate}
In our software, the first of the three proposed methods is used by default; the other methods can be selected by setting the program's parameters.
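A minimal sketch of the first alternative is given below (in Python~3, with hypothetical names; players are represented simply by their strategy labels, which is not the exact structure of our implementation).
\begin{verbatim}
import random

def evolve(ranked):  # strategy labels, best first
    n = len(ranked)
    offspring = []
    for i, strategy in enumerate(ranked):
        # p(0)=1 for the winner, p(last)=0
        p = 1.0 if n == 1 else 1 - i / (n - 1)
        if random.random() < p:  # drawn d vs p(i)
            offspring.append(strategy)
    return ranked + offspring
\end{verbatim}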
Figures~[\ref{fig:incrR},\ref{fig:incrFI},\ref{fig:incrLI}] show the evolution of a population of 50 players over four iterations using alternative 1. In this case, convergence is not reached at the fifth iteration, since the population is increasing, but the simulation still shows the same behavior: the \textit{GrT} and \textit{TfT} strategies are getting stronger and stronger. We thus expect that, with more iterations, the population would keep growing with a similar behavior and eventually converge to these strategies, as in the constant population scenario.
\begin{figure}[!ht]
\centering
\includegraphics[width=.9\columnwidth]{../img/ripdmp-incr/alt1/ripdmp-evolution-increasing-pop-50}
\caption{Evolution of an increasing pop. from 50 players}
\label{fig:incrR}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ripdmp-incr/alt1/ripdmp-scores-increasing-pop-50-r0}
\caption{First iteration scores ($it=0$)}
\label{fig:incrFI}
\end{figure}
%\begin{figure}[!ht]
% \centering
% \includegraphics[width=1\columnwidth]{../img/ripdmp-incr/alt1/ripdmp-scores-increasing-pop-50-r2}
% \caption{Middle iteration scores ($it=2$)}
% \label{fig:incrMI}
%\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/ripdmp-incr/alt1/ripdmp-scores-increasing-pop-50-r4}
\caption{Last iteration scores ($it=4$)}
\label{fig:incrLI}
\end{figure}
The other alternatives give similar results that do not change our considerations. Full details on the evolution of the population for all alternatives can be found in the \hyperref[s:appendix]{Appendix} (Tables~[\ref{tab:ripdmp-incr},\ref{tab:ripdmp-incrA2},\ref{tab:ripdmp-incrA3}] and Figures~[\ref{fig:incrRa2},\ref{fig:incrRa3}]).
The only partial exception to this trend is alternative 3 because the score obtained by each player is not equally distributed as in the other two cases. This benefits the strategies that have achieved the best overall score and disadvantages all the other strategies.
It is interesting to note how Wu and Axelrod \cite{IPDnoise} exploit the presented behavior of \textit{TfT} to react to noise in the game: by slightly altering the strategy in both directions, adding generosity (some percentage of opponent's defections go unpunished) or contrition (avoid responding to a defect move when a player's previous defection was unintended), the ``error'' can be quickly recovered and cooperation can be successfully restored. Note, however, that in this work these two variations of the \textit{TfT} strategy are not implemented.
Simulation times grow at each iteration since, for each player, as we have seen in \autoref{s:IPDMP}, $I\cdot(N'-1)$ matches are added and $I\cdot(N'-1)\cdot \texttt{NUM\_ITER}$ rounds have to be played and the results sorted: this quickly explodes. Taking into account the dependence on the initial population, what was suggested by the previous constant-population case is preserved also when the number constraint is relaxed.
\section{rMPIPD with changing strategies} \label{s:crIPDMP}
A step further is made by allowing players to change their strategies in the rIPDMP setup, from which the main structure is unaltered.
Each player has a gene $c$, representing his attitude to cooperate. It is initially set to $c=k/100$, where $k$ is the probability of cooperating for random, \textit{Bad} and \textit{Nice} strategies. For \textit{GrT}, \textit{TfT} and \textit{Tf2T}, $c$ is set to $0.5$, since these strategies do not have an intrinsic attitude towards cooperation.
A single round of the game goes as follows: a rIPDMP's round is played, then new players are generated following the first alternative presented in the previous \autoref{ss:rIPDMPi}, and for each one of the ``old'' players a new $c$ is generated and their strategies change accordingly. This is repeated until convergence or a maximum number of iterations is reached.
Two alternatives are proposed to change $c$ after each round.
\begin{enumerate}
\item For each player, a new random $c_N$ is generated
\item The change of strategy is again based on players' ranking and probability. Bad players (those with $k>50$) have their $c$ updated as
$c_N = (c+(i/num\_players)^2)/2$, that is, players high in the chart move towards a \textit{less} cooperative behavior and vice-versa.
Good players are updated according to $c_N = (c+(1-i/num\_players)^2)/2$, that is, the opposite of before: players high in the chart move towards a \textit{more} cooperative behavior and vice-versa.
\end{enumerate}
At this point, if the absolute value of the difference between the old $c$ and the new $c_N$ is greater than a threshold (set to $0.1$) the strategy will change.
The new strategy is picked from a set made of six different random strategies plus \textit{GrT, TfT, Tf2T}.
Note that \textit{GrT} is not available if the player's strategy is going to change towards the good side: in our view, \textit{GrT} is a revengeful strategy, and the fact that others call it \textit{spiteful} also supports this. We cannot propose this strategy to someone who wants to be a ``good'' player. On the contrary, a player can move to a less cooperative behavior and eventually become of the \textit{GrT} type.
\textit{TfT, Tf2T} are treated as being in the middle of the range, like \textit{Indifferent}, since they are reactive strategies.
The random strategy generation is bounded based on the strategy's $id$ ($k$ for probability strategies) and $c_N$:\footnote{As a consequence, 4 different cases have to be handled, depending on $id$ or $50$ being considered inside the $\min$ and $\max$ functions in the bounds.}
\begin{itemize}
\item if $c_N \ge 0.5$ the strategies stay on the ``good'' side, between $(1-c_N)\times 100$ and $\min(id,50)$.
\item if $c_N < 0.5$ the strategies stay on the ``bad'' side, between $\max(id,50)$ and $(1-c_N)\times 100$.
\end{itemize}
The evolution of a population initially made of 50 players is presented in Figures~[\ref{fig:incrC},\ref{fig:incrCFI},\ref{fig:incrCMI}] and details can be found in \autoref{tab:cripdmp}: the metrics ``To more (less) cooperative'' are purely indicative of the inclination of players that changed their strategy.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/cipdmp-incr/alt1/cipdmp-evolution-increasing-pop-50}
\caption{Evolution of an increasing pop., from 50 players, with changing strategies}
\label{fig:incrC}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/cipdmp-incr/alt1/cipdmp-scores-increasing-pop-50-r0}
\caption{First iteration scores ($it=0$)}
\label{fig:incrCFI}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{../img/cipdmp-incr/alt1/cipdmp-scores-increasing-pop-50-r4}
\caption{Middle iteration scores ($it=4$)}
\label{fig:incrCMI}
\end{figure}
%\begin{figure}[!ht] % too long, ugly formatting
% \centering
% \includegraphics[width=1\columnwidth]{../img/cipdmp-incr/alt1/cipdmp-scores-increasing-pop-50-r6}
% \caption{Last iteration scores ($it=6$)}
% \label{fig:incrCLI}
%\end{figure}
The second alternative is shown on \autoref{tab:cripdmpA2} and \autoref{fig:incrC2}.
In both cases it is easy to see that the same strategies as in the previously investigated scenarios take the lead, namely \textit{GrT}, \textit{Tf(2)T} and \textit{Bad} players. In our simulations we have found that, either with a randomly generated or a deterministic cooperation factor, the evolution of the population still converges toward the strategies we previously identified as winning: while from \autoref{fig:incrC} we can note how \textit{TfT} and \textit{Tf2T} jointly leave behind all other strategies, using alternative 2 as in \autoref{fig:incrC2} results in \textit{Bad} players being the majority of the population after a few rounds.
Finally, we can note how the sets of ``good'' and ``bad'' players have more or less the same cardinality, that is, there are no substantial differences between the two populations. Given the results obtained in the previous simulations, we can expect no substantial changes in the future of these simulations and different results with different initial populations. Much more articulated simulations could be run, for example by excluding some strategies or changing how the population is generated, but we wanted to keep a common framework across all the steps of this work.
\section{Machine Learning approaches} \label{s:ml}
Since the beginning of research on this subject, studies have been developed to find some pattern that can be exploited by learning algorithms.
By the formulation of the IPD game, if such a pattern exists, it has to be learned using unsupervised techniques, since the final outcome of the players depends on their actions.
Players should learn to cooperate --- equivalently, to enhance their altruism --- and here the aim is to do so by adaptively \textit{learning} a strategy. A hindrance is that, in the PD, the Nash equilibrium solution (as stated before, to defect) is not a desirable learning target~\cite{coopSeqRL}.
Reinforcement learning (RL) is such a technique: the system must select an output (in the case of the PD, an action: cooperate or defect) for which it receives a scalar evaluation. RL seeks the best output for any given input, and is based on the idea that the tendency to produce an action should be strengthened if the action led to favorable results, and weakened otherwise~\cite{sandholmRL}.
Sandholm and Crites \cite{sandholmRL} employed recurrent neural networks (RNN) and the Q-learning algorithm, a particular RL procedure that works by estimating the value of state-action pairs, to train agents to play against the \textit{TfT} strategy and against an unknown opponent. While the first task was easily learned, the second one proved to be more difficult due to non-stationary behavior and lack of \textit{a priori} knowledge of a policy to encourage cooperation.
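To make the idea concrete, the following is a minimal sketch of tabular Q-learning against a \textit{TfT} opponent, using the standard update $Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\right]$. It is not Sandholm and Crites' recurrent-network setup: the state is simply the opponent's last move, and all names and parameter values are chosen only for this illustration.
\begin{verbatim}
import random

# Payoff for (my_move, opponent_move); C = cooperate, D = defect
PAYOFF = {('C','C'): 3, ('C','D'): 0, ('D','C'): 5, ('D','D'): 1}

def q_learning_vs_tft(rounds=5000, alpha=0.1, gamma=0.9, eps=0.1):
    # Q maps (state, action) -> value; the state is the
    # opponent's move in the previous round.
    q = {(s, a): 0.0 for s in 'CD' for a in 'CD'}
    state, my_prev = 'C', 'C'          # TfT opens by cooperating
    for _ in range(rounds):
        # epsilon-greedy action selection
        if random.random() < eps:
            action = random.choice('CD')
        else:
            action = max('CD', key=lambda a: q[(state, a)])
        opp = my_prev                  # TfT repeats our last move
        reward = PAYOFF[(action, opp)]
        best_next = max(q[(opp, a)] for a in 'CD')
        q[(state, action)] += alpha * (reward
                                       + gamma * best_next
                                       - q[(state, action)])
        state, my_prev = opp, action
    return q
\end{verbatim}
With a discount factor close to one, this toy agent tends to learn that mutual cooperation pays more against \textit{TfT} than repeated defection, which is consistent with the behavior reported in these studies.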
More recently, Wang \cite{kedaoRL} extended this study with newly developed structures for the RNN part, testing both finite- and infinite-iteration setups. However, his tests led to essentially the same results that Sandholm and Crites had previously obtained.
Finally, evolutionary and particle swarm algorithms are used by Harper \textit{et al.} in a very extensive study \cite{plosRLdominant} to train strategies to perform well against over 170 distinct opponents even in noisy tournaments.
The literature thus suggests that it is easy for a player to learn to play against a deterministic opponent, while it is much harder to generalize to an opponent with unspecified behavior.
This is a very specific problem that, given the restrictions of the game, cannot be fully solved by machine learning procedures. As a matter of fact, the rules of the game constrain precisely the resource from which these methods draw their strength: information. In particular, we cannot allow the algorithm to know the opponent's moves in advance, their type, how long the game will last, or any other information that would help to solve the game. Furthermore, the intrinsic ``incoherence'' between rational behavior and optimal outcome (the Nash equilibrium versus the Pareto-efficient choice) makes the task extremely difficult to tackle fully with machine learning approaches.
\section{Conclusions and future work} \label{s:conc}
We presented our implementation of the Prisoner's Dilemma and we analyzed the outcomes of several different case studies.
We pointed out that there is no ``best'' strategy for the game: each individual strategy works better when matched against a ``worse'' one. When paired with a mindless strategy such as the probability strategies, \textit{TfT} sinks to its opponent's level; this is why \textit{TfT} is not the ``best'' strategy. In order to win, a player should figure out its opponent's strategy and then pick the one best suited to the situation.
We also reviewed advances in the literature that address the iterated version of the game with machine learning approaches, noting that this is no trivial task.
An important insight is also the tight dependence of the results on the initial population and seed: these factors can change the outcome in unpredictable and unexpected ways. It is not trivial to predict which player will win given an initial population.
Finally, allowing players to change strategy within the tournament does not alter the final results, but only the way the population gets there.
This work can easily be extended in many ways: many new and more complex strategies have been created since Axelrod's original tournaments, and these could be incorporated into the analysis alongside those already available.
The Axelrod library~\cite{Knight2016Axel,axel-lib}, written in Python and also used by \cite{plosRLdominant}, contains more than 270 standard, deterministic and learning-based strategies, and is now the reference framework for studying the prisoner's dilemma.
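For reference, a minimal tournament with this library could look as follows; the sketch assumes the library's documented interface, and the chosen strategies and parameters are arbitrary.
\begin{verbatim}
import axelrod as axl

# Small round-robin tournament between built-in strategies
players = [axl.Cooperator(), axl.Defector(), axl.TitForTat(),
           axl.Grudger(),    # the library's Grim Trigger
           axl.Random()]
tournament = axl.Tournament(players, turns=200, repetitions=10)
results = tournament.play()
print(results.ranked_names)  # strategies sorted by median payoff
\end{verbatim}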
Furthermore, research may be directed at finding and implementing different features or network structures that achieve better results with automated learning. It would also be interesting to extend the players' memory to the whole history of the game, allowing them to make predictions and form beliefs about the type of their opponents.
\balance
\bibliographystyle{IEEEtran}
\bibliography{report}
\onecolumn
\appendix[Additional figures and tables of tournament results] \label{s:appendix}
%\appendices
%\section{Additional figures} \label{a:fig}
%\section{Tables of tournament results} \label{a:tab}
\begin{table}[ht]
\caption{2-players IPD, statistics}
\label{tab:ipd2p}
\centering
\begin{tabular}{ll|rrrr|rrrr} \toprule
\multicolumn{2}{c}{Strategies} & \multicolumn{4}{c}{Scores Player 1} & \multicolumn{4}{c}{Scores Player 2} \\
Player 1 & Player 2 & avg & std & yield & achieve & avg & std & yield & achieve \\ \midrule
Bad & Bad & 50.0 & 0.00 & 100.00 & 20.00 & 50.0 & 0.00 & 100.00 & 20.00 \\
Bad & TitFor2Tat & 58.0 & 0.00 & 100.00 & 23.20 & 48.0 & 0.00 & 96.00 & 19.51 \\
Bad & GrimTrigger & 54.0 & 0.00 & 100.00 & 21.60 & 49.0 & 0.00 & 98.00 & 19.76 \\
Bad & Indifferent & 146.0 & 11.03 & 100.00 & 58.40 & 26.0 & 2.76 & 52.00 & 12.84 \\
Bad & MainlyNice (k=27) & 205.6 & 11.66 & 100.00 & 82.24 & 11.1 & 2.91 & 22.20 & 6.39 \\
Bad & TitForTat & 54.0 & 0.00 & 100.00 & 21.60 & 49.0 & 0.00 & 98.00 & 19.76 \\
Bad & MainlyBad (k=72) & 102.0 & 15.39 & 100.00 & 40.80 & 37.0 & 3.85 & 74.00 & 16.48 \\
Bad & Nice & 250.0 & 0.00 & 100.00 & 100.00 & 0.0 & 0.00 & 0.00 & 0.00 \\
TitFor2Tat & TitFor2Tat & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
TitFor2Tat & GrimTrigger & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
TitFor2Tat & Indifferent & 91.4 & 4.52 & 62.50 & 52.76 & 160.4 & 11.19 & 79.57 & 79.78 \\
TitFor2Tat & MainlyNice (k=27) & 110.3 & 8.54 & 59.34 & 68.97 & 164.3 & 5.80 & 71.78 & 90.45 \\
TitFor2Tat & TitForTat & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
TitFor2Tat & MainlyBad (k=72) & 73.1 & 4.18 & 71.23 & 36.40 & 127.1 & 14.69 & 87.02 & 57.08 \\
TitFor2Tat & Nice & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
GrimTrigger & GrimTrigger & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
GrimTrigger & Indifferent & 149.2 & 11.50 & 98.43 & 60.51 & 30.7 & 4.12 & 53.99 & 15.39 \\
GrimTrigger & MainlyNice (k=27) & 194.8 & 11.15 & 97.57 & 79.75 & 22.3 & 8.22 & 35.21 & 12.68 \\
GrimTrigger & TitForTat & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
GrimTrigger & MainlyBad (k=72) & 101.4 & 12.19 & 98.23 & 41.02 & 41.9 & 3.39 & 75.36 & 18.73 \\
GrimTrigger & Nice & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
Indifferent & Indifferent & 112.3 & 10.21 & 74.56 & 56.17 & 112.3 & 10.21 & 74.56 & 56.17 \\
Indifferent & MainlyNice (k=27) & 150.3 & 16.27 & 77.36 & 75.42 & 97.3 & 12.86 & 64.02 & 54.59 \\
Indifferent & TitForTat & 117.5 & 7.27 & 73.92 & 60.08 & 115.5 & 7.99 & 73.39 & 59.30 \\
Indifferent & MainlyBad (k=72) & 77.5 & 14.15 & 70.26 & 38.68 & 126.5 & 11.59 & 84.81 & 57.50 \\
Indifferent & Nice & 200.8 & 5.00 & 80.32 & 100.00 & 73.8 & 7.49 & 49.62 & 49.20 \\
MainlyNice (k=27) & MainlyNice (k=27) & 132.7 & 12.88 & 67.31 & 75.06 & 132.7 & 12.88 & 67.31 & 75.06 \\
MainlyNice (k=27) & TitForTat & 135.6 & 3.83 & 67.33 & 77.61 & 133.6 & 4.80 & 66.86 & 76.82 \\
MainlyNice (k=27) & MainlyBad (k=72) & 61.9 & 8.84 & 56.42 & 34.92 & 170.4 & 10.44 & 86.87 & 77.27 \\
MainlyNice (k=27) & Nice & 179.8 & 4.33 & 71.92 & 100.00 & 105.3 & 6.50 & 55.26 & 70.20 \\
TitForTat & TitForTat & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
TitForTat & MainlyBad (k=72) & 87.6 & 7.05 & 81.94 & 40.00 & 92.1 & 7.03 & 83.34 & 41.70 \\
TitForTat & Nice & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\
MainlyBad (k=72) & MainlyBad (k=72) & 88.5 & 12.09 & 81.42 & 40.03 & 88.5 & 12.09 & 81.42 & 40.03 \\
MainlyBad (k=72) & Nice & 222.0 & 8.20 & 88.80 & 100.00 & 42.0 & 12.30 & 38.68 & 28.00 \\
Nice & Nice & 150.0 & 0.00 & 60.00 & 100.00 & 150.0 & 0.00 & 60.00 & 100.00 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{2-players IPD, overall yield and achieve}
\label{tab:ipd2pavg}
\centering
\begin{tabular}{l|rr} \toprule
Strategy & yield & achieve \\ \midrule
Bad & 100.00 & 43.09 \\
GrimTrigger & 76.91 & 77.89 \\
Indifferent & 70.73 & 54.95 \\
MainlyBad (k=72) & 82.56 & 49.87 \\
MainlyNice (k=27) & 58.17 & 58.53 \\
Nice & 49.29 & 71.93 \\
TitFor2Tat & 65.45 & 75.29 \\
TitForTat & 68.91 & 77.32 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{50-players IPD, sorted by points, statistics}
\label{tab:ipdmp50}
\centering
\begin{tabular}{l|rrrr|rrrrr} \toprule
& \multicolumn{4}{c}{Points} & \multicolumn{2}{c}{Coop. count} & \multicolumn{2}{c}{Defect count} & \\
Strategy & avg & std & yield & achieve & avg & std & avg & std & Coop. \% \\ \midrule
GrimTrigger & 6406.5 & 80.89 & 77.66 & 73.21 & 1334.5 & 18.42 & 1115.5 & 18.42 & 54.47 \\
GrimTrigger & 6393.8 & 56.80 & 77.91 & 73.90 & 1317.6 & 21.68 & 1132.4 & 21.68 & 53.78 \\
GrimTrigger & 6385.3 & 72.00 & 77.06 & 73.85 & 1336.4 & 23.56 & 1113.6 & 23.56 & 54.55 \\
GrimTrigger & 6382.0 & 88.09 & 77.76 & 73.59 & 1327.7 & 23.79 & 1122.3 & 23.79 & 54.19 \\
GrimTrigger & 6380.4 & 92.53 & 78.39 & 73.35 & 1343.3 & 37.35 & 1106.7 & 37.35 & 54.83 \\
TitForTat & 5884.6 & 26.69 & 71.37 & 72.60 & 1664.8 & 10.76 & 785.2 & 10.76 & 67.95 \\
TitForTat & 5882.7 & 27.79 & 71.41 & 72.55 & 1662.7 & 14.49 & 787.3 & 14.49 & 67.87 \\
TitForTat & 5881.4 & 26.80 & 70.99 & 73.34 & 1665.1 & 11.69 & 784.9 & 11.69 & 67.96 \\
TitForTat & 5880.2 & 34.27 & 71.13 & 72.82 & 1662.7 & 14.85 & 787.3 & 14.85 & 67.87 \\
TitForTat & 5878.9 & 32.66 & 71.47 & 72.29 & 1660.8 & 13.31 & 789.2 & 13.31 & 67.79 \\
TitForTat & 5875.6 & 32.89 & 71.10 & 72.58 & 1661.3 & 13.60 & 788.7 & 13.60 & 67.81 \\
TitForTat & 5862.2 & 22.34 & 71.23 & 72.51 & 1654.7 & 9.78 & 795.3 & 9.78 & 67.54 \\
TitForTat & 5861.2 & 27.54 & 71.27 & 72.66 & 1655.7 & 10.52 & 794.3 & 10.52 & 67.58 \\
TitFor2Tat & 5648.6 & 16.61 & 67.62 & 72.29 & 1829.7 & 10.07 & 620.3 & 10.07 & 74.68 \\
TitFor2Tat & 5644.3 & 18.15 & 67.25 & 71.73 & 1834.4 & 11.92 & 615.6 & 11.92 & 74.87 \\
TitFor2Tat & 5639.4 & 18.24 & 67.14 & 72.01 & 1833.6 & 18.54 & 616.4 & 18.54 & 74.84 \\
TitFor2Tat & 5637.9 & 22.20 & 67.10 & 72.51 & 1821.5 & 14.67 & 628.5 & 14.67 & 74.35 \\
TitFor2Tat & 5636.4 & 34.72 & 67.09 & 72.63 & 1824.9 & 19.76 & 625.1 & 19.76 & 74.49 \\
TitFor2Tat & 5636.4 & 18.00 & 67.33 & 71.98 & 1825.7 & 8.56 & 624.3 & 8.56 & 74.52 \\
TitFor2Tat & 5621.5 & 29.34 & 67.45 & 71.36 & 1817.2 & 17.42 & 632.8 & 17.42 & 74.17 \\
MainlyBad (k=78) & 5456.8 & 40.52 & 85.00 & 49.18 & 545.4 & 22.11 & 1904.6 & 22.11 & 22.26 \\
MainlyBad (k=85) & 5424.8 & 59.18 & 89.74 & 47.57 & 373.9 & 17.92 & 2076.1 & 17.92 & 15.26 \\
MainlyBad (k=81) & 5416.4 & 60.69 & 87.56 & 47.69 & 469.9 & 15.43 & 1980.1 & 15.43 & 19.18 \\
MainlyBad (k=81) & 5411.3 & 65.52 & 87.63 & 48.21 & 459.6 & 19.84 & 1990.4 & 19.84 & 18.76 \\
MainlyBad (k=70) & 5396.5 & 63.52 & 80.99 & 50.32 & 728.7 & 17.36 & 1721.3 & 17.36 & 29.74 \\
MainlyBad (k=97) & 5387.7 & 64.01 & 97.66 & 45.52 & 74.5 & 6.52 & 2375.5 & 6.52 & 3.04 \\
MainlyBad (k=99) & 5379.8 & 62.25 & 99.21 & 43.99 & 26.3 & 5.77 & 2423.7 & 5.77 & 1.07 \\
Bad & 5362.4 & 78.10 & 100.00 & 44.00 & 0.0 & 0.00 & 2450.0 & 0.00 & 0.00 \\
Bad & 5359.2 & 41.10 & 100.00 & 44.33 & 0.0 & 0.00 & 2450.0 & 0.00 & 0.00 \\
MainlyBad (k=98) & 5352.7 & 35.40 & 98.74 & 43.58 & 48.7 & 5.46 & 2401.3 & 5.46 & 1.99 \\
Bad & 5343.2 & 41.91 & 100.00 & 43.87 & 0.0 & 0.00 & 2450.0 & 0.00 & 0.00 \\
Bad & 5330.8 & 33.93 & 100.00 & 43.51 & 0.0 & 0.00 & 2450.0 & 0.00 & 0.00 \\
Bad & 5322.0 & 21.00 & 100.00 & 43.61 & 0.0 & 0.00 & 2450.0 & 0.00 & 0.00 \\
Indifferent & 5275.9 & 89.52 & 69.73 & 53.26 & 1218.6 & 28.88 & 1231.4 & 28.88 & 49.74 \\
Indifferent & 5265.9 & 89.16 & 69.80 & 53.75 & 1219.9 & 21.74 & 1230.1 & 21.74 & 49.79 \\
Indifferent & 5265.5 & 67.17 & 69.57 & 53.43 & 1221.6 & 15.01 & 1228.4 & 15.01 & 49.86 \\
Indifferent & 5248.7 & 56.13 & 69.04 & 53.87 & 1222.9 & 21.45 & 1227.1 & 21.45 & 49.91 \\
Indifferent & 5240.5 & 62.17 & 69.48 & 52.91 & 1217.9 & 29.42 & 1232.1 & 29.42 & 49.71 \\
Indifferent & 5240.4 & 62.79 & 68.70 & 54.14 & 1220.1 & 32.81 & 1229.9 & 32.81 & 49.80 \\
Indifferent & 5232.4 & 82.32 & 70.66 & 54.88 & 1223.4 & 26.90 & 1226.6 & 26.90 & 49.93 \\
MainlyNice (k=42) & 5138.0 & 74.36 & 65.63 & 53.96 & 1421.1 & 25.18 & 1028.9 & 25.18 & 58.00 \\
Nice & 4948.5 & 38.81 & 46.50 & 67.35 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
Nice & 4943.7 & 28.55 & 45.92 & 67.43 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
Nice & 4942.5 & 42.83 & 46.37 & 67.76 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
Nice & 4941.6 & 46.10 & 46.37 & 68.24 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
Nice & 4940.1 & 32.53 & 46.17 & 67.22 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
Nice & 4926.0 & 43.24 & 46.05 & 66.86 & 2450.0 & 0.00 & 0.0 & 0.00 & 100.00 \\
MainlyNice (k=17) & 4760.4 & 67.95 & 52.93 & 59.70 & 2033.3 & 20.59 & 416.7 & 20.59 & 82.99 \\
MainlyNice (k=3) & 4695.9 & 121.06 & 45.29 & 60.44 & 2379.4 & 9.16 & 70.6 & 9.16 & 97.12 \\
MainlyNice (k=8) & 4592.4 & 108.72 & 48.55 & 61.11 & 2260.2 & 10.91 & 189.8 & 10.91 & 92.25 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{10-players IPD, static strategies, sorted by points, statistics}
\label{tab:ipdmp10stat}
\centering
\begin{tabular}{l|rrr|rrr} \toprule
Strategy & Points & yield & achieve & C count & D count & Coop. \% \\ \midrule
TitForTat & 946 & 76.89 & 64.34 & 254 & 196 & 56.44 \\
TitForTat & 946 & 76.89 & 64.34 & 254 & 196 & 56.44 \\
TitForTat & 946 & 76.89 & 64.34 & 254 & 196 & 56.44 \\
TitForTat & 946 & 76.89 & 64.34 & 254 & 196 & 56.44 \\
Bad & 866 & 100.00 & 38.49 & 0 & 450 & 0.00 \\
Bad & 866 & 100.00 & 38.49 & 0 & 450 & 0.00 \\
Bad & 866 & 100.00 & 38.49 & 0 & 450 & 0.00 \\
Bad & 866 & 100.00 & 38.49 & 0 & 450 & 0.00 \\
Nice & 750 & 33.33 & 55.56 & 450 & 0 & 100.00 \\
Nice & 750 & 33.33 & 55.56 & 450 & 0 & 100.00 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=.6\columnwidth]{../img/ipdmp/ipdmp10-table-det}
\caption{10-players IPD, static strategies, software results \cite{demosw}}
\label{fig:ipdmp10statsw}
\end{figure}
\begin{table}[ht]
\caption{rMPIPD, constant pop. of 50, strategy evolution through repetitions}
\label{tab:ripdmp-const}
\centering
\begin{tabular}{l|ccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 \\ \midrule
\textbf{GrimTrigger} & 5 & 5 & 9 & 17 & 30 \\
TitFor2Tat & 7 & 7 & 8 & 6 & 2 \\
TitForTat & 8 & 8 & 15 & 19 & 14 \\
Nice & 6 & 6 & 3 & 0 & 0 \\
MainlyNice (k=3) & 1 & 1 & 1 & 0 & 0 \\
MainlyNice (k=8) & 1 & 1 & 0 & 0 & 0 \\
MainlyNice (k=17) & 1 & 1 & 0 & 0 & 0 \\
MainlyNice (k=42) & 1 & 1 & 1 & 1 & 1 \\
Indifferent & 7 & 7 & 3 & 0 & 0 \\
MainlyBad (k=70) & 1 & 1 & 0 & 0 & 0 \\
MainlyBad (k=78) & 1 & 1 & 1 & 0 & 0 \\
MainlyBad (k=81) & 2 & 2 & 1 & 1 & 0 \\
MainlyBad (k=85) & 1 & 1 & 0 & 0 & 0 \\
MainlyBad (k=97) & 1 & 1 & 1 & 1 & 0 \\
MainlyBad (k=98) & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=99) & 1 & 1 & 1 & 1 & 1 \\
Bad & 5 & 5 & 5 & 3 & 1 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{rMPIPD, constant pop. of 50, $seed = 24$, strategy evolution}
\label{tab:ripdmp-const-24}
\centering
\begin{tabular}{l|rrrrrrrr} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \midrule
GrimTrigger & 1 & 1 & 2 & 3 & 6 & 11 & 15 & 13 \\
TitFor2Tat & 5 & 5 & 3 & 2 & 0 & 0 & 0 & 0 \\
\textbf{TitForTat}& 9 & 9 & 14 & 23 & 35 & 35 & 35 & 37 \\
Nice & 3 & 3 & 2 & 1 & 1 & 0 & 0 & 0 \\
MainlyNice (k=4) & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
MainlyNice (k=5) & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
MainlyNice (k=28) & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
MainlyNice (k=34) & 2 & 2 & 2 & 2 & 0 & 0 & 0 & 0 \\
Indifferent & 5 & 5 & 3 & 2 & 2 & 0 & 0 & 0 \\
MainlyBad (k=51) & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
MainlyBad (k=52) & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=56) & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=58) & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=61) & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=70) & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=92) & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 \\
MainlyBad (k=93) & 1 & 1 & 2 & 1 & 1 & 0 & 0 & 0 \\
MainlyBad (k=96) & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
MainlyBad (k=97) & 1 & 1 & 2 & 2 & 1 & 1 & 0 & 0 \\
Bad & 11 & 11 & 11 & 9 & 4 & 3 & 0 & 0 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{rMPIPD, constant pop. of 50, $seed = 1209$, strategy evolution}
\label{tab:ripdmp-const-1209}
\centering
\begin{tabular}{l|rrrrr} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 \\ \midrule
GrimTrigger & 1 & 1 & 1 & 2 & 3 \\
TitFor2Tat & 5 & 5 & 4 & 1 & 0 \\
TitForTat & 8 & 8 & 6 & 2 & 1 \\
Nice & 6 & 6 & 2 & 2 & 1 \\
MainlyNice (k=4) & 2 & 2 & 2 & 2 & 0 \\
MainlyNice (k=12) & 1 & 1 & 1 & 1 & 0 \\
MainlyNice (k=24) & 1 & 1 & 0 & 0 & 0 \\
MainlyNice (k=31) & 1 & 1 & 1 & 0 & 0 \\
MainlyNice (k=34) & 1 & 1 & 1 & 0 & 0 \\
Indifferent & 11 & 11 & 9 & 6 & 3 \\
MainlyBad (k=58) & 1 & 1 & 1 & 1 & 0 \\
MainlyBad (k=63) & 1 & 1 & 2 & 2 & 1 \\
MainlyBad (k=74) & 1 & 1 & 2 & 1 & 1 \\
MainlyBad (k=76) & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=82) & 1 & 1 & 2 & 3 & 2 \\
MainlyBad (k=99) & 1 & 1 & 2 & 3 & 4 \\
\textbf{Bad} & 7 & 7 & 13 & 23 & 33 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{rMPIPD, increasing pop. (alternative 1), strategy evolution through repetitions}
\label{tab:ripdmp-incr}
\centering
\begin{tabular}{l|cccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \midrule
\textbf{GrimTrigger} & 5 & 9 & 18 & 35 & 66 & 125 \\
TitFor2Tat & 7 & 12 & 20 & 31 & 44 & 58 \\
\textbf{TitForTat} & 8 & 14 & 26 & 42 & 72 & 119 \\
Nice & 6 & 6 & 8 & 12 & 18 & 23 \\
MainlyNice (k=3) & 1 & 1 & 1 & 1 & 1 & 2 \\
MainlyNice (k=8) & 1 & 1 & 1 & 2 & 4 & 5 \\
MainlyNice (k=17) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=42) & 1 & 2 & 3 & 5 & 6 & 8 \\
Indifferent & 7 & 7 & 9 & 10 & 11 & 13 \\
MainlyBad (k=70) & 1 & 1 & 2 & 3 & 4 & 4 \\
MainlyBad (k=78) & 1 & 2 & 3 & 4 & 4 & 4 \\
MainlyBad (k=81) & 2 & 3 & 4 & 5 & 6 & 7 \\
MainlyBad (k=85) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=97) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=98) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=99) & 1 & 2 & 2 & 2 & 2 & 2 \\
Bad & 5 & 8 & 9 & 10 & 10 & 10 \\ \midrule
Population size & 50 & 72 & 110 & 166 & 252 & 384 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{rMPIPD, increasing pop. (alternative 2), strategy evolution through repetitions}
\label{tab:ripdmp-incrA2}
\centering
\begin{tabular}{l|cccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \midrule
\textbf{GrimTrigger} & 5 & 7 & 14 & 26 & 44 & 65 \\
TitFor2Tat & 7 & 12 & 17 & 22 & 27 & 34 \\
\textbf{TitForTat} & 8 & 14 & 24 & 35 & 42 & 51 \\
Nice & 6 & 7 & 7 & 8 & 9 & 10 \\
MainlyNice (k=3) & 1 & 1 & 1 & 1 & 2 & 3 \\
MainlyNice (k=8) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=17) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=42) & 1 & 2 & 2 & 2 & 3 & 4 \\
Indifferent & 7 & 7 & 9 & 11 & 15 & 19 \\
MainlyBad (k=70) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=78) & 1 & 2 & 2 & 2 & 2 & 3 \\
MainlyBad (k=81) & 2 & 3 & 4 & 5 & 5 & 5 \\
MainlyBad (k=85) & 1 & 1 & 2 & 2 & 3 & 3 \\
MainlyBad (k=97) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=98) & 1 & 1 & 2 & 3 & 4 & 4 \\
MainlyBad (k=99) & 1 & 2 & 2 & 3 & 4 & 4 \\
Bad & 5 & 9 & 11 & 13 & 15 & 19 \\ \midrule
Population size & 50 & 72 & 101 & 137 & 179 & 228 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt2/ripdmp-evolution-increasing-pop-50}
\caption{Evolution of an increasing pop., from 50 players (alternative 2)}
\label{fig:incrRa2}
\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt2/ripdmp-scores-increasing-pop-50-r0}
% \caption{First iteration scores ($it=0$) (alternative 2)}
% \label{fig:incrFIa2}
%\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt2/ripdmp-scores-increasing-pop-50-r2}
% \caption{Middle iteration scores ($it=2$) (alternative 2)}
% \label{fig:incrMIa2}
%\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt2/ripdmp-scores-increasing-pop-50-r4}
% \caption{Last iteration scores ($it=4$) (alternative 2)}
% \label{fig:incrLIa2}
%\end{figure}
\begin{table}[ht]
\caption{rMPIPD, increasing pop. (alternative 3), strategy evolution through repetitions}
\label{tab:ripdmp-incrA3}
\centering
\begin{tabular}{l|cccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \midrule
GrimTrigger & 5 & 7 & 10 & 12 & 16 & 18 \\
\textbf{TitFor2Tat} & 7 & 14 & 26 & 42 & 70 & 113 \\
\textbf{TitForTat} & 8 & 16 & 32 & 64 & 128 & 256 \\
Nice & 6 & 11 & 20 & 31 & 42 & 56 \\
MainlyNice (k=3) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=8) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=17) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=42) & 1 & 1 & 1 & 1 & 1 & 1 \\
Indifferent & 7 & 11 & 17 & 26 & 31 & 36 \\
MainlyBad (k=70) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=78) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=81) & 2 & 2 & 2 & 2 & 2 & 2 \\
MainlyBad (k=85) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=97) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=98) & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=99) & 1 & 1 & 1 & 1 & 1 & 1 \\
Bad & 5 & 10 & 16 & 21 & 26 & 32 \\ \midrule
Population size & 50 & 81 & 133 & 208 & 325 & 523 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt3/ripdmp-evolution-increasing-pop-50}
\caption{Evolution of an increasing pop., from 50 players (alternative 3)}
\label{fig:incrRa3}
\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt3/ripdmp-scores-increasing-pop-50-r0}
% \caption{First iteration scores ($it=0$) (alternative 3)}
% \label{fig:incrFIa3}
%\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt3/ripdmp-scores-increasing-pop-50-r2}
% \caption{Middle iteration scores ($it=2$) (alternative 3)}
% \label{fig:incrMIa3}
%\end{figure}
%\begin{figure}
% \centering
% \includegraphics[width=.7\columnwidth]{../img/ripdmp-incr/alt3/ripdmp-scores-increasing-pop-50-r4}
% \caption{Last iteration scores ($it=4$) (alternative 3)}
% \label{fig:incrLIa3}
%\end{figure}
\begin{table}[ht]
\caption{rMPIPD, changing strategies (alternative 1), strategy evolution through repetitions.\\
Blank cell means player not present. At the end, strategy changes count for each iteration.}
\label{tab:cripdmp}
\centering
\begin{minipage}{.55\textwidth}
\begin{tabular}{l|cccccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \midrule
\textbf{GrimTrigger} & 5 & 9 & 15 & 22 & 31 & 41 & 54 & 72 \\
\textbf{TitFor2Tat} & 7 & 12 & 17 & 23 & 34 & 50 & 87 & 129 \\
\textbf{TitForTat} & 8 & 14 & 16 & 20 & 32 & 56 & 82 & 132 \\
Nice & 6 & 6 & 8 & 9 & 12 & 13 & 14 & 15 \\
MainlyNice (k=3) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=8) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=17) & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 3 \\
MainlyNice (k=42) & 1 & 2 & 3 & 6 & 9 & 12 & 12 & 12 \\
Indifferent & 7 & 7 & 9 & 11 & 12 & 14 & 18 & 24 \\
MainlyBad (k=70) & 1 & 1 & 1 & 1 & 3 & 4 & 6 & 9 \\
MainlyBad (k=78) & 1 & 2 & 2 & 2 & 3 & 3 & 3 & 4 \\
MainlyBad (k=81) & 2 & 3 & 4 & 5 & 5 & 5 & 5 & 6 \\
MainlyBad (k=85) & 1 & 1 & 1 & 2 & 3 & 3 & 3 & 3 \\
MainlyBad (k=97) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=98) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=99) & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
Bad & 5 & 8 & 10 & 11 & 12 & 13 & 14 & 14 \\
MainlyNice (k=11) & & & 1 & 2 & 2 & 2 & 2 & 3 \\
MainlyNice (k=20) & & & 1 & 2 & 3 & 4 & 5 & 7 \\
MainlyNice (k=24) & & & 1 & 1 & 2 & 2 & 5 & 6 \\
MainlyNice (k=29) & & & 1 & 1 & 1 & 1 & 3 & 5 \\
MainlyNice (k=38) & & & 1 & 2 & 2 & 3 & 4 & 4 \\
MainlyNice (k=40) & & & 1 & 3 & 5 & 8 & 9 & 14 \\
MainlyNice (k=43) & & & 1 & 1 & 1 & 3 & 6 & 14 \\
MainlyBad (k=51) & & & 1 & 3 & 5 & 6 & 9 & 13 \\
MainlyBad (k=56) & & & 1 & 2 & 3 & 4 & 5 & 8 \\
MainlyBad (k=62) & & & 1 & 2 & 2 & 3 & 3 & 4 \\
MainlyBad (k=63) & & & 1 & 2 & 4 & 4 & 5 & 7 \\
MainlyBad (k=68) & & & 1 & 1 & 1 & 5 & 8 & 13 \\
MainlyBad (k=69) & & & 1 & 3 & 4 & 5 & 6 & 11 \\
MainlyBad (k=76) & & & 1 & 1 & 1 & 1 & 1 & 4 \\
MainlyBad (k=87) & & & 1 & 1 & 3 & 4 & 4 & 4 \\
MainlyNice (k=25) & & & & 1 & 1 & 1 & 1 & 3 \\
MainlyNice (k=27) & & & & 1 & 1 & 2 & 3 & 3 \\
MainlyNice (k=28) & & & & 1 & 1 & 1 & 2 & 5 \\
MainlyNice (k=30) & & & & 1 & 1 & 2 & 4 & 7 \\
MainlyNice (k=32) & & & & 1 & 1 & 4 & 5 & 7 \\
MainlyNice (k=35) & & & & 1 & 1 & 2 & 2 & 3 \\
MainlyNice (k=36) & & & & 1 & 2 & 2 & 2 & 4 \\
MainlyBad (k=52) & & & & 1 & 2 & 2 & 6 & 10 \\
MainlyBad (k=55) & & & & 1 & 2 & 2 & 5 & 8 \\
MainlyBad (k=59) & & & & 1 & 3 & 4 & 6 & 7 \\
MainlyBad (k=60) & & & & 1 & 2 & 5 & 10 & 15 \\
MainlyBad (k=61) & & & & 1 & 4 & 7 & 8 & 14 \\
MainlyBad (k=71) & & & & 1 & 1 & 1 & 4 & 5 \\
MainlyBad (k=72) & & & & 1 & 1 & 1 & 2 & 5 \\
MainlyBad (k=84) & & & & 1 & 1 & 2 & 4 & 6 \\
MainlyBad (k=86) & & & & 2 & 4 & 4 & 4 & 4 \\ \midrule
Population size & 50 & 72 & 108 & 164 & 257 & 385 & 575 & 868 \\
To more cooperative & 19 & 27 & 46 & 70 & 105 & 139 & 245 & 365 \\
To less cooperative & 19 & 26 & 46 & 63 & 101 & 160 & 216 & 342 \\ \bottomrule
\end{tabular}
\end{minipage} \quad
\begin{minipage}{.35\textwidth}
\begin{tabular}{l|cccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 4 & 5 & 6 & 7 \\ \midrule
MainlyNice (k=7) & 1 & 1 & 1 & 2 \\
MainlyNice (k=23) & 1 & 3 & 4 & 5 \\
MainlyNice (k=34) & 1 & 2 & 4 & 6 \\
MainlyNice (k=37) & 2 & 3 & 5 & 6 \\
MainlyNice (k=39) & 3 & 4 & 8 & 10 \\
MainlyNice (k=46) & 1 & 2 & 3 & 6 \\
MainlyNice (k=47) & 2 & 5 & 9 & 16 \\
MainlyNice (k=49) & 3 & 8 & 11 & 16 \\
MainlyBad (k=54) & 1 & 1 & 2 & 7 \\
MainlyBad (k=66) & 1 & 2 & 3 & 6 \\
MainlyBad (k=67) & 1 & 1 & 2 & 3 \\
MainlyBad (k=73) & 2 & 2 & 3 & 6 \\
MainlyBad (k=74) & 1 & 1 & 3 & 5 \\
MainlyBad (k=77) & 1 & 3 & 5 & 7 \\
MainlyBad (k=80) & 1 & 2 & 4 & 6 \\
MainlyBad (k=90) & 1 & 2 & 2 & 3 \\
MainlyBad (k=94) & 1 & 1 & 2 & 3 \\
MainlyBad (k=95) & 2 & 4 & 4 & 4 \\
MainlyNice (k=15) & & 2 & 2 & 3 \\
MainlyNice (k=16) & & 1 & 1 & 4 \\
MainlyNice (k=22) & & 1 & 1 & 2 \\
MainlyNice (k=26) & & 1 & 2 & 4 \\
MainlyNice (k=31) & & 1 & 2 & 4 \\
MainlyNice (k=33) & & 1 & 2 & 3 \\
MainlyNice (k=41) & & 1 & 1 & 3 \\
MainlyNice (k=44) & & 2 & 5 & 7 \\
MainlyNice (k=45) & & 1 & 3 & 5 \\
MainlyNice (k=48) & & 1 & 5 & 9 \\
MainlyBad (k=57) & & 1 & 1 & 1 \\
MainlyBad (k=64) & & 1 & 4 & 6 \\
MainlyBad (k=65) & & 3 & 8 & 13 \\
MainlyBad (k=82) & & 1 & 2 & 2 \\
MainlyBad (k=88) & & 1 & 1 & 2 \\
MainlyNice (k=5) & & & 1 & 1 \\
MainlyNice (k=13) & & & 1 & 1 \\
MainlyNice (k=18) & & & 1 & 1 \\
MainlyBad (k=53) & & & 4 & 10 \\
MainlyBad (k=58) & & & 1 & 3 \\
MainlyBad (k=75) & & & 1 & 3 \\
MainlyBad (k=89) & & & 1 & 2 \\
MainlyBad (k=91) & & & 1 & 1 \\
MainlyNice (k=10) & & & & 1 \\
MainlyNice (k=12) & & & & 2 \\
MainlyNice (k=19) & & & & 1 \\
MainlyBad (k=79) & & & & 1 \\
MainlyBad (k=83) & & & & 2 \\
MainlyBad (k=92) & & & & 1 \\
MainlyBad (k=93) & & & & 1 \\ \bottomrule
\end{tabular}
\end{minipage}
\end{table}
\begin{table}[ht]
\caption{rMPIPD, changing strategies (alternative 2), strategy evolution through repetitions.\\
Blank cell means player not present. At the end, strategy changes count for each iteration.}
\label{tab:cripdmpA2}
\centering
\begin{minipage}{.55\textwidth}
\begin{tabular}{l|cccccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \midrule
GrimTrigger & 5 & 9 & 12 & 18 & 21 & 27 & 34 & 43 \\
\textbf{TitFor2Tat} & 7 & 12 & 23 & 32 & 36 & 38 & 48 & 61 \\
\textbf{TitForTat} & 8 & 14 & 22 & 26 & 34 & 44 & 59 & 79 \\
Nice & 6 & 6 & 9 & 9 & 10 & 10 & 10 & 10 \\
MainlyNice (k=3) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=8) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=17) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyNice (k=42) & 1 & 2 & 2 & 3 & 4 & 5 & 5 & 6 \\
Indifferent & 7 & 7 & 7 & 9 & 12 & 18 & 25 & 35 \\
MainlyBad (k=70) & 1 & 1 & 1 & 2 & 3 & 4 & 6 & 12 \\
MainlyBad (k=78) & 1 & 2 & 2 & 2 & 3 & 4 & 5 & 6 \\
MainlyBad (k=81) & 2 & 3 & 4 & 6 & 8 & 10 & 13 & 15 \\
MainlyBad (k=85) & 1 & 1 & 1 & 2 & 4 & 5 & 6 & 8 \\
MainlyBad (k=97) & 1 & 1 & 2 & 3 & 5 & 6 & 7 & 9 \\
MainlyBad (k=98) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
MainlyBad (k=99) & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
\textbf{Bad} & 5 & 8 & 9 & 14 & 25 & 41 & 66 & 110 \\
MainlyNice (k=34) & & & 1 & 2 & 2 & 4 & 5 & 7 \\
MainlyNice (k=35) & & & 1 & 1 & 4 & 6 & 8 & 11 \\
MainlyNice (k=36) & & & 1 & 2 & 5 & 8 & 10 & 14 \\
MainlyNice (k=44) & & & 1 & 2 & 3 & 4 & 5 & 7 \\
MainlyNice (k=48) & & & 2 & 2 & 4 & 7 & 9 & 14 \\
MainlyBad (k=57) & & & 1 & 2 & 4 & 8 & 11 & 16 \\
MainlyBad (k=60) & & & 1 & 2 & 3 & 5 & 6 & 11 \\
MainlyBad (k=61) & & & 1 & 1 & 2 & 4 & 6 & 11 \\
MainlyBad (k=71) & & & 1 & 2 & 4 & 5 & 8 & 12 \\
MainlyNice (k=30) & & & & 1 & 2 & 2 & 2 & 2 \\
MainlyNice (k=39) & & & & 3 & 5 & 8 & 11 & 15 \\
MainlyNice (k=41) & & & & 1 & 2 & 3 & 4 & 6 \\
MainlyNice (k=43) & & & & 1 & 2 & 3 & 4 & 5 \\
MainlyNice (k=45) & & & & 1 & 3 & 5 & 8 & 10 \\
MainlyNice (k=47) & & & & 1 & 2 & 4 & 8 & 13 \\ \midrule
Population size & 50 & 72 & 110 & 161 & 245 & 373 & 553 & 833 \\
To more cooperative & 11 & 16 & 19 & 19 & 39 & 72 & 97 & 155 \\
To less cooperative & 11 & 24 & 38 & 55 & 63 & 93 & 139 & 204 \\ \bottomrule
\end{tabular}
\end{minipage} \quad
\begin{minipage}{.35\textwidth}
\begin{tabular}{l|ccccc} \toprule
$\downarrow$ Strategy -- Iter $\rightarrow$ & 3 & 4 & 5 & 6 & 7 \\ \midrule
MainlyBad (k=55) & 1 & 1 & 1 & 4 & 11 \\
MainlyBad (k=56) & 1 & 2 & 8 & 15 & 23 \\
MainlyBad (k=62) & 1 & 2 & 5 & 9 & 14 \\
MainlyBad (k=64) & 1 & 3 & 6 & 11 & 19 \\
MainlyBad (k=72) & 1 & 4 & 6 & 10 & 15 \\
MainlyNice (k=37) & & 1 & 2 & 4 & 6 \\
MainlyNice (k=38) & & 1 & 2 & 3 & 5 \\
MainlyNice (k=46) & & 1 & 3 & 4 & 5 \\
MainlyNice (k=49) & & 1 & 2 & 3 & 4 \\
MainlyBad (k=52) & & 1 & 6 & 14 & 19 \\
MainlyBad (k=53) & & 3 & 6 & 9 & 15 \\
MainlyBad (k=54) & & 1 & 2 & 4 & 10 \\
MainlyBad (k=58) & & 1 & 1 & 3 & 6 \\
MainlyBad (k=59) & & 1 & 3 & 6 & 10 \\
MainlyBad (k=67) & & 1 & 5 & 10 & 17 \\
MainlyBad (k=68) & & 1 & 4 & 7 & 13 \\
MainlyBad (k=69) & & 1 & 2 & 5 & 10 \\
MainlyBad (k=74) & & 1 & 2 & 4 & 6 \\
MainlyNice (k=20) & & & 1 & 1 & 1 \\
MainlyNice (k=27) & & & 1 & 2 & 2 \\
MainlyBad (k=51) & & & 2 & 4 & 9 \\
MainlyBad (k=63) & & & 1 & 3 & 7 \\
MainlyBad (k=65) & & & 2 & 7 & 14 \\
MainlyBad (k=66) & & & 3 & 4 & 8 \\
MainlyBad (k=75) & & & 1 & 3 & 6 \\
MainlyBad (k=77) & & & 1 & 3 & 6 \\
MainlyBad (k=80) & & & 1 & 3 & 3 \\
MainlyNice (k=26) & & & & 1 & 2 \\
MainlyNice (k=29) & & & & 1 & 2 \\
MainlyBad (k=79) & & & & 1 & 3 \\
MainlyNice (k=31) & & & & & 1 \\
MainlyNice (k=40) & & & & & 2 \\
MainlyBad (k=73) & & & & & 2 \\
MainlyBad (k=82) & & & & & 1 \\
MainlyBad (k=86) & & & & & 1 \\
MainlyBad (k=92) & & & & & 1 \\ \bottomrule
\end{tabular}
\end{minipage}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{../img/cipdmp-incr/alt2/cipdmp-evolution-increasing-pop-50}
\caption{Evolution of an increasing pop., from 50 players, with changing strategies (alternative 2)}
\label{fig:incrC2}
\end{figure}
%\begin{figure}[!ht]
% \centering
% \includegraphics[width=.7\columnwidth]{../img/cipdmp-incr/alt2/cipdmp-scores-increasing-pop-50-r0}
% \caption{First iteration scores ($it=0$) (alternative 2)}
% \label{fig:incrCFI2}
%\end{figure}
%\begin{figure}[!ht]
% \centering
% \includegraphics[width=.7\columnwidth]{../img/cipdmp-incr/alt2/cipdmp-scores-increasing-pop-50-r2}
% \caption{Middle iteration scores ($it=2$) (alternative 2)}
% \label{fig:incrCMI2}
%\end{figure}
%\begin{figure}[!ht]
% \centering
% \includegraphics[width=.7\columnwidth]{../img/cipdmp-incr/alt2/cipdmp-scores-increasing-pop-50-r4}
% \caption{Last iteration scores ($it=4$) (alternative 2)}
% \label{fig:incrCLI2}
%\end{figure}
\end{document}
\documentclass[9pt]{beamer}
\usepackage[utf8]{inputenc}
\usepackage{txfonts}
\usepackage[english]{babel}
\usepackage{xcolor}
\usepackage{iwona}
\usetheme{CambridgeUS}
\usecolortheme{beaver}
\setbeamertemplate{headline}{}
% \setbeamertemplate{frametitle}{\insertframetitle}
\setbeamertemplate{navigation symbols}{}
\setbeamertemplate{itemize item}{\scriptsize\raise1.25pt\hbox{\donotcoloroutermaths$\blacktriangleright$}}
\setbeamertemplate{itemize subitem}{\tiny\raise1.5pt\hbox{\donotcoloroutermaths$\blacktriangleright$}}
\setbeamertemplate{itemize subsubitem}{\tiny\raise1.5pt\hbox{\donotcoloroutermaths$\blacktriangleright$}}
\setbeamertemplate{enumerate item}{\insertenumlabel.}
\setbeamertemplate{enumerate subitem}{\insertenumlabel.\insertsubenumlabel}
\setbeamertemplate{enumerate subsubitem}{\insertenumlabel.\insertsubenumlabel.\insertsubsubenumlabel}
\setbeamertemplate{enumerate mini template}{\insertenumlabel}
% \setbeamertemplate{itemize items}[square]
% \setbeamertemplate{items}[square]
% \setbeamertemplate{footline}[page number]{}
\newcommand{\bluemph}[1]{\structure{\emph{#1}}}
\newcommand{\redemph}[1]{\alert{\emph{#1}}}
\newcommand{\bluebf}[1]{\structure{\textbf{#1}}}
\newcommand{\redbf}[1]{\alert{\textbf{#1}}}
\newcommand\denote[1]{\llbracket #1 \rrbracket}
\newcommand\fesi{Fe-Si}
% Structure
\newenvironment{remark}{\footnotesize \begin{description}\item[\emph{Remark}:]}{\end{description}}
\title{Formal verification of hardware synthesis}%
\author[T. Braibant]{Thomas Braibant$^1$ \and Adam Chlipala$^2$}
\institute[Inria]{Inria$^1$ (Gallium) \qquad MIT CSAIL$^2$}
\date[05/2013]{SPADES's seminar}
\setbeamercovered{transparent}
\setbeamerfont{frametitle}{size={\normalsize}}
% \usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{mathpartir}
\usepackage{listings}
\usepackage{graphicx}
\definecolor{ltblue}{rgb}{0,0.4,0.4}
\definecolor{dkblue}{rgb}{0,0.1,0.6}
\definecolor{dkgreen}{rgb}{0,0.4,0}
\definecolor{dkviolet}{rgb}{0.3,0,0.5}
\definecolor{dkred}{rgb}{0.5,0,0}
\usepackage{lstcoq}
\usepackage{lstocaml}
\newenvironment{twolistings}%
{\noindent\begin{tabular*}{\linewidth}{@{}c@{\extracolsep{\fill}}c@{}}}%
{\end{tabular*}}
% \AtBeginSection[]
% {
% \begin{frame}
% \frametitle{Outline}
% \tableofcontents[currentsection,currentsubsection]
% \end{frame}
% }
\begin{document}
% \newcommand \blue[1]{{\color{red!80!black}{#1}}}
\newcommand \orange[1]{{\color{orange}{#1}}}
% \newcommand \red[1]{{\color{red}{#1}}}
% \newcommand \grey[1]{{\color{gray}{#1}}}
% \newcommand \green[1]{{\color{violet}{#1}}}
% \newcommand \white[1]{{\color{white}{#1}}}
\newcommand\parenthesis[1] {
\begin{flushright}
{\scriptsize \redemph{{{{ #1}}}}}
\end{flushright}
}
\begin{frame}
\center
\titlepage
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\frametitle{Context: formal verification of hardware}
\begin{itemize}
% \item Formally verified everything:
% \begin{itemize}
% \item Compilers (CompCert [2006])
% \item Operating Systems (Gypsy [1989]; seL4 [2009])
% \item Static analysers
% \item \alert<2->{Hardware}
% \end{itemize}
% \pause
\item Verifying hardware with theorem provers:
\begin{itemize}
\item many formalizations of hardware description languages (ACL2 , HOL, PVS)
% ACL2 :DE2
% Hol : Experience with embedding hardware description languages in HOL (1992)
% PVS : Bluespec
\item many models of hardware designs (ACL2, HOL, PVS, Coq)
\begin{itemize}
\item[-] Floating-point operations verified at AMD using ACL2
\item[-] VAMP [2003] (a pipelined micro-processor verified in PVS)
\end{itemize}
\item high-level formalization of the ARM architecture in HOL
\item ...
\end{itemize}
\pause
% This verification does not match what is really done
\item Shift toward \alert{hardware synthesis}:
\begin{itemize}
\item generates low-level code (RTL) from high-level HDLs
\item argue (in)formally that this synthesis is correct
\end{itemize}
\parenthesis{Esterel, Lustre, System-C, Bluespec, \dots}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\frametitle{This project}
\begin{itemize}
\item Investigate verified hardware synthesis in Coq
\begin{center}
\includegraphics[height= 2cm ]{figs/compilation.pdf}
\end{center}
\pause
\item Source language: \alert{\fesi{}} (Featherweight Synthesis)
\begin{itemize}
\item Stripped down and simplified version of \alert{Bluespec}
\item Semantics based on ``guarded atomic actions'' (with a flavour of transactional memory)
\end{itemize}
\pause
\item Target language: RTL
\begin{itemize}
\item Combinational logic and next-state assignments for registers
\item No currents, single-clock, unit delays
\end{itemize}
\end{itemize}
\end{frame}
% \begin{frame}[fragile]
% \frametitle{A few circuits}
% \begin{coq}
% Definition hadd (a b: Var B) : action [] (B $\otimes$ B) :=
% $\quad$do carry <- ret (andb a b);
% $\quad$do sum $\,$<- ret (xorb a b);
% $\quad$ret (carry, sum).
% \end{coq}
% \end{frame}
\begin{frame}
\frametitle{Outline}
\tableofcontents
\end{frame}
\section{Preliminaries}
\begin{frame}
\frametitle{Synthesis from the perspective of verification}
\begin{center}
\only<1>{\includegraphics[width=8cm]{figs/analogy-1.pdf}}
\only<2>{\includegraphics[width=8cm]{figs/analogy-2.pdf}}
\only<3>{\includegraphics[width=8cm]{figs/analogy-3.pdf}}
\only<4>{\includegraphics[width=8cm]{figs/analogy-4.pdf}}
\end{center}
\end{frame}
\begin{frame}[fragile]
\frametitle{A certified compiler in a nutshell}
\newcommand\behaviors[1]{{\mathcal B}\left(#1\right)}
\begin{itemize}
\item Define \alert{deep-embeddings}
\begin{itemize}
\item Define data-structures to represent programs (a prerequisite to write a compiler)
\item Define what is a program's semantics
\end{itemize}
\pause
\item Implement the compiler
\pause
\item Pick a phrasing for semantic preservation:
\begin{displaymath}
\behaviors{P_1} \mathbin{\square} \behaviors{P_2}
\qquad \begin{cases}
\square \in \left\{\subseteq, \supseteq, \equiv \right\} \\
\text{deterministic}~P_2 ? \\
\text{safe}~P_1 ? \\
\end{cases}
\end{displaymath}
\pause
\item Prove semantic preservation for your compiler.
\pause \parenthesis{easier said than done}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{A problem with binders}
\alert{Extra goals:}
\begin{itemize}
\item make it easy to write source programs inside Coq;
\item make it relatively easy to reason about them.
\end{itemize}
\alert{Problem: abstract syntax}
\begin{center}
\vspace{-1em}
\begin{columns}
\column{0.4\linewidth}
\begin{ocaml}
let x = foo in
let y = f x in
let z = g x y in
z + x
\end{ocaml}
\column{0.4\linewidth}
\begin{ocaml}
let foo in
let f #1 in
let g #2 #1 in
#1 + #3
\end{ocaml}
\end{columns}
\end{center}
\pause
\begin{itemize}
\item A complicated problem, many solutions, no clear winner;
\item Here, hijack Coq binders using Parametric Higher-Order
Abstract Syntax (PHOAS)
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{A PHOAS primer}
\begin{itemize}
\item Use Coq bindings to represent the bindings of the object language.
\newcommand\arrow{\ulcorner \to \urcorner}
\begin{coq}
Section t.
Variable var: T -> Type.
Inductive term : T -> Type :=
| Var: forall t, var t -> term t
| Abs: forall $\alpha$ $\beta$, (var $\alpha$ -> term $\beta$) -> term ($\alpha$ $\arrow$ $\beta$)
| App: ...
End t.
Definition Term := forall (var: T -> Type), term var.
Example K $\alpha$ $\beta$ : Term ($\alpha~\arrow{}~\beta{}~\arrow{}~\alpha$):= fun V =>
$\quad$Abs (fun x => Abs (fun y => Var x)).
\end{coq}
\pause
\item An \alert{intrinsic approach} (strongly typed syntax vs. syntax + typing judgement)
\end{itemize}
\end{frame}
% \begin{frame}[fragile]
% \frametitle{A PHOAS primer}
% \framesubtitle{Face-off with Dependent de Bruijn indices}
% \begin{columns}
% \column{0.07 \linewidth}
% \column{0.5 \linewidth}
% \newcommand\env{\Gamma}
% \begin{coq}
% Inductive var : list T -> T -> Type :=
% | 0 : forall $\env$ t , var (t::$\env$) t
% | S : forall $\env$ t u , var $\env$ u -> var (t::$\env$) u.
% Inductive term : list T -> T -> Type :=
% | Var: forall $\env$ t, var $\env$ t -> term $\env$ t
% | Abs: forall $\env$ $\alpha$ $\beta$, term ($\alpha$:: $\env$) $\beta$ ->
% term $\env$ ($\alpha$ $\ulcorner \to \urcorner$ $\beta$)
% | App: ...
% Example K :=
% $\quad$Abs (Abs (Var (S 0))).
% \end{coq}
% \column{0.5 \linewidth}
% \only<2->{\phoasprimer}
% \end{columns}
% \begin{itemize}
% \item<3> Two alternative \alert{intrinsic approaches}
% \begin{itemize}
% \item strongly typed syntax
% \item alternative to syntax + typing judgement
% \end{itemize}
% \end{itemize}
% \end{frame}
\begin{frame}
\frametitle{To sum up}
\begin{itemize}
\item High-level languages have more structure.
\parenthesis{easier for verification.}
\item Certified compilers are semantics preserving.
\parenthesis{transport verification to low-level languages. }
\item Extra difficulty in our case: nested binders.
\parenthesis{solved using PHOAS.}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{A glimpse of the languages and the compiler}
\begin{frame}
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}
\frametitle{The big picture}
\begin{center}
\includegraphics[height= 2cm ]{figs/compilation.pdf}
\end{center}
\end{frame}
\begin{frame}[fragile]
\frametitle{Fe-Si, informally}
\begin{itemize}
\item Based on a \alert{monad}
\item Base constructs: bind and return
\begin{coq}
Definition hadd (a b: Var B) : action [] (B $\otimes$ B) :=
$\quad$do carry <- ret (andb a b);
$\quad$do sum $\,$<- ret (xorb a b);
$\quad$ret (carry, sum).
\end{coq}
\pause
\item Set of memory elements to hold mutable state
\begin{coq}
Definition count n : action [Reg (Int n)] (Int n) :=
$\quad$do x <- !member_0;
$\quad$do _ <- member_0 ::= x + 1;
$\quad$ret x.
\end{coq}
\pause
\item Control-flow constructions
\begin{coq}
Definition count n (tick: Var B) : action [Reg (Int n)] (Int n) :=
$\quad$do x <- !member_0;
$\quad$do _ <- if tick then {member_0 ::= x + 1} else {ret ()};
$\quad$ret x.
\end{coq}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Fe-Si's semantics}
\fesi{} programs:
\begin{itemize}
\item update a set of \alert{memory elements} $\Phi$;
\\
\parenthesis{registers, register files, inputs, \dots}
\item are based on \alert{guarded atomic actions}
\\
\begin{center}
\coqe{do n <- !x + 1; (y ::= 1; assert (n = 0)) orElse (y ::= 2)}
\end{center}
\item are endowed with a (simple) \alert{synchronous semantics}
\\
\begin{center}
\coqe{do n <- !x; x ::= n + 1; do m <- !x; assert (n = m)}
\end{center}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{\fesi{}, formally}
\begin{columns}
\column{0.05\linewidth}
\column{0.7 \linewidth}
\begin{coq}
Variable var: ty -> Type.
Inductive expr: ty -> Type := ...
Inductive action: ty -> Type:=
| Return: forall t, expr t -> action t
| Bind: forall t u, action t -> (var t -> action u) -> action u
(** control-flow **)
| OrElse: forall t, action t -> action t -> action t
| Assert: expr B -> action unit
(** memory operations on registers **)
| Read: forall t, (Reg t) $\in \Phi$ -> action t
| Write: forall t, (Reg t) $\in \Phi$ -> expr t -> action unit
| ...
\end{coq}
\end{columns}
\begin{itemize}
\item Expressions are side-effects free.
\item \vspace{-.5em}
\begin{coq}
Definition Eval $\Phi$ t (a: forall V, action V t): $\denote\Phi$ -> option ($\denote{\tt t}$ * $\denote\Phi$).
\end{coq}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]
\frametitle{RTL, informally}
An RTL circuit is abstracted as:
\begin{itemize}
\item a set of memory elements $\Phi$;
\item a combinational next-state function.
\end{itemize}
\begin{center}
\includegraphics[width=5cm]{figs/rtl.pdf}
\end{center}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]
\frametitle{RTL, formally}
\begin{columns}
\column{0.35 \textwidth}
\includegraphics[width=5cm]{figs/rtl.pdf}
\column{0.55 \textwidth}
\begin{coq}
Variable V: T -> Type.
Inductive $\mathbb T$ (A: Type): Type:=
| Bind: forall arg, expr arg -> (V arg -> $\mathbb T$ A) -> $\mathbb T$ A
| End: A -> $\mathbb T$ A.
Inductive $\mathbb E$: memory -> Type:=
| write: forall t, V t -> V Tbool -> $\mathbb E$ (R t)
| ...
Definition block t:=
$\mathbb T$ (V Tbool * V t * DList.T (option $\circ$ $\mathbb E$) $\Phi$).
\end{coq}
\end{columns}
\begin{itemize}
\item Simple synchronous semantics
\begin{coq}
Definition Eval $\Phi$ t (a: forall V, block V t): $\denote\Phi$ -> option ($\denote{\tt t}$ * $\denote\Phi$).
\end{coq}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\defverbatim[colored]\firstpass{
\begin{ocaml}
x0 <- ! r1;
x1 <- x0 <> 0;
x2 <- !r2;
x3 <- x0 - 1;
x4 <- x2 + 1;
x5 <- !r2;
x6 <- x6;
begin
if x1 then (r1 := x3; r2 := x4);
if !x1 then (r1 := x6)
end
\end{ocaml}}
\defverbatim[colored]\secondpass{
\begin{ocaml}
x0 <- ! r1;
x1 <- x0 <> 0;
x2 <- ! r2;
x3 <- x0 - 1;
x4 <- x2 + 1;
x5 <- ! r2;
x6 <- x5;
x8 <- x1;
x9 <- x1;
x10 <- not x1;
x11 <- x8 || x10;
x12 <- x8 ? x3 : x6;
begin
r1 := x12 when x11;
r2 := x4 when x9
end
\end{ocaml}}
\defverbatim[colored]\thirdpass{
\begin{ocaml}
x0 <- !r1
x1 <- 0;
x2 <- x0 = x1;
x3 <- not x2;
x4 <- !r2;
x5 <- 1;
x6 <- x0 - x5;
x7 <- x4 + x5;
x8 <- !r2;
x9 <- not x3;
x10 <- x3 || x9
x11 <- x3 ? x6 : x8
begin
r1 := x11 when x10;
r2 := x8 when x3
end
\end{ocaml}}
\defverbatim[colored]\finalpass{
\begin{ocaml}
x0 <- !r1
x1 <- 0;
x2 <- x0 = x1;
x3 <- not x2;
x4 <- !r2;
x5 <- 1;
x6 <- x0 - x5;
x7 <- x4 + x5;
x8 <- x3 ? x6 : x4
begin
r1 := x8 when true;
r2 := x4 when x3
end
\end{ocaml}}
\begin{frame}[fragile]
\frametitle{Compiling \fesi{} to RTL}
Running example:
\begin{coq}
do x <- ! r1;
if (x <> 0) then {do y <- !r2; r1 ::= x - 1; r2 ::= y + 1} else { y <- !r2; r1 ::= y}
\end{coq}
\pause
\begin{columns}
\column{0.1 \textwidth}
\column{0.5 \textwidth}
\begin{enumerate}
\item<+-> Pull out all bindings (that is, ANF)
\item<+-> Push down the nested conditions
\item<+-> Perform CSE (in 3-address code)
\item<+-> Boolean simplification
\end{enumerate}
\column{0.5 \textwidth}
\only<2>{\firstpass}
\only<3>{\secondpass}
\only<4>{\thirdpass}
\only<5>{\finalpass}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{A closer look of the first-pass}
\begin{itemize}
\item Transform control-flow into data-flow.
\end{itemize}
\only<1>{
\begin{center}
\includegraphics[width=5cm]{figs/compil-0.pdf}
\end{center}
}
\only<2-3>{
\begin{itemize}
\item Compiling \coqe{e orElse f}
\end{itemize}
\begin{center}
\includegraphics[width=7cm]{figs/compil-1.pdf}
\end{center}
}
\only<3>{
\begin{itemize}
\item The $\Phi$ blocks are trees of effects on memory elements,
that must be flattened.
\end{itemize}
}
\end{frame}
\begin{frame}[fragile]
\frametitle{Sum-up: compiling \fesi{} to RTL}
\begin{enumerate}
\item Transform control-flow into data-flow programs (in A-normal form);
\item Compute the update and commit values for each memory
element;
\item Perform syntactic common-sub-expression elimination;
\item Perform Boolean expressions reduction using BDDs;
\item Use an OCaml backend to generate Verilog code.
\end{enumerate}
\pause
\begin{itemize}
\item Steps [1-4] are proved correct in Coq.
\parenthesis{Not a single lemma about substitutions!}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Examples}
\begin{frame}
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}
\frametitle{Circuit generators}
\begin{center}
\includegraphics[width=5cm]{figs/examples.pdf}
\end{center}
\begin{enumerate}
\item Coq's reduction (internal)
\item Fe-Si to RTL (proved)
\item RTL to Verilog (OCaml backend, trusted)
\end{enumerate}
\end{frame}
\begin{frame}[fragile]
\frametitle{First example: A recursive von Neumann adder}
\begin{center}
\only<1>{\includegraphics[height=3cm]{figs/adder-1.pdf}}
\only<2>{\includegraphics[height=6cm]{figs/adder-2.pdf}}
\end{center}
\begin{displaymath}
s = a + b \quad t = a + b + 1
\end{displaymath}
\end{frame}
\begin{frame}[fragile]
\frametitle{First example: A recursive von Neumann adder}
\framesubtitle{Meta-programming for free}
\begin{columns}
\column{0.1\linewidth}
\column{0.5\linewidth}
\begin{scoq}
Variable V : T -> Type.
Fixpoint add $\Phi$ n (a : V (Tint [2^ n])) (b : V (Tint [2^ n])) :=
match n with
| 0 => ret ( (a = 1) || (b = 1) ;
(a = 1) && (b = 1); a + b; a + b + 1)
$$
\end{scoq}
\column{0.5\linewidth}
\begin{scoq}
| S n =>
do (aL,aH) <- (low a, high a);
do (bL,bH) <- (low b, high b);
do (pL, gL, sL, tL) <- add n aL bL;
do (pH, gH, sH, tH) <- add n aH bH;
do sH' <- (gL ? tH : sH);
do tH' <- (pL ? tH : sH);
do pH' <- (gH || (pH && gH));
do gH' <- (gH || (pH && gL));
ret (pH'; gH'; sL $\otimes$ sH' ; tL $\otimes$ tH' )
end.
\end{scoq}
\end{columns}
\parenthesis{builds a 4-tuple: carry-propagate, carry-generate, sum with
  carry, sum without carry}
\pause
\begin{itemize}
\item Proof by induction on $n$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Second example: a bitonic sorter core}
\begin{center}
\includegraphics[width=\textwidth]{figs/bitonic1.png}
\begin{itemize}
\item Bitonic sequence: $x_0 \le \dots \le x_k \ge \dots \ge x_{n-1}$ for
some $k$, or a circular shift.
\item Red: bitonic $\to$ $l_1$ bitonic, $l_2$ bitonic (and $l_1
\le l_2$)
\item Blue (resp. green): bitonic $\to$ sorted (resp. sorted in
reverse order)
\end{itemize}
\end{center}
\end{frame}
\begin{frame}[fragile]
\frametitle{Second example: a bitonic sorter code}
\newcommand\rebind{\leftsquigarrow}
\begin{columns}
\column{0.5\linewidth}
\begin{coq}
Fixpoint merge {n}: T n -> C n :=
match n with
| 0 => fun t => ret (leaf t)
| S k => fun t =>
do a,b $\rebind$ min_max_swap (left t) (right t);
do a <- merge a;
do b <- merge b;
ret (mk_N a b)
end.
$ $\end{coq}
\column{0.5\linewidth}
\begin{coq}
Fixpoint sort {n} : T n -> C n :=
match n with
| 0 => fun t => ret (leaf t)
| S n => fun t =>
do l <- sort (left t);
do r $\rebind$ sort (right t);
do r <- reverse r;
do x $\rebind$ ret (mk_N l r);
merge x
end\end{coq}
\end{columns}
\begin{itemize}
\item \coqe{merge} builds the blue/green boxes
\item \coqe{min_max_swap} builds the red boxes
\item Notations
\begin{coq}
Notation T n := tree (expr Var A) n.
Notation C n := action nil Var (domain n).
\end{coq}
\end{itemize}
\end{frame}
% \begin{frame}[fragile]
% \frametitle{Second example: a bitonic sorter core}
% \framesubtitle{Correctness}
% \begin{twolistings}
% \begin{scoq}
% (* Lists of length $2^n$ represented as trees *)
% Inductive tree (A: Type): nat -> Type :=
% | L : forall x : A, tree A 0
% | N : forall n (l r : tree A n), tree A (S n).
% $ $
% Definition leaf {A n} (t: tree A 0) : A := ...
% Definition left {A n} (t: tree A (S n)) : tree A n := ...
% Definition right {A n} (t: tree A (S n)) : tree A n := ...
% Fixpoint reverse {A} n (t : tree A n) :=
% match t with
% | L x => L x
% | N n l r =>
% let r := (reverse n r) in
% let l := (reverse n l) in
% N n r l
% end.
% $ $
% Variable cmp: A -> A -> A * A.
% Fixpoint min_max_swap {A} n :
% forall (l r : tree A n), tree A n * tree A n :=
% match n with
% | 0 => fun l r =>
% let (x,y) := cmp (leaf l) (leaf r) in (L x, L y)
% | S p => fun l r =>
% let (a,b) := min_max_swap p (left l) (left r) in
% let (c,d) := min_max_swap p (right l) (right r) in
% (N p a c, N p b d)
% end.
% ...
% Fixpoint sort n : tree A n -> tree A n := ...
% \end{scoq}
% & $\quad$
% \begin{scoq}
% Variable A : ty.
% Fixpoint domain n := match n with
% | 0 => A
% | S n => (domain n) $\otimes$ (domain n)
% end.
% Notation T n := tree (expr Var A) n.
% Notation C n := action nil Var (domain n).
% Fixpoint reverse n (t : T n) : C n :=
% match t with
% | L x => ret x
% | N n l r =>
% do r <- reverse n r;
% do l <- reverse n l;
% ret [tuple r, l]
% end.
% Notation mk_N x y := [tuple x,y].
% Variable cmp : Var A -> Var A
% $\qquad\qquad\qquad$ -> action nil Var (A $\otimes$ A).
% Fixpoint min_max_swap n :
% forall (l r : T n), C (S n) :=
% match n with
% | 0 => fun l r =>
% cmp (leaf l) (leaf r)
% | S p => fun l r =>
% do a,b <- min_max_swap p (left l) (left r);
% do c,d <- min_max_swap p (right l) (right r);
% ret ([tuple mk_N a c, mk_N b d])
% end.
% ...
% Fixpoint sort n : T n -> C n := ...
% \end{scoq}
% \end{twolistings}
% \end{frame}
\begin{frame}
\frametitle{Second example: a bitonic sorter core}
\framesubtitle{Formal proof}
\begin{itemize}
\item Stepping stone: implement a purely functional version of the
  sorting algorithm;
\item Prove that the sorter core and the stepping-stone function have
  the same semantics;
\item Prove that the stepping-stone function is correct. \\
\end{itemize}
\pause
\only<2>{
\begin{theorem}[0-1 principle]
To prove that a (parametric) sorting network is correct, it
suffices to prove that it sorts all sequences of 0 and 1.
\end{theorem}
}
\pause
\begin{theorem}
Let $I$ be a sequence of length $2^n$ of integers of size $m$. The
circuit always produces an output sequence that is a sorted permutation of $I$.
\end{theorem}
\end{frame}
\begin{frame}
\frametitle{Third example: a (family) of stack machines}
\begin{small}
\begin{columns}
\column{0.2\linewidth}
\begin{tabular}{rcll}
i & ::= & \texttt{const $n$ }\\
& $|$ & \texttt{var $x$ }\\
& $|$ & \texttt{setvar $x$ }\\
& $|$ & \texttt{add }\\
& $|$ & \texttt{sub }\\
& $|$ & \texttt{bfwd $\delta$ }\\
& $|$ & \texttt{bbwd $\delta$ }\\
& $|$ & \texttt{bcond $c$ $\delta$ }\\
\\
& $|$ & \texttt{halt }\\
\end{tabular}
\column{0.7\linewidth}
\begin{tabular}{ll}
$\vdash pc,\sigma,s \to pc+1, n :: \sigma,s$& \\
$\vdash pc,\sigma,s \to pc+1, s(x) :: \sigma,s$ & \\
$\vdash pc,v::\sigma,s \to pc+1, \sigma,s[x \leftarrow v]$ & \\
$\vdash pc,n_2::n_1::\sigma,s \to pc+1, (n_1+n_2)::\sigma,s$&\\
$\vdash pc,n_2::n_1::\sigma,s \to pc+1, (n_1-n_2)::\sigma,s$&\\
$\vdash pc,\sigma,s \to pc+1+\delta, \sigma,s$ & \\
$\vdash pc,\sigma,s \to pc+1-\delta, \sigma,s$ & \\
$\vdash pc,n_2::n_1::\sigma,s \to pc+1+\delta, \sigma,s$ & \text{if $c~n_1~n_2$} \\
$\vdash pc,n_2::n_1::\sigma,s \to pc+1, \sigma,s$ & \text{if $\neg (c~n_1~n_2)$} \\
\texttt{no reduction}
\end{tabular}
\end{columns}
\end{small}
\begin{itemize}
\item Implementation parameterized by the size of the values, the size of the
stack, \dots
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{Stack machine excerpt}
\begin{columns}
\column{0.4\linewidth}
\begin{coq}
Definition pop :=
do sp <- ! SP;
do v <- read STACK [: sp - 1];
do _ <- SP ::= sp - 1;
ret v.
\end{coq}
\column{0.4\linewidth}
\begin{coq}
Definition Isetvar pc x :=
do v <- pop;
do _ <- write REGS [: x <- v];
PC ::= pc + 1.
$ $
\end{coq}
\end{columns}
\begin{displaymath}
\vdash pc,v::\sigma,s \to pc+1, \sigma,s[x \leftarrow v]
\end{displaymath}
\begin{itemize}
\item Combine these pieces of code using a \coqe{case} construct
\item Prove the Fe-Si implementation correct w.r.t. the previous
semantics
\item Fun fact: can run binary blobs generated by e.g., an IMP
compiler
\parenthesis{ (a highly stylised way to compute Fibonacci)
} \end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]
\frametitle{Meta-programming at work}
\begin{itemize}
\item Recursive circuits.
\item \alert{Combinators and schedulers} of atomic actions.
\item In these cases, using Coq as a meta-language makes it possible
to prove things.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
% Rant about Coq
\begin{frame}
\frametitle{Outline}
\tableofcontents [currentsection]
\end{frame}
\begin{frame}[fragile]
\frametitle{Some remarks}
\begin{itemize}
\item Stepping back
\begin{itemize}
\item Bluespec started as an HDL deeply embedded in Haskell
\item Lava [1998] is another HDL deeply embedded in Haskell
\item \fesi{} is ``just'' another HDL, deeply embedded in \alert{Coq}
\pause
\begin{itemize}
\item semantics (i.e., interpreter), compiler and programs are \alert{integrated seamlessly}
\item dependent types capture some interesting properties in
hardware
\item use of extraction to dump compiled programs
\end{itemize}
\end{itemize}
\pause
\item Take-away message
\begin{itemize}
\item Coq can be used as an embedding language for DSLs!
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Ongoing work}
\begin{itemize}
\item Recent work
\begin{itemize}
\item Three BDD libraries (with J.-H. Jourdan and D. Monniaux, ITP
2013): reflections on the implementation of hash-consing.
\item A better inversion tactic (with P. Boutillier)
\end{itemize}
\item Future work
\begin{itemize}
\item Improve on the language (FIFOs, references).
\item Make the compiler more realistic (automatic scheduling of
atomic actions).
\item Embed a DSL for floating-point cores?
\item More control!
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Thank you for your attention}
\begin{center}
\includegraphics[height= 2cm ]{figs/compilation.pdf}
\vspace{1cm}
If you have any questions ... \\
\end{center}
\end{frame}
\end{document}
\documentclass[11pt]{article}
\usepackage{listings}
\usepackage{tikz}
\usepackage{enumerate}
\usepackage{url}
\usepackage{amssymb}
\usetikzlibrary{arrows,automata,shapes}
\lstset{basicstyle=\ttfamily \scriptsize,
basicstyle=\ttfamily,
columns=fullflexible,
breaklines=true,
numbers=left,
numberstyle=\scriptsize,
stepnumber=1,
mathescape=false,
tabsize=2,
showstringspaces=false
}
\newtheorem{defn}{Definition}
\newtheorem{crit}{Criterion}
\newcommand{\handout}[5]{
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in { {\bf Software Testing, Quality Assurance and Maintenance } \hfill #2 }
\vspace{4mm}
\hbox to 5.78in { {\Large \hfill #5 \hfill} }
\vspace{2mm}
\hbox to 5.78in { {\em #3 \hfill #4} }
}
}
\end{center}
\vspace*{4mm}
}
\newcommand{\lecture}[4]{\handout{#1}{#2}{#3}{#4}{Lecture #1}}
% 1-inch margins, from fullpage.sty by H.Partl, Version 2, Dec. 15, 1988.
\topmargin 0pt
\advance \topmargin by -\headheight
\advance \topmargin by -\headsep
\textheight 8.9in
\oddsidemargin 0pt
\evensidemargin \oddsidemargin
\marginparwidth 0.5in
\textwidth 6.5in
\parindent 0in
\parskip 1.5ex
%\renewcommand{\baselinestretch}{1.25}
% http://gurmeet.net/2008/09/20/latex-tips-n-tricks-for-conference-papers/
\newcommand{\squishlist}{
\begin{list}{$\bullet$}
{ \setlength{\itemsep}{0pt}
\setlength{\parsep}{3pt}
\setlength{\topsep}{3pt}
\setlength{\partopsep}{0pt}
\setlength{\leftmargin}{1.5em}
\setlength{\labelwidth}{1em}
\setlength{\labelsep}{0.5em} } }
\newcommand{\squishlisttwo}{
\begin{list}{$\bullet$}
{ \setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\leftmargin}{2em}
\setlength{\labelwidth}{1.5em}
\setlength{\labelsep}{0.5em} } }
\newcommand{\squishend}{
\end{list} }
\lstdefinelanguage{JavaScript}{
keywords={typeof, new, true, false, catch, function, return, null, catch, switch, var, if, in, while, do, else, case, break},
keywordstyle=\color{blue}\bfseries,
ndkeywords={class, export, boolean, throw, implements, import, this},
ndkeywordstyle=\color{darkgray}\bfseries,
identifierstyle=\color{black},
sensitive=false,
comment=[l]{//},
morecomment=[s]{/*}{*/},
commentstyle=\color{purple}\ttfamily,
stringstyle=\color{red}\ttfamily,
morestring=[b]',
morestring=[b]''
}
\begin{document}
\lecture{14 --- February 4, 2015}{Winter 2015}{Patrick Lam}{version 1}
Recall that we've been discussing beliefs. Here are a couple of beliefs
that are worthwhile to check. (examples courtesy Dawson Engler.)
\paragraph{Redundancy Checking.} 1) Code ought to do something. So,
when you have code that doesn't do anything, that's suspicious. Look
for identity operations, e.g.
\[ {\tt x = x}, ~ {\tt 1 * y}, ~ {\tt x \& x}, ~ {\tt x | x}. \]
Or, a longer example:
\begin{lstlisting}[language=C]
/* 2.4.5-ac8/net/appletalk/aarp.c */
da.s_node = sa.s_node;
da.s_net = da.s_net;
\end{lstlisting}
Also, look for unread writes:
\begin{lstlisting}[language=C]
for (entry=priv->lec_arp_tables[i];
entry != NULL; entry=next) {
next = entry->next; // never read!
...
}
\end{lstlisting}
Redundancy suggests conceptual confusion.
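As a toy illustration (mine, not from the original lecture, and much cruder than a real checker), even a textual scan can flag self-assignments like the {\tt aarp.c} example:
\begin{lstlisting}[language=Python]
import re, sys

# Toy redundancy check: flag statements that assign an lvalue to itself,
# e.g. "da.s_net = da.s_net;". A real tool would work on the AST, not text.
SELF_ASSIGN = re.compile(r'^\s*([A-Za-z_][\w.\[\]>-]*)\s*=\s*\1\s*;',
                         re.MULTILINE)

source = open(sys.argv[1]).read()   # e.g. aarp.c
for match in SELF_ASSIGN.finditer(source):
    print("suspicious self-assignment:", match.group(0).strip())
\end{lstlisting}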
%\section*{Inferring beliefs}
%If we are to verify beliefs about the code (either MAY or MUST), we first need to get a set of beliefs somehow.
So far, we've talked about MUST-beliefs; violations are clearly wrong (in some sense).
Let's examine MAY beliefs next. For such beliefs, we need more evidence to convict the program.
\paragraph{Process for verifying MAY beliefs.} We proceed as follows:
\begin{enumerate}
\item Record every successful MAY-belief check as ``check''.
\item Record every unsuccessful belief check as ``error''.
\item Rank errors based on ``check'' : ``error'' ratio.
\end{enumerate}
Most likely errors occur when ``check'' is large, ``error'' small.
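A sketch of this ranking step (my own illustration, not Engler's actual tool), assuming the check/error counts have already been collected:
\begin{lstlisting}[language=Python]
# Rank candidate beliefs: many "check"s and few "error"s => likely a real bug.
def rank(candidates):
    # candidates maps a belief to its (checks, errors) counts
    return sorted(candidates.items(),
                  key=lambda item: item[1][0] / max(item[1][1], 1),
                  reverse=True)

# hypothetical counts, just to show the ordering
counts = {"bar() frees its argument": (3, 1),
          "foo() frees its argument": (1, 3)}
for belief, (checks, errors) in rank(counts):
    print(belief, "-", checks, "checks,", errors, "errors")
\end{lstlisting}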
\paragraph{Example.} One example of a belief is use-after-free:
\begin{lstlisting}[language=C]
free(p);
print(*p);
\end{lstlisting}
That particular case is a MUST-belief.
However, other resources are freed by custom (undocumented) free functions.
It's hard to get a list of what is a free function and what isn't.
So, let's derive them behaviourally.
\paragraph{Inferring beliefs: finding custom free functions.}
The key idea is:
if pointer {\tt p} is not used after calling {\tt foo(p)},
then derive a MAY belief that {\tt foo(p)} frees {\tt p}.
OK, so which functions are free functions? Well, just assume all functions free all arguments:
\begin{itemize}
\item emit ``check'' at every call site;
\item emit ``error'' at every use.
\end{itemize}
(in reality, filter functions with suggestive names).
Putting that into practice,
we might observe:
\begin{center}
\begin{tabular}{l|l|l|l|l|l}
\begin{minipage}{5em}
foo(p)\\
*p = x;
\end{minipage} &
\begin{minipage}{5em}
foo(p)\\
*p = x;
\end{minipage} &
\begin{minipage}{5em}
foo(p)\\
*p = x;
\end{minipage} &
\begin{minipage}{5em}
bar(p)\\
p = 0;
\end{minipage} &
\begin{minipage}{5em}
bar(p)\\
p=0;
\end{minipage} &
\begin{minipage}{5em}
bar(p)\\
*p = x;
\end{minipage}
\end{tabular}
\end{center}
We would then rank {\tt bar}'s error first.
Plausible results might be: 23 free errors, 11 false positives.
\paragraph{Inferring beliefs: finding routines that may return {\tt NULL}.}
The situation: we want to know which routines may return {\tt NULL}.
Can we use static analysis to find out?
\squishlist
\item sadly, this is difficult to know statically (``{\tt return p->next;}''?) and,
\item we get false positives: some functions return {\tt NULL} under special cases only.
\squishend
Instead, let's observe what the programmer does.
Again, rank errors based on checks vs non-checks.
As a first approximation, assume {\bf all} functions can return {\tt NULL}.
\squishlist
\item if pointer checked before use: emit ``check'';
\item if pointer used before check: emit ``error''.
\squishend
This time, we might observe:
\begin{center}
\begin{tabular}{l|l|l|l}
\begin{minipage}{6em}
p = bar(...);\\
*p = x;
\end{minipage} &
\begin{minipage}{7em}
p = bar(...);\\
if (!p) return;\\
*p = x;
\end{minipage} &\begin{minipage}{7em}
p = bar(...);\\
if (!p) return;\\
*p = x;
\end{minipage} &\begin{minipage}{7em}
p = bar(...);\\
if (!p) return;\\
*p = x;
\end{minipage}
\end{tabular}
\end{center}
Again, sort errors based on the ``check'':``error'' ratio.
Plausible results: 152 free errors, 16 false positives.
\newpage
\subsection*{General statistical technique}
When we write ``a(); \ldots b();'', we mean a MAY-belief that a() is followed by b().
We don't actually know that this is a valid belief. It's a hypothesis, and we'll try it out.
Algorithm:
\vspace*{-1em}
\squishlist
\item assume every {\tt a}--{\tt b} is a valid pair;
\item emit ``check'' for each path with ``a()'' and then ``b()'';
\item emit ``error'' for each path with ``a()'' and no ``b()''.
\squishend
(actually, prefilter functions that look paired).
Consider:
{\small
\begin{tabular}{l|l|l}
\begin{minipage}{10em}
foo(p, \ldots);\\
bar(p, \ldots); // check
\end{minipage} &
\begin{minipage}{10em}
foo(p, \ldots);\\
bar(p, \ldots); // check
\end{minipage} &
\begin{minipage}{12em}
foo(p, \ldots);\\
// error: foo, no bar!
\end{minipage}
\end{tabular}
}
This applies to the course project as well.
\begin{tabular}{ll}
\begin{minipage}{10em}
\scriptsize
\begin{lstlisting}
void scope1() {
A(); B(); C(); D();
}
void scope2() {
A(); C(); D();
}
void scope3() {
A(); B();
}
void scope4() {
B(); D(); scope1();
}
void scope5() {
B(); D(); A();
}
void scope6() {
B(); D();
}
\end{lstlisting}
\end{minipage} &\begin{minipage}{30em}
``A() and B() must be paired'':\\
either A() then B() or B() then A().\\[2em]
\begin{tabbing}
{\bf Support} = \= \# times a pair of functions appears together.\\
\> \hspace*{2em} support(\{A,B\})=3
\end{tabbing}
~\\[1em]
{\bf Confidence(\{A,B\},\{A\}}) = \\ \hspace*{2em} support(\{A,B\})/support(\{A\}) = 3/4
\end{minipage}
\end{tabular}
Sample output for support threshold~3, confidence threshold 65\% (intra-procedural analysis):
{\small
\squishlist
\item bug:A in scope2, pair: (A B), support:~3, confidence: 75.00\%
\item bug:A in scope3, pair: (A D), support:~3, confidence: 75.00\%
\item bug:B in scope3, pair: (B D), support:~4, confidence: 80.00\%
\item bug:D in scope2, pair: (B D), support:~4, confidence: 80.00\%
\squishend
}
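The computation is small enough to sketch directly (an illustration of mine, not the course's reference implementation); with support threshold 3 and confidence threshold 65\% it reproduces the four bugs above:
\begin{lstlisting}[language=Python]
from itertools import combinations

# Calls made in each scope, taken from the listing above.
scopes = {
    "scope1": {"A", "B", "C", "D"},
    "scope2": {"A", "C", "D"},
    "scope3": {"A", "B"},
    "scope4": {"B", "D", "scope1"},
    "scope5": {"B", "D", "A"},
    "scope6": {"B", "D"},
}

def support(fns):
    # number of scopes in which all functions in fns appear together
    return sum(fns <= calls for calls in scopes.values())

SUPPORT_MIN, CONFIDENCE_MIN = 3, 0.65
functions = set().union(*scopes.values())
for a, b in combinations(sorted(functions), 2):
    pair_support = support({a, b})
    if pair_support < SUPPORT_MIN:
        continue
    for x, y in ((a, b), (b, a)):
        confidence = pair_support / support({x})
        if confidence < CONFIDENCE_MIN:
            continue
        # x strongly implies y: flag every scope where x occurs without y
        for name, calls in scopes.items():
            if x in calls and y not in calls:
                print("bug:%s in %s, pair: (%s %s), support: %d, "
                      "confidence: %.2f%%" % (x, name, a, b,
                                              pair_support, 100 * confidence))
\end{lstlisting}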
The point is to find examples like the one from {\tt cmpci.c}
where there's a {\tt lock\_kernel()} call, but, on an exceptional path, no
{\tt unlock\_kernel()} call.
\vspace*{-1em}
\paragraph{Summary: Belief Analysis.}
We don't know what the right spec is.
So, look for contradictions.
\squishlist
\item MUST-beliefs: contradictions = errors!
\item MAY-beliefs: pretend they're MUST, rank by confidence.
\squishend
(A key assumption behind this belief analysis technique: most of the code is correct.)
\paragraph{Further references.}
Dawson~R. Engler, David Yu Chen, Seth Hallem, Andy Chou and Benjamin Chelf.
``Bugs as Deviant Behaviors: A general approach to inferring errors in systems code''.
In SOSP '01.
Dawson~R. Engler, Benjamin Chelf, Andy Chou, and Seth Hallem.
``Checking system rules using system-specific, programmer-written
compiler extensions''.
In OSDI '00 (best paper).
\url{www.stanford.edu/~engler/mc-osdi.pdf}
Junfeng Yang, Can Sar and Dawson Engler.
``eXplode: a Lightweight, General system for Finding Serious Storage System Errors''.
In OSDI'06.
\url{www.stanford.edu/~engler/explode-osdi06.pdf}
\section*{Using Linters}
We will also talk about linters in this lecture, based on Jamie Wong's blog post \url{jamie-wong.com/2015/02/02/linters-as-invariants/}.
\paragraph{First there was C.}
In statically-typed languages, like C,
\begin{lstlisting}[language=C]
#include <stdio.h>
int main() {
printf("%d\n", num);
return 0;
}
\end{lstlisting}
the compiler saves you from yourself.
The guaranteed invariant:
\begin{center}
``if code compiles, all symbols resolve.''
\end{center}
\paragraph{Less-nice languages.}
OK, so you try to run that in JavaScript and it crashes right away.
Invariant?
\begin{center}
``if code runs, all symbols resolve?''
\end{center}
But what about this:
\begin{lstlisting}[language=JavaScript]
function main(x) {
if (x) {
console.log("Yay");
} else {
console.log(num);
}
}
main(true);
\end{lstlisting}
Nope! The above invariant doesn't work.
OK, what about this invariant:
\begin{center}
``if code runs without crashing, all symbols referenced in the code path executed resolve?''
\end{center}
Nope!
\begin{lstlisting}[language=JavaScript]
function main() {
try {
console.log(num);
} catch (err) {
console.log("nothing to see here");
}
}
main();
\end{lstlisting}
So, when you're working in JavaScript and maintaining old code, you always have to
deduce:
\squishlist
\item is this variable defined?
\item is this variable always defined?
\item do I need to load a script to define that variable?
\squishend
We have computers. They're powerful. Why is this the developer's problem?!
\paragraph{Solution: Linters.}
\begin{lstlisting}[language=JavaScript]
function main(x) {
if (x) {
console.log("Yay");
} else {
console.log(num);
}
}
main(true);
\end{lstlisting}
Now:
\begin{verbatim}
$ nodejs /usr/local/lib/node_modules/jshint/bin/jshint --config jshintrc foo.js
foo.js: line 5, col 17, 'num' is not defined.
1 error
\end{verbatim}
\vspace*{-1em}
\paragraph{Invariant:}~\\
\begin{center}
``If code passes JSHint, all top-level symbols resolve.''
\end{center}
\paragraph{Strengthening the Invariant.} Can we do better? How about adding a pre-commit hook?
\begin{center}
``If code is checked-in and commit hook ran,\\ all top-level symbols resolve.''
\end{center}
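One concrete way to wire this up (a sketch of mine, not from the original post; it assumes \texttt{jshint} is installed and on the \texttt{PATH}) is an executable \texttt{.git/hooks/pre-commit} script such as:
\begin{lstlisting}[language=Python]
#!/usr/bin/env python3
# Refuse the commit if JSHint reports errors on any staged .js file.
import subprocess, sys

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True).stdout.split()
js_files = [f for f in staged if f.endswith(".js")]

if js_files and subprocess.run(["jshint", *js_files]).returncode != 0:
    sys.exit("JSHint failed: commit blocked.")
\end{lstlisting}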
Of course, sometimes the commit hook didn't run. Better yet:
\squishlist
\item Block deploys on test failures.
\squishend
\paragraph{Better invariant.}~\\[1em]
\begin{center}
``If code is deployed,\\ all top-level symbols resolve.''
\end{center}
\paragraph{Even better yet.}
It is hard to tell whether code is deployed or not.
Use git feature branches, merge when deployed.
\begin{center}
``If code is in master,\\ all top-level symbols resolve.''
\end{center}
\end{document}
\subsection{Hash functions}
Hash functions take an input and return a fixed-length output, $h = \mathrm{hash}(m)$.
\subsubsection{Data integrity checks}
The hash needs to be very different for small changes to the input, so that, for example, a typo produces a different hash and corrupted data is noticed.
\subsubsection{Checksums}
If two files are the same, then their hashes are the same.
\subsubsection{Introduction}
We want the following properties for a hash function:
\begin{itemize}
\item Deterministic: the same input always produces the same hash.
\item Quick to compute.
\item The input cannot be recovered from the hash, except by brute-forcing candidate inputs.
\item Small changes to a document should cause large changes to the hash, such that the two hashes appear uncorrelated.
\item It is practically infeasible to find multiple documents with the same hash.
\end{itemize}
Hash functions can be used to verify files and to check passwords.
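As a quick illustration (a Python sketch, not part of the original notes), SHA-256 shows both the determinism and the ``small change, very different hash'' behaviour described above:
\begin{verbatim}
import hashlib

d1 = hashlib.sha256(b"pay 100 to Alice").hexdigest()
d2 = hashlib.sha256(b"pay 900 to Alice").hexdigest()  # one character changed

print(d1)  # the same input always gives the same digest
print(d2)  # a completely different-looking digest
print(d1 == hashlib.sha256(b"pay 100 to Alice").hexdigest())  # True
\end{verbatim}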
So the possible vulnerabilities (and the corresponding resistance properties) are:
\begin{itemize}
\item Given a hash, find a message with that hash (pre-image resistance).
\item Given an input, find another input with the same hash (second pre-image resistance).
\item Find any two inputs with the same hash (collision resistance).
\end{itemize}
We want to prevent both accidental and deliberate changes to a file; these vulnerabilities matter more for the latter.
\documentclass[titlepage]{book}
\usepackage[]{graphicx}\usepackage[]{color}
% maxwidth is the original width if it is less than linewidth
% otherwise use linewidth (to make sure the graphics do not exceed the margin)
\makeatletter
\def\maxwidth{ %
\ifdim\Gin@nat@width>\linewidth
\linewidth
\else
\Gin@nat@width
\fi
}
\makeatother
\definecolor{fgcolor}{rgb}{0.345, 0.345, 0.345}
\newcommand{\hlnum}[1]{\textcolor[rgb]{0.686,0.059,0.569}{#1}}%
\newcommand{\hlstr}[1]{\textcolor[rgb]{0.192,0.494,0.8}{#1}}%
\newcommand{\hlcom}[1]{\textcolor[rgb]{0.678,0.584,0.686}{\textit{#1}}}%
\newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}%
\newcommand{\hlstd}[1]{\textcolor[rgb]{0.345,0.345,0.345}{#1}}%
\newcommand{\hlkwa}[1]{\textcolor[rgb]{0.161,0.373,0.58}{\textbf{#1}}}%
\newcommand{\hlkwb}[1]{\textcolor[rgb]{0.69,0.353,0.396}{#1}}%
\newcommand{\hlkwc}[1]{\textcolor[rgb]{0.333,0.667,0.333}{#1}}%
\newcommand{\hlkwd}[1]{\textcolor[rgb]{0.737,0.353,0.396}{\textbf{#1}}}%
\let\hlipl\hlkwb
\usepackage{framed}
\makeatletter
\newenvironment{kframe}{%
\def\at@end@of@kframe{}%
\ifinner\ifhmode%
\def\at@end@of@kframe{\end{minipage}}%
\begin{minipage}{\columnwidth}%
\fi\fi%
\def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep
\colorbox{shadecolor}{##1}\hskip-\fboxsep
% There is no \\@totalrightmargin, so:
\hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}%
\MakeFramed {\advance\hsize-\width
\@totalleftmargin\z@ \linewidth\hsize
\@setminipage}}%
{\par\unskip\endMakeFramed%
\at@end@of@kframe}
\makeatother
\definecolor{shadecolor}{rgb}{.97, .97, .97}
\definecolor{messagecolor}{rgb}{0, 0, 0}
\definecolor{warningcolor}{rgb}{1, 0, 1}
\definecolor{errorcolor}{rgb}{1, 0, 0}
\newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX
\usepackage{alltt}
\newcommand{\SweaveOpts}[1]{} % do not interfere with LaTeX
\newcommand{\SweaveInput}[1]{} % because they are not real TeX commands
\newcommand{\Sexpr}[1]{} % will only be parsed by R
\usepackage{bwstyle}
\title{\huge \textbf{ R- a tool for statistical analysis }\vspace{+1cm} }
\subtitle{A modern introduction using tidyverse}
\author{{\LARGE \emph{Brian Williams}}\vspace{+1cm} \\
Epidemiology and Global Health Unit,\\
Department of Public Health and Medicine,\\
Umeå University \vspace{+8cm}}
\date{\today}
\begin{document}
%Template for children
% Change chunk name to chapter title
% ALl chunk names to be preceded by Tn or Ln (where n is chapter or tutorial number)
\chapter{Preface}
\section{Original Preface by Brian Williams}
\author{Brian Williams $<$\href{mailto:[email protected]}%
{[email protected]}$>$}
This set of notes is intended as an introduction to the use of R for Public Health Students. These notes are themselves an example of what can be achieved using one of the R User interfaces (RStudio) making use of the R package \texttt{knitr}.
This is \textsl{not} a course on statistics - the expectation is that students will already have undertaken statistics courses (or be about to do so). The emphasis here is on using a consistent subset of tools for preparing data for analysis and taking a preliminary look at it using tabulation of aggregates and visualization. The course will largely work with Hadley Wickham's 'tidyverse' packages.
The course is limited to 13 Lectures of around one to two hours duration and 13 tutorials of similar length and as a consequence, there are limitations on the amount of material which can be covered. I have tried to include appropriate references for further reading wherever discussion has been limited.
Apart from the broader basics of R, I have included a view of the future of data science where it might be of interest to researchers in the Public Health.
I have introduced the idea of Markdown in earlier courses, when RStudio converted it to nice HTML documents, but now newer versions of RStudio have made conversion to MS Word documents straightforward.
To my way of thinking, the ease with which documentation of analytical processes can be maintained now, mandates that it should always be done. Doing so aids the researcher in on-going recording of his or her own thinking, methodology, experimentation etc, and it is especially valuable when there are several approaches to choose from, requiring comparisons across perhaps complex procedures. It is also an excellent way of providing fully accessible information for supervisors or collaborators. Ultimately, this on-going, easy-to-do rigorous documentation provides a ready made basis for publication of completed research projects.
That said, with the rapidly changing experimental and analytical technology (changing data bases, analytical software etc), \emph{maintenance} of research in reproducible form is a significant issue. My own view is that given the rapid development of research in general, and the potential maintenance work-load, that we should aim for 'reproducibility' to be maintained for around five years. In some very actively developing areas, three years may be more sensible, while in other more stable areas, a longer time-frame may be manageable without undue effort. Maintaining reproducibility of software in cutting edge development will always require additional time and often may appear to be time spent for no productive outcome.
Bearing in mind that the intended audience of this course is not a group of computer scientists and some may have no experience in programming, I have taken a rather informal approach trying to introduce examples early in the course which would motivate the audience, rather than beginning formally with an extensive set of definitions of language as one might find in a computer science textbook. The \href{https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf}{'An Introduction to R'} on the CRAN web page is an excellent, more formal presentation if you prefer that approach. I would like to think that these notes and that document complement one another. The draft 'R Language Definition' on the CRAN site is still more formal.
For those with no experience in R or programming, there may seem to be an overwhelming amount of information to comprehend. I can only advise you to not panic, relax, and don't worry about things you don't yet fully understand. \emph{You cannot hope to be fully conversant in R in two weeks}! (I'm certainly not after dabbling for more than 10 years!)
One of the most important lessons in the course will be to learn how to find answers to your R questions. Modern programmers work with multiple languages, editors, protocols, operating systems etc and cannot be expected to remember function arguments, specialised syntax etc. The real skill is being able to efficiently find out how to do something - not remember how to do everything!
R too, has many thousands of functions. Google usage is fundamental. There are a number of forums on the web which are generally very friendly and supportive if you are struggling with a problem. (Nabble, Stack Overflow are a good start.) You do need to follow forum protocols - almost all require a minimal example of your problem.
There are many other sources of examples of R code, most of which you will find using Google. Don't forget the CRAN \href{https://cran.r-project.org/}{pages} - there are a lot of resources there!
The notes are a 'work-in-progress'. They have largely been derived from my own work using R over the past ten years or so, and then cast into a teaching form in workshops and classes in the Epidemiology and Global Health Unit in the Department of Public Health and Medicine at Umeå University in the last couple of years.
I apologise for 'glitches' in the text (there will be many), but at the same time, I am hopeful that the notes (and classes) will provide a good basis for your venturing into R.
The notes will probably continue development in the next couple of years - look out for updates in the GitHub repo. \\
Brian Williams
March 2019.
\section{2nd Preface by Kaspar Meili}
Sadly, due to other commitments, Brian is no longer able to continue the development of the course files to the same extent as he used to. Luckily, he agreed to let me help him maintain the course, and as a first step I uploaded the material to GitHub. We decided to publish the material under the Creative Commons Attribution 4.0 International License (CC BY): \url{https://creativecommons.org/licenses/by/4.0/}.
\end{document}
\section{Symbiosis}\label{sec:symbiosis}
Symbiosis brings Mitosis-Core and Mitosis-Stream together and demonstrates how to build a very basic but usable application.
This sample application shall allow one user to broadcast a stream that is recorded with the camera of their device, and $n$ other users shall be able to watch the video stream in real time.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{graphics/implementation/symbiosis.pdf}
\caption{Symbiosis}
\label{fig:symbiosis-implementation}
\end{figure}
The \gls{ui} of the application has an \gls{html} VideoElement and a record button. When the user presses the record button, the device starts the camera and waits for the stream. When the camera is ready, the stream is set on the ChannelManager (\vref{lst:symbiosis-start-stream}).
\begin{Listing}
\begin{lstlisting}
navigator.mediaDevices.getUserMedia({
video: true,
audio: false
}).then(
(stream) => {
mitosis.getStreamManager().setLocalStream(stream);
});
\end{lstlisting}
\caption{Access user camera and set stream}
\label{lst:symbiosis-start-stream}
\end{Listing}
The application also listens for incoming streams. Users that have already started the application will be notified of the stream, and playback will start. When a user joins later and another user has already started broadcasting, they will receive the stream as well.
To allow the application to display the stream, it has to observe the ChannelManager. When a channel is added, it has to observe the channel for a stream. When the stream is available, it can set the stream on the HTML VideoElement and start playing (\vref{lst:symbiosis-observe-stream}).
\begin{Listing}
\begin{lstlisting}
mitosis
.getStreamManager()
.observeChannelChurn()
.subscribe(
channelEv => channelEv.value
.observeStreamChurn()
.subscribe(
streamEv => videoEl.srcObject = streamEv.stream;
)
);
\end{lstlisting}
\caption{Observe ChannelManager for incoming streams}
\label{lst:symbiosis-observe-stream}
\end{Listing}
The source code for this minimal application can be seen in \vref{sec:symbiosis-soruce-code}.
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage[english]{babel}
\usepackage{amsmath, amssymb}
\usepackage[]{amsthm} %lets us use \begin{proof}
\usepackage[]{amssymb} %gives us the character \varnothing
% figure support
\usepackage{import}
\usepackage{xifthen}
\pdfminorversion=7
\usepackage{pdfpages}
\usepackage{transparent}
\newcommand{\incfig}[1]{%
\def\svgwidth{\columnwidth}
\import{./figures/}{#1.pdf_tex}
}
\pdfsuppresswarningpagegroup=1
\begin{document}
\section*{Section 2.3}
\subsection*{Exercise 2.3.1}
Prove Lemma 2.3.2. (Hint: modify the proofs of Lemmas 2.2.2, 2.2.3 and Proposition 2.2.4.)
\begin{proof}
$ $\newline
We need to show $m \times 0 = 0$ and $n \times \left( m\text{++} \right) = \left( n\times m \right) + n$ before proving that multiplication is commutative.
To show $m\times 0 = 0$ we use induction on $m$. For the base case, $0\times 0 = 0$ follows since we know that $0\times m = 0$ by definition. Now suppose inductively that $m\times 0 = 0$; we need to show $\left( m\text{++} \right) \times 0 = 0$. By definition of multiplication and the inductive hypothesis, $\left( m\text{++} \right) \times 0 = \left( m\times 0 \right) + 0 = 0 + 0 = 0$, hence the induction is closed.
To show $n \times \left( m\text{++} \right) = \left( n\times m \right) + n$, we induct on $n$. For the base case $0 \times \left( m\text{++} \right) = \left( 0 \times m \right) + 0$ follows by the definitions of addition and multiplication. Now suppose inductively $n \times \left( m\text{++} \right) = \left( n\times m \right) + n$, we need to show $n\text{++} \times \left( m\text{++} \right) = \left( n\text{++}\times m \right) + n\text{++}$. The left-hand side is $n\times \left( m\text{++} \right) + \left( m\text{++} \right) = n\times m + n + m + 1$ by definition of multiplication and the hypothesis. The right-hand side is $\left( n\times m + m \right) + n\text{++} = n\times m + n + m + 1$ by definition of multiplication. Thus both sides are equal to each other, and we have closed the induction.
To show multiplication is commutative, we induct on $n$. For the base case $0\times m = m\times 0$ follows since both sides equal zero by definition of multiplication and $m\times 0 = 0$. Now suppose inductively that $n \times m = m\times n$, we need to show $n\text{++}\times m = m\times n\text{++}$. The left-hand side is $n\times m + m$ by definition. The right-hand side is $m\times n + m$ by the lemma we've just proved, and by the hypothesis the right-hand side then equals $n\times m + m$. Thus both sides are equal to each other, and we have closed the induction.
\end{proof}
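For readability, the inductive step of the commutativity proof can also be displayed as a single chain of equalities:
\begin{align*}
\left( n\text{++} \right) \times m &= n\times m + m && \text{(definition of multiplication)}\\
&= m\times n + m && \text{(inductive hypothesis)}\\
&= m\times \left( n\text{++} \right) && \text{(the second lemma above, with the roles of the variables swapped)}.
\end{align*}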
\subsection*{Exercise 2.3.2}
Prove Lemma 2.3.3. (Hint: prove the second statement first.)
\begin{proof}
$ $\newline
The statement is equivalent to "$n\times m$ is positive iff both $n$ and $m$ are positive".
First we need to show that if $n\times m$ is positive then both $n$ and $m$ are positive. Suppose for the sake of contradiction that $n$ equals zero; then by definition of multiplication $n\times m = 0\times m = 0$, a contradiction. A similar contradiction holds for $m$ using Lemma 2.3.2. Thus we have proved this direction of the statement.
Then we need to show that if both $n$ and $m$ are positive then $n\times m$ is positive. Suppose for the sake of contradiction that $n\times m = 0$. Since $n$ is positive, by Lemma 2.2.10 there exists exactly one natural number $a$ such that $a\text{++} = n$, thus by definition of multiplication we have $n\times m = \left( a\text{++} \right) \times m = \left( a\times m \right) + m$. Since $m$ is positive, there exists a natural number $b$ such that $b\text{++} = m$, so by Lemma 2.2.3 $\left( a\times m \right) + m = \left( a\times m \right) + \left( b\text{++} \right) = \left( \left( a\times m \right) + b \right)\text{++}$, which is not equal to $0$ by Axiom 2.3, a contradiction. Thus $n\times m$ is positive.
Thus we have proved the original statement.
\end{proof}
\subsection*{Exercise 2.3.3}
Prove Proposition 2.3.5. (Hint: modify the proof of Proposition
2.2.5 and use the distributive law.)
\begin{proof}
We use induction on $a$. For the base case $\left( 0\times b \right) \times c = 0 \times \left( b\times c \right) $ follows, since the left-hand side equals $0\times c = 0$ by definition of multiplication, and the right-hand side equals $0$ since $b\times c$ is a natural number and $0$ times a natural number equals $0$. Now suppose inductively that $\left( a\times b \right) \times c = a\times \left( b\times c \right) $; we need to show $\left( a\text{++}\times b \right) \times c = a\text{++}\times \left( b\times c \right) $. The left-hand side equals $\left( a\times b + b \right) \times c$ by definition of multiplication, which then equals $\left( a\times b \right) \times c + b\times c$ by the distributive law. The right-hand side equals $a\times \left( b\times c \right) + b\times c$ by definition of multiplication. By the hypothesis both sides are equal to each other, thus we have closed the induction.
\end{proof}
\subsection*{Exercise 2.3.4}
Prove the identity $\left( a + b \right) ^2 = a^2 + 2ab + b^2$ for all natural
numbers $a$, $b$.
\begin{proof}
$\left( a+b \right) ^2 = \left( a+b \right) ^1 \left( a + b \right) = \left( a + b \right) \left( a + b \right) $ by definition of exponentiation. This equals $a\times \left( a + b \right) + b\times \left( a+b \right) = a\times a + a\times b + b\times a + b\times b$ by the distributive law. $a\times a = a^2$ and $b\times b = b^2$ by definition of exponentiation. $a\times b + b\times a = a\times b + a\times b = 2ab$ since multiplication is commutative and by the definition of multiplication. Thus we have proved the statement.
\end{proof}
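For readability, the same computation can be displayed as one chain of equalities:
\begin{align*}
\left( a+b \right)^2 &= \left( a+b \right)\left( a+b \right) && \text{(definition of exponentiation)}\\
&= a\times \left( a+b \right) + b\times \left( a+b \right) && \text{(distributive law)}\\
&= a\times a + a\times b + b\times a + b\times b && \text{(distributive law)}\\
&= a^2 + 2ab + b^2 && \text{(commutativity of multiplication)}.
\end{align*}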
\subsection*{Exercise 2.3.5}
Prove Proposition 2.3.9. (Hint: fix $q$ and induct on $n$.)
\begin{proof}
We use induction on $n$ (keep $q$ fixed). For the base case there exist natural numbers $m = 0$, $r = 0$ such that $0 \le r < q$ and $0 = mq + r$. Now suppose inductively there exist $m$, $r$ such that $0 \le r < q$ and $n = mq + r$; we need to show there exist $m'$, $r'$ such that $0 \le r' < q$ and $n\text{++} = m'q + r'$. Note that $n\text{++} = mq + r\text{++}$, and since $r < q$ we have $r\text{++} \le q$ by Proposition 2.2.12 (e). If $r\text{++} < q$, we can simply set $m' = m$ and $r' = r\text{++}$. Otherwise, if $r\text{++} = q$, then $n\text{++} = mq + q = \left( m\text{++} \right) \times q + 0$, thus we can set $m' = m\text{++}$ and $r' = 0$. Thus we have closed the induction.
\end{proof}
\end{document}
\title{(some) LaTeX environments \par for Jupyter notebook}\author{@jfbercher}
\section{Introduction}\label{introduction}
This extension for IPython 4.x or Jupyter makes it possible to use some LaTeX
commands and environments in the notebook's markdown cells.
\begin{enumerate} \item \textbf{LaTeX commands and environments}
\begin{itemize} \item support for some LaTeX commands within
markdown cells, \emph{e.g.} \texttt{\textbackslash{}textit},
\texttt{\textbackslash{}textbf}, \texttt{\textbackslash{}underline},
\texttt{author}, \texttt{\textbackslash{}title}, LaTeX comments \item
support for \textbf{theorems-like environments}, support for labels and
\textbf{cross references} \item support for \textbf{lists}:
\emph{enumerate, itemize},\\
\item limited support for a \textbf{figure environment}, \item support
for an environment \emph{listing}, \item additional \emph{textboxa}
environment \end{itemize} \item \textbf{Citations and
bibliography} \begin{itemize} \item support for
\texttt{\textbackslash{}cite} with creation of a References section
\end{itemize} \item it is possible to mix markdown and LaTeX
markup \item \textbf{Document-wide numbering of equations and
environments, support for \texttt{\textbackslash{}label} and
\texttt{\textbackslash{}ref}} \item \textbf{Configuration toolbar} \item
\textbf{LaTeX\_envs dropdown menu for a quick insertion of environments}
\item Support for \textbf{User \LaTeX definitions file} \item
Environments title/numbering can be customized by users in
\texttt{user\_envs.json} config file \item \textbf{Export to HTML and
LaTeX with a customized exporter} \item Styles can be customized in the
\texttt{latex\_env.css} stylesheet \end{enumerate}
A simple illustration is as follows: one can type the following in a
markdown cell
\begin{listing}
The dot-product is defined by equation (\ref{eq:dotp}) in theorem \ref{theo:dotp} just below:
\begin{theorem}[Dot Product] \label{theo:dotp}
Let $u$ and $v$ be two vectors of $\mathbb{R}^n$. The dot product can be expressed as
\begin{equation}
\label{eq:dotp}
u^Tv = |u||v| \cos \theta,
\end{equation}
where $\theta$ is the angle between $u$ and $v$ ...
\end{theorem}
\end{listing}
and have it rendered as
The dot-product is defined by equation (\ref{eq:dotp}) in theorem
\ref{theo:dotp} just below: \begin{theorem}[Dot Product]
\label{theo:dotp} Let \(u\) and \(v\) be two vectors of
\(\mathbb{R}^n\). The dot product can be expressed as
\begin{equation}
\label{eq:dotp}
u^Tv = |u||v| \cos \theta,
\end{equation}
where \(\theta\) is the angle between \(u\) and \(v\) \ldots{}
\end{theorem}
\subsection{What's new}\label{whats-new}
\textbf{November 2, 2016} - version 1.3.1
\begin{itemize}
\tightlist
\item
Support for \textbf{user environments configuration} file
(\texttt{user\_envs.json} in nbextensions/latex\_envs directory). This
file is included by the html export template.\\
\item
Support for \textbf{book/report style numbering} of environments, e.g.
``Example 4.2'' is example 2 in section 4.
\item
Support for \texttt{\textbackslash{}author},
\texttt{\textbackslash{}title}, and \texttt{maketitle}. Author and
title are saved in notebook metadata, used in html/latex exports. The
maketitle command also formats a title in the LiveNotebook.\\
\item
Added a Toggle menu in the config toolbar to:
\begin{itemize}
\tightlist
\item
toggle use of user's environments config file
\item
toggle \texttt{report-style} numbering
\end{itemize}
\end{itemize}
\textbf{September 18, 2016} - version 1.3
\begin{itemize}
\tightlist
\item
Support for \textbf{user personal LaTeX definitions} file
(\texttt{latexdefs.tex} in current directory). This file is included
by the html and latex export templates.\\
\item
Style for nested enumerate environments added in
\texttt{latex\_envs.css}
\item
Added a Toggle menu in the config toolbar to:
\begin{itemize}
\tightlist
\item
toggle the display of the LaTeX\_envs dropdown menu,
\item
toggle the display of labels keys,
\item
toggle use of user's LaTeX definition file
\end{itemize}
\item
\textbf{Cross references now use the true environment number instead
of the reference/label key}. \textbf{References are updated
immediately}. This works \textbf{document wide} and works for pre and
post references
\item
Support for optional parameters in theorem-like environments
\item
Support for spacings in textmode, eg \texttt{\textbackslash{}par},
\texttt{\textbackslash{}vspace,\ \textbackslash{}hspace}
\item
Support for LaTeX comments (\%) in markdown cells
\item
Reworked loading and merging of system/document configuration
parameters
\end{itemize}
\textbf{August 28, 2016} - version 1.2
\begin{itemize}
\item
\textbf{Added support for nested environments of the same type}.
Nesting environments of different type was already possible, but there
was a limitation for nesting environments of the same kind; eg itemize
in itemize in itemize. This was due to to the fact that regular
expressions are not suited to recursivity. I have developped a series
of functions that enable to extract nested environments and thus cope
with such situations.
\item
Corrected various issues, eg
\href{https://github.com/ipython-contrib/jupyter_contrib_nbextensions/issues/731}{\#731},
\href{https://github.com/ipython-contrib/jupyter_contrib_nbextensions/issues/720}{\#720}
where the content of nested environments was incorrectly converted to
markdown.
\item
Completely reworked the configuration toolbar. Re-added tips.
\item
Added a toggle-button for the LaTeX\_envs menu
\item
Added system parameters that can be specified using the
\href{https://github.com/Jupyter-contrib/jupyter_nbextensions_configurator/tree/master/src/jupyter_nbextensions_configurator/static/nbextensions_configurator}{nbextensions\_configurator}.
Thus reworked the configuration loading/saving.
\item
Reworked extension loading. It now detects if the notebook is fully
loaded before loading itself.
\end{itemize}
\textbf{August 03, 2016} - version 1.13
\begin{itemize}
\item
Added a template to also keep the toc2 features when exporting to
html:
\begin{verbatim}
jupyter nbconvert --to html_toclenvs FILE.ipynb
\end{verbatim}
\item
Added a dropdown menu that makes it possible to insert all main LaTeX\_envs
environments using a simple click. Two keyboard shortcuts (Ctrl-E and
Ctrl-I) for equations and itemize are also provided. More environments
and shortcuts can be added in the file \texttt{envsLatex.js}.
\item
Added a link in the general help menu that points to the
documentation.
\end{itemize}
\textbf{July 27, 2016} - version 1.1
\begin{itemize}
\item
In this version I have reworked \textbf{equation numbering}. In the
previous version, I used a specialized counter and detected equation
rendering in order to update this counter. Meanwhile, this feature has been
introduced in \texttt{MathJax} and now we rely on the MathJax
implementation; rendering is significantly faster. We still keep
the capability of displaying only equation labels (instead of
numbers). The numbering is automatically updated and is document-wide.
\item
I have completely reworked the \textbf{notebook conversion} to plain
\LaTeX and html. We provide specialized exporters, pre and post
processors, templates. We also added entry-points to simplify the
conversion process. It is now as simple as
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{jupyter} \NormalTok{nbconvert --to html_with_lenvs FILE.ipynb}
\end{Highlighting}
\end{Shaded}
to convert \texttt{FILE.ipynb} into html while keeping all the
features of the \texttt{latex\_envs} notebook extension in the
converted version.
\end{itemize}
\section{Main features}\label{main-features}
\subsection{Implementation principle}\label{implementation-principle}
The main idea is to override the standard Markdown renderer in order to
add a \emph{small} parsing of LaTeX expressions and environments. This
heavily uses regular expressions. The LaTeX expressions are then rendered
using an html version. For instance
\texttt{\textbackslash{}underline\ \{something\}} is rendered as
\texttt{\textless{}u\textgreater{}\ something\ \textless{}/u\textgreater{}},
that is \underline{something}. The environments are replaced by an html
tag with a class derived from the name of the environment. For example,
a \texttt{definition} environment will be replaced by an html rendering
corresponding to the class \texttt{latex\_definition}. The styles
associated with the different classes are specified in
\texttt{latex\_env.css}. These substitutions are implemented in
\texttt{thsInNb4.js}.
\subsection{Support for simple LaTeX
commands}\label{support-for-simple-latex-commands}
We also added some LaTeX commands (e.g. \texttt{\textbackslash{}textit},
\texttt{\textbackslash{}textbf}, \texttt{\textbackslash{}underline}) --
this is useful in the case of copy-paste from a LaTeX document. The
extension also supports some textmode spacings, namely
\texttt{\textbackslash{}par},
\texttt{\textbackslash{}vspace,\ \textbackslash{}hspace} as well as
\texttt{\textbackslash{}title}, \texttt{\textbackslash{}author},
\texttt{maketitle} and LaTeX comments \% in markdown cells. Labels and
cross-references are supported, including for equations.
\subsection{Available environments}\label{available-environments}
\begin{itemize}
\tightlist
\item
\textbf{theorems-like environments}: \emph{property, theorem, lemma,
corollary, proposition, definition,remark, problem, exercise,
example},
\item
\textbf{lists}: \emph{enumerate, itemize},\\
\item
limited support for a \emph{figure} environment,
\item
an environment \emph{listing},
\item
\emph{textboxa}, wich is a \texttt{textbox} environment defined as a
demonstration (see below).
\end{itemize}
More environments can be added easily in the user\_envs config file
\texttt{user\_envs.json} or directly in the javascript source file
\texttt{thmsInNb4.js}. The rendering is done according to the stylesheet
\texttt{latex\_env.css}, which can be customized.
\begin{remark} When exporting to html, the
\texttt{latex\_env.css} file honored is the file on the github CDN.
However, customized css can be added in a \texttt{custom.css} file that
must reside in the same directory as the notebook itself. The reason for
that is that the \texttt{css} file must be in the same directory as the
notebook file for inclusion, which means copying it in each working
directory. As the rendering of the html file obtained is done using the
original javascript code, the same is true for the source files;
therefore it is better to customize environments in
\texttt{user\_envs.json} which is taken into account when exporting to
html.\\
\end{remark}
\subsection{Automatic numerotation, labels and
references}\label{automatic-numerotation-labels-and-references}
Several counters for numbering are implemented: counters for problem,
exercise, example, property, theorem, lemma, corollary, proposition,
definition, remark, and figure are available. Mathjax-equations with a
label are also numbered document-wide. An anchor is created for any
label, which makes it possible to link things within the document:
\texttt{\textbackslash{}label} and \texttt{\textbackslash{}ref} are both
supported. A limitation was that numbering was updated (incremented)
each time a cell was rendered. Document-wide automatic updating has been
implemented since version 1.3. A toolbar button is provided to reset the
counters and refresh the rendering of the whole document (this is still
useful for citations and bibliography refresh).
\label{example:mixing} A simple example is as follows, featuring
automatic numerotation, and the use of labels and references. Also note
that standard markdown can be present in the environment and is
interpreted. \emph{The rendering is done according to the stylesheet
\texttt{latex\_env.css}, which of course, can be customized to specific
uses and tastes}.
\begin{listing}
\begin{definition} \label{def:FT}
Let $x[n]$ be a sequence of length $N$. Then, its **Fourier transform** is given by
\begin{equation}
\label{eq:FT}
X[k]= \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j2\pi \frac{kn}{N}}
\end{equation}
\end{definition}
\end{listing}
\begin{definition} \label{def:FT} Let \(x[n]\) be a sequence of
length \(N\). Then, its \textbf{Fourier transform} is given by
\begin{equation}
\label{eq:FT2}
X[k]= \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j2\pi \frac{kn}{N}}
\end{equation}
\end{definition}
It is now possible to refer to the definition and to the equation by
their labels, as in:
\begin{listing}
As an example of Definition \ref{def:FT}, consider the Fourier transform (\ref{eq:FT2}) of a pure cosine wave given by
$$
x[n]= \cos(2\pi k_0 n/N),
$$
where $k_0$ is an integer.
\end{listing}
As an example of Definition \ref{def:FT}, consider the Fourier transform
(\ref{eq:FT2}) of a pure cosine wave given by \[
x[n]= \cos(2\pi k_0 n/N),
\] where \(k_0\) is an integer. Its Fourier transform is given by \[
X[k] = \frac{1}{2} \left( \delta[k-k_0] + \delta[k+k_0] \right),
\] modulo \(N\).
\subsection{Bibliography}\label{bibliography}
\subsubsection{Usage}\label{usage}
It is possible to cite bibliographic references using the standard LaTeX
\texttt{\textbackslash{}cite} mechanism. The extension looks for the
references in a bibTeX file, by default \texttt{biblio.bib} in the same
directory as the notebook. The name of this file can be modified in the
configuration toolbar. It is then possible to cite works in the
notebook, e.g.
\begin{listing}
The main paper on IPython is definitively \cite{PER-GRA:2007}. Other interesting references are certainly \cite{mckinney2012python, rossant2013learning}. Interestingly, a presentation of the IPython notebook has also been published recently in Nature \cite{shen2014interactive}.
\end{listing}
The main paper on IPython is definitively \cite{PER-GRA:2007}. Other
interesting references are certainly
\cite{mckinney2012python, rossant2013learning}. Interestingly, a
presentation of the IPython notebook has also been published recently in
Nature \cite{shen2014interactive}.
\subsubsection{Implementation}\label{implementation}
The implementation uses several snippets from the nice
\href{https://bitbucket.org/ipre/calico/downloads/}{icalico-document-tools}
extension that also considers the rendering of citations in the
notebook. We also use a modified version of the
\href{https://code.google.com/p/bibtex-js/}{bibtex-js} parser for
reading the references in the bibTeX file. The different functions are
implemented in \texttt{bibInNb4.js}. The rendering of citations calls
can adopt three styles (Numbered, by key or apa-like) -- this can be
selected in the configuration toolbar. It is also possible to customize
the rendering of references in the reference list. A citation template
is provided in the beginning of file \texttt{latex\_envs.js}:
\begin{verbatim}
var cit_tpl = {
// feel free to add more types and customize the templates
'INPROCEEDINGS': '%AUTHOR:InitialsGiven%, ``_%TITLE%_\'\', %BOOKTITLE%, %MONTH% %YEAR%.',
... etc
\end{verbatim}
The keys are the main types of documents, eg inproceedings, article,
inbook, etc. To each key is associated a string where the \%KEYWORDS\%
are the fields of the bibtex entry. The keywords are replaced by the
corresponding bibtex entry value. The template string can be formatted with
additional words and effects (markdown or LaTeX commands are
supported).
\subsection{Figure environment}\label{figure-environment}
Finally, it is sometimes useful to integrate a figure within a markdown
cell. The standard markdown markup for that is
\texttt{!{[}link{]}(image)}, but a limitation is that the image cannot
be resized, cannot be referenced and is not numbered. Furthermore, it
can be useful for re-using existing code. Therefore, we have added
limited support for the \texttt{figure} environment. This makes it possible to do
something like
\begin{listing}
\begin{figure}
\centerline{\includegraphics[width=10cm]{example.png}}
\caption{\label{fig:example} This is an example of figure included using LaTeX commands.}
\end{figure}
\end{listing}
which renders as
\begin{figure}
\centerline{\includegraphics[width=10cm]{example.png}}
\caption{\label{fig:example} This is an example of figure included using LaTeX commands.}
\end{figure}
Of course, this Figure can now be referenced:
\begin{listing}
Figure \ref{fig:example} shows a second filter with input $X_2$, output $Y_2$ and an impulse response denoted as $h_2(n)$
\end{listing}
Figure \ref{fig:example} shows a second filter with input \(X_2\),
output \(Y_2\) and an impulse response denoted as \(h_2(n)\)
\subsection{figcaption}\label{figcaption}
For Python users, we have added in passing a simple function in the
\texttt{latex\_envs.py} library.
This function can be imported classically, eg
\texttt{from\ latex\_envs.latex\_envs\ import\ figcaption} (or
\texttt{from\ jupyter\_contrib\_nbextensions.nbconvert\_support.latex\_envs\ import\ figcaption}
if you installed from the jupyter\_contrib repo).
Then, this function makes it possible to specify a caption and a label for the
next plot. In turn, when exporting to \LaTeX, the corresponding plot is
converted to a nice figure environment with a label and a caption.
%
\begin{lstlisting}
%matplotlib inline
import matplotlib.pyplot as plt
from jupyter_contrib_nbextensions.nbconvert_support.latex_envs import figcaption
from numpy import pi, sin, arange
# set the caption and label of the next plot (call signature assumed from
# the latex_envs library; adjust if your version differs)
figcaption("This is a nice sine wave", label="fig:mysin")
plt.plot(sin(2*pi*0.01*arange(100)))
\end{lstlisting}%
\begin{verbatim}
[<matplotlib.lines.Line2D at 0x7f8a1872ba20>]
\end{verbatim}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{latex_env_doc_files/latex_env_doc_23_2.png}
\caption{This is a nice sine wave}
\label{fig:mysin}
\end{figure}
% { \hspace*{\fill} \\}
\subsection{Other features}\label{other-features}
\begin{itemize}
\item
  As shown in the examples, e.g.\ \ref{example:mixing} (or just below),
  \textbf{it is possible to mix LaTeX and markdown markup in
  environments}.
\item
  Support for \textbf{line-comments}: lines beginning with a \% are
  masked when rendering.
\item
  Support for \textbf{linebreaks}:
  \texttt{\textbackslash{}par\_}, where \_ denotes any space, tab,
  linefeed, or carriage return, is replaced by a linebreak.
\item
  Environments can be nested, e.g.:
\begin{listing}
This is an example of nested environments, with equations inside\\
\begin{proof} Demo
% This is a comment
\begin{enumerate}
\item $$ \left\{ p_1, p_2, p_3 \ldots p_n \right\} $$
\item A **nested enumerate**
\item second item
\begin{enumerate}
\item $ \left\{ p_1, p_2, p_3 \ldots p_n \right\} $
\item And *another one*
\item second item
\begin{enumerate}
\item $$ \left\{ p_1, p_2, p_3 \ldots p_n \right\} $$
\item second item
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{proof}
\end{listing}
which results in
\end{itemize}
This is an example of nested environments, with equations inside\\
\begin{proof} Demo % This is a
comment
\begin{enumerate} \item
\begin{equation}\label{eq:}
\left\{ p_1, p_2, p_3 \ldots p_n \right\}
\end{equation}
\[ \left\{ p_1, p_2, p_3 \ldots p_n \right\} \] \item A \textbf{nested
enumerate} \item second item \begin{enumerate} \item
\(\left\{ p_1, p_2, p_3 \ldots p_n \right\}\) \item And \emph{another
one} \item second item \begin{enumerate} \item
\[ \left\{ p_1, p_2, p_3 \ldots p_n \right\} \] \item second item
\end{enumerate} \end{enumerate} \end{enumerate}
\end{proof}
\subsection{User interface}\label{user-interface}
\subsubsection{Buttons on main toolbar}\label{buttons-on-main-toolbar}
On the main toolbar, the extension provides three buttons
\includegraphics{main_toolbar.png}. The first one refreshes the
numbering of equations and references in the whole document. The
second one reads the bibliography bibTeX file and creates
(or updates) the reference section. Finally, the third one is a toggle
button that opens or closes the configuration toolbar.
\subsubsection{Configuration toolbar}\label{configuration-toolbar}
The configuration toolbar \includegraphics{config_toolbar.png} is used
to enter configuration options for the extension.
First, the \texttt{LaTeX\_envs} title links to this
documentation. Then, the bibliography text input can be used to indicate
the name of the bibTeX file. If this file is not found when the user
creates the reference section, the section will indicate that the
file was not found. The \texttt{References} drop-down menu selects the
style of reference calls. The \texttt{Equations} input box sets the
starting number for equation numbering (this may be useful
for complex documents split over several files/parts). The \texttt{Equations}
drop-down menu lets the user choose between numbering equations and displaying
their labels instead. The next two buttons toggle the display of
the \texttt{LaTeX\_envs} environments insertion menu and the display of
LaTeX labels. Finally, the \texttt{Toggles} dropdown menu toggles
the state of several parameters. All these configuration options
are stored in the notebook's metadata (and restored on reload).
The \texttt{Toggles} dropdown menu \includegraphics{Toggles.png}
toggles the state of several configuration options:
\begin{itemize}
\tightlist
\item
display the \texttt{LaTeX\_envs} insertion menu or not,
\item
show labels anchors,
\item
use the user's own \LaTeX{} definitions (loads the \texttt{latexdefs.tex}
file from the current document directory),
\item
load user's environments configuration (file \texttt{user\_envs.json}
in \texttt{nbextensions/latex\_envs} directory),
\item
select ``report style'' numbering of environments
\end{itemize}
\subsection{\texorpdfstring{The \texttt{LaTeX\_envs} insertion
menu}{The LaTeX\_envs insertion menu}}\label{the-latexux5fenvs-insertion-menu}
The \texttt{LaTeX\_envs} insertion menu
\includegraphics{LaTeX_envs_menu.png} enables quick insertion of LaTeX
environments, some with a keyboard shortcut (this can be customized in
\texttt{envsLatex.js}). In addition, any selected text is inserted into
the environment.
\section{Conversion to LaTeX and
HTML}\label{conversion-to-latex-and-html}
The extension works in the live notebook. Since it relies on JavaScript,
the notebook does not render as is in services such as
\texttt{nbviewer} or the \texttt{github} viewer. Similarly,
\texttt{nbconvert} does not know about the LaTeX constructs which are used
here and therefore does not fully convert notebooks using this
extension.
Therefore, we provide specialized templates and exporters to achieve
these conversions.
\subsection{Conversion to HTML}\label{conversion-to-html}
We provide a template \texttt{latex\_envs.tpl} and an exporter class
\texttt{LenvsHTMLExporter} (in library \texttt{latex\_envs.py}). Using
that class, conversion simply amounts to
\begin{verbatim}
jupyter nbconvert --to latex_envs.LenvsHTMLExporter FILE.ipynb
\end{verbatim}
A shortcut is also provided
\begin{verbatim}
jupyter nbconvert --to html_with_lenvs FILE.ipynb
\end{verbatim}
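For script-based workflows, the exporter class can also be used programmatically. The following is a minimal sketch assuming the standard nbconvert exporter API (\texttt{from\_notebook\_node}) and an import path analogous to the \texttt{figcaption} import shown earlier; adapt it to your installation:
\begin{lstlisting}
# Minimal sketch: programmatic conversion with the LenvsHTMLExporter class.
# The import path mirrors the figcaption import above and may differ
# depending on how the package was installed.
import nbformat
from latex_envs.latex_envs import LenvsHTMLExporter

nb = nbformat.read("FILE.ipynb", as_version=4)        # load the notebook
body, resources = LenvsHTMLExporter().from_notebook_node(nb)

with open("FILE.html", "w") as f:                     # write the rendered HTML
    f.write(body)
\end{lstlisting}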
It should be noted that the rendering is done exactly in the same way as
in the live notebook. Actually, it is the very same JavaScript which is
run in the HTML file. The JavaScript functions are available in the
extension's GitHub repository as well as in the
\texttt{jupyter\_notebook\_extensions} CDN, which means that the
rendering of the HTML files requires an internet connection (this is
also true for the rendering of equations with MathJax).
Another template, \texttt{latex\_envs\_toc.tpl}, is provided which also
keeps the toc2 features when exporting to HTML (\emph{it even
works if you do not have the \texttt{toc2} extension!}):
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{jupyter} \NormalTok{nbconvert --to html_with_toclenvs FILE.ipynb}
\end{Highlighting}
\end{Shaded}
\textbf{If you use the version included in the
jupyter\_notebook\_extensions collection}, the entry-points (conversion
shortcuts) are a little different: use instead
\begin{itemize}
\item
\begin{verbatim}
jupyter nbconvert --to html_lenvs FILE.ipynb
\end{verbatim}
\item
\begin{verbatim}
jupyter nbconvert --to html_toclenvs FILE.ipynb
\end{verbatim}
\end{itemize}
\subsection{Conversion to LaTeX}\label{conversion-to-latex}
We provide two templates, \texttt{thmsInNb\_article.tplx} and
\texttt{thmsInNb\_report.tplx}, for article and report styles
respectively. One can also use the standard article, report, and book
templates provided with nbconvert; we have simply improved some of the
internal styles. More importantly, we provide an exporter class
\texttt{LenvsLatexExporter} (also in the \texttt{latex\_envs.py} library).
Using that class, conversion simply amounts to
\begin{verbatim}
jupyter nbconvert --to latex_envs.LenvsLatexExporter FILE.ipynb
\end{verbatim}
A shortcut is also provided
\begin{verbatim}
jupyter nbconvert --to latex_with_lenvs FILE.ipynb
\end{verbatim}
In addition, we provide several further options:
\begin{itemize}
\tightlist
\item
  \textbf{removeHeaders}: remove headers and footers (default: false),
\item
  \textbf{figcaptionProcess}: process figcaptions (default: true),
\item
  \textbf{tocrefRemove}: remove tocs and reference sections, plus some
  cleaning (default: true).
\end{itemize}
These options can be specified on the command line, e.g.\ as
\begin{verbatim}
jupyter nbconvert --to latex_with_lenvs --LenvsLatexExporter.removeHeaders=True --LenvsLatexExporter.tocrefRemove=False FILE.ipynb
\end{verbatim}
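Since these options are exposed as exporter configuration options on the command line, a sketch of setting them programmatically through a traitlets \texttt{Config} object (import path assumed, as above) is:
\begin{lstlisting}
# Sketch: passing the exporter options through a traitlets Config object,
# mirroring the command-line flags above.
from traitlets.config import Config
from latex_envs.latex_envs import LenvsLatexExporter

c = Config()
c.LenvsLatexExporter.removeHeaders = True
c.LenvsLatexExporter.tocrefRemove = False
exporter = LenvsLatexExporter(config=c)
\end{lstlisting}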
\textbf{If you use the version included in the
jupyter\_notebook\_extensions collection}, the entry-points (conversion
shortcuts) are a little different: use instead
\begin{verbatim}
jupyter nbconvert --to latex_lenvs FILE.ipynb
\end{verbatim}
\begin{example} As an example, the present document has
been converted using
\begin{verbatim}
jupyter nbconvert --to latex_with_lenvs --LenvsLatexExporter.removeHeaders=True latex_env_doc.ipynb
\end{verbatim}
Then the resulting file (without header/footer) has been included in the
main file \texttt{documentation.tex}, where some LaTeX definitions of
environments are done (namely listings, colors, etc) and compiled using
\begin{itemize}
\tightlist
\item
\texttt{xelatex\ -interaction=nonstopmode\ documentation}
\item
\texttt{bibtex\ documentation}
\item
\texttt{xelatex\ -interaction=nonstopmode\ documentation}
\end{itemize}
The output can be consulted \href{documentation.pdf}{here}.\\
\end{example}
\section{Installation}\label{installation}
The extension consists of a package that includes a JavaScript notebook
extension. Since Jupyter 4.2, this is the recommended way to distribute
nbextensions. The extension can be installed
\begin{itemize}
\tightlist
\item
from the master version on the github repo (this will be always the
most recent version)
\item
via pip for the version hosted on Pypi
\item
as part of the great
\href{https://github.com/ipython-contrib/Jupyter-notebook-extensions}{Jupyter-notebook-extensions}
collection. Follow the instructions there for installing. Once this is
done, you can open a tab at
\texttt{http://localhost:8888/nbextensions} to enable and configure
the various extensions.
\end{itemize}
From the GitHub repo or from PyPI:
\begin{itemize}
\tightlist
\item
\textbf{step 1}: install the package
\begin{itemize}
\tightlist
\item
\texttt{pip3\ install\ https://github.com/jfbercher/jupyter\_latex\_envs/archive/master.zip\ {[}-\/-user{]}{[}-\/-upgrade{]}}
\item
{ or}
\texttt{pip3\ install\ jupyter\_latex\_envs\ {[}-\/-user{]}{[}-\/-upgrade{]}}
\item
{ or} clone the repo and install it:
\texttt{git\ clone\ https://github.com/jfbercher/jupyter\_latex\_envs.git}
followed (from the cloned directory) by \texttt{python3\ setup.py\ install}
\end{itemize}
\end{itemize}
With Jupyter \textgreater{}= 4.2,
\begin{itemize}
\item
\textbf{step 2}: install the notebook extension
\begin{verbatim}
jupyter nbextension install --py latex_envs [--user]
\end{verbatim}
\item
\textbf{step 3}: and enable it
\begin{verbatim}
jupyter nbextension enable latex_envs [--user] --py
\end{verbatim}
\end{itemize}
For Jupyter versions before 4.2, the situation is trickier since you
will have to find the location of the source files (instructions adapted
from @jcb91, found
\href{https://github.com/jcb91/jupyter_highlight_selected_word}{here}):
execute
\begin{verbatim}
python -c "import os.path as p; from latex_envs import __file__ as f, _jupyter_nbextension_paths as n; print(p.normpath(p.join(p.dirname(f), n()[0]['src'])))"
\end{verbatim}
Then, issue
\begin{verbatim}
jupyter nbextension install <output source directory>
jupyter nbextension enable latex_envs/latex_envs
\end{verbatim}
where \texttt{\textless{}output\ source\ directory\textgreater{}} is the
output of the python command.
\section{Customization}\label{customization}
\subsection{Configuration parameters}\label{configuration-parameters}
Some configuration parameters can be specified system-wide using the
\texttt{nbextension\_configurator}. For that, open a browser at
\url{http://localhost:8888/nbextensions/} -- the exact address may
change, e.g.\ if you use JupyterHub or a non-standard port. You
will then be able to change the default values of the boolean options:
\begin{itemize}
\tightlist
\item
  \texttt{LaTeX\_envs} menu (insert environments) present,
\item
  label equations with numbers (otherwise with their
  \texttt{\textbackslash{}label} key),
\item
  number environments as section.num,
\item
  use customized environments as given in \texttt{user\_envs.json} (in
  the extension directory),
\end{itemize}
and enter a default filename for the bibTeX file (in the document
directory).
All these values can also be changed per document; such values are
stored in the notebook's metadata.
\subsection{User environments
configuration}\label{user-environments-configuration}
Environments can be customized in the file \texttt{user\_envs.json},
located in the \texttt{nbextensions/latex\_envs} directory. It is even
possible to add \emph{new} environments. This file is read at startup
(or when using the corresponding toggle option in the \texttt{Toggles}
menu) and merged with the standard configuration. An example is provided
as \texttt{example\_user\_envs.json}. For each (new/modified)
environment, one has to provide (i) the name of the environment, (ii) its
title, and (iii) the name of the associated counter used for numbering it; e.g.
\begin{verbatim}
"myenv": {
"title": "MyEnv",
"counterName": "example"
},
\end{verbatim}
Available counters are problem, exercise, example, property, theorem,
lemma, corollary, proposition, definition, remark, and figure.
\subsection{Styling}\label{styling}
The document title and the document author (as specified by
\texttt{\textbackslash{}title} and \texttt{\textbackslash{}author}) are
formatted using the \texttt{maketitle} command according to the
\texttt{.latex\_maintitle} and \texttt{.latex\_author} div styles.
Each environment is formatted according to the div style
\texttt{.latex\_environmentName}, e.g. \texttt{.latex\_theorem},
\texttt{.latex\_example}, etc. The titles of environments are formatted
with respect to \texttt{.latex\_title} and the optional parameter with
respect to \texttt{.latex\_title\_opt}. Images are displayed using the
style specified by \texttt{.latex\_img} and their captions using
\texttt{.caption}. Finally, enumerate environments are formatted
according to the \texttt{.enum} style; similarly, itemize environments
are formatted using the \texttt{.item} style.
These styles can be customized either in the \texttt{latex\_envs.css}
file or, better, in a \texttt{custom.css} file in the document directory.
\section{Usage and further examples}\label{usage-and-further-examples}
\subsection{First example (continued)}\label{first-example-continued}
We continue the first example on the Fourier transform (Definition
\ref{def:FT}) in order to show that, of course, we can illustrate things
using simple code. Since the Fourier transform is an essential tool in
signal processing, we highlight this using the \texttt{textboxa}
environment -- which is defined here in the CSS, and which one should
also define in the LaTeX counterpart:
\begin{listing}
\begin{textboxa}
The Fourier transform is an extremely useful tool to have in your toolbox!
\end{textboxa}
\end{listing}
\begin{textboxa}
The Fourier transform is an extremely useful tool to have in your toolbox!
\end{textboxa}
The Fourier transform of a pure cosine is given by \[
X[k] = \frac{1}{2} \left( \delta[k-k_0] + \delta[k+k_0] \right),
\] modulo \(N\). This is illustrated in the following simple script:
%
\begin{lstlisting}
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft
k0=4; N=128; n=np.arange(N); k=np.arange(N)
x=np.sin(2*np.pi*k0*n/N)
X=fft(x)
plt.stem(k,np.abs(X))
plt.xlim([0, 20])
plt.title("Fourier transform of a cosine")
_=plt.xlabel("Frequency index (k)")
\end{lstlisting}%
\begin{center}
\adjustimage{max size={0.6\linewidth}{0.6\paperheight}}{latex_env_doc_files/latex_env_doc_46_0.png}
\end{center}
% { \hspace*{\fill} \\}
\subsection{Second example}\label{second-example}
This example shows a series of environments, with different facets:
\textbf{links, references, markdown and/or LaTeX formatting within
environments}. The listing of environments below is typed using the
environment \emph{listing}\ldots{}
\begin{listing}
\begin{definition} \label{def:diffeq}
We call \textbf{difference equation} an equation of the form
$$
\label{eq:diffeq}
y[n]= \sum_{k=1}^{p} a_k y[n-k] + \sum_{i=0}^q b_i x[n-i]
$$
\end{definition}
\begin{property}
If all the $a_k$ in equation (\ref{eq:diffeq}) of definition \ref{def:diffeq} are zero, then the filter has a **finite impulse response**.
\end{property}
\begin{proof}
Let $\delta[n]$ denote the Dirac impulse. Take $x[n]=\delta[n]$ in (\ref{eq:diffeq}). This yields, by definition, the impulse response:
$$
\label{eq:fir}
h[n]= \sum_{i=0}^q b_i \delta[n-i],
$$
which has finite support.
\end{proof}
\begin{theorem}
The poles of a causal stable filter are located within the unit circle in the complex plane.
\end{theorem}
\begin{example} \label{ex:IIR1}
Consider $y[n]= a y[n-1] + x[n]$. The pole of the transfer function is $z=a$. The impulse response $h[n]=a^n$ has infinite support.
\end{example}
In the following exercise, you will check that the filter is stable iff $|a|<1$.
\begin{exercise}\label{ex:exofilter}
Consider the filter defined in Example \ref{ex:IIR1}. Using the **function** `lfilter` of scipy, compute and plot the impulse response for several values of $a$.
\end{exercise}
\end{listing}
The lines above are rendered as follows (of course everything can be
tailored in the stylesheet): %
\begin{definition}
\label{def:diffeq} We call \textbf{difference equation} an equation of
the form
\begin{equation}
\label{eq:diffeq}
y[n]= \sum_{k=1}^{p} a_k y[n-k] + \sum_{i=0}^q b_i x[n-i]
\end{equation}
\end{definition} %
Properties of the filter are linked to
the coefficients of the difference equation. For instance, an immediate
property is %
% this is a comment
\begin{property}
If all the \(a_k\) in equation (\ref{eq:diffeq}) of definition
\ref{def:diffeq} are zero, then the filter has a \textbf{finite impulse
response}. \end{property} %
\begin{proof} Let
\(\delta[n]\) denote the Dirac impulse. Take \(x[n]=\delta[n]\) in
(\ref{eq:diffeq}). This yields, by definition, the impulse response:
\begin{equation}
\label{eq:fir}
h[n]= \sum_{i=0}^q b_i \delta[n-i],
\end{equation}
which has finite support. \end{proof}
%
\begin{theorem} The poles of a causal stable filter are
located within the unit circle in the complex plane.
\end{theorem} %
\begin{example} \label{ex:IIR1}
Consider \(y[n]= a y[n-1] + x[n]\). The pole of the transfer function is
\(z=a\). The impulse response \(h[n]=a^n\) has infinite support.
\end{example}
In the following exercise, you will check that the filter is stable iff
\(|a| < 1\). %
\begin{exercise}\label{ex:exofilter}
Consider the filter defined in Example \ref{ex:IIR1}. Using the
\textbf{function} \texttt{lfilter} of scipy, compute and plot the
impulse response for several values of \(a\). \end{exercise}
\begin{listing}
The solution of exercise \ref{ex:exofilter}, which uses a difference equation as in Definition \ref{def:diffeq}:
\end{listing}
The solution of exercise \ref{ex:exofilter}, which uses a difference
equation as in Definition \ref{def:diffeq}:
%
\begin{lstlisting}
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import lfilter
d=np.zeros(100); d[0]=1 #dirac impulse
alist=[0.2, 0.8, 0.9, 0.95, 0.99, 0.999, 1.001, 1.01]
for a in alist:
h=lfilter([1], [1, -a],d)
_=plt.plot(h, label="a={}".format(a))
plt.ylim([0,1.5])
plt.xlabel('Time')
_=plt.legend()
\end{lstlisting}%
\begin{center}
\adjustimage{max size={0.6\linewidth}{0.6\paperheight}}{latex_env_doc_files/latex_env_doc_52_0.png}
\end{center}
% { \hspace*{\fill} \\}
\subsection{Third example}\label{third-example}
This example shows that environments like itemize or enumerate are also
available. As already indicated, this is useful for copying text from a
TeX file. Following the same idea, text formatting commands such as
\texttt{\textbackslash{}textit}, \texttt{\textbackslash{}textbf},
\texttt{\textbackslash{}underline}, etc.\ are also available.
\begin{listing}
The following \textit{environments} are available:
\begin{itemize}
\item \textbf{Theorems and likes}
\begin{enumerate}
\item theorem,
\item lemma,
\item corollary
\item ...
\end{enumerate}
\item \textbf{exercises}
\begin{enumerate}
\item problem,
\item example,
\item exercise
\end{enumerate}
\end{itemize}
\end{listing}
which gives\ldots{}
The following \textit{environments} are available:
\begin{itemize} \item \textbf{Theorems and likes}
\begin{enumerate} \item theorem, \item lemma, \item corollary \item
\ldots{} \end{enumerate} \item \textbf{exercises}
\begin{enumerate} \item problem, \item example, \item exercise
\end{enumerate} \end{itemize}
\section{Disclaimer, sources and
thanks}\label{disclaimer-sources-and-thanks}
Originally, I used a piece of code from the nice online markdown editor
\texttt{stackedit}
\url{https://github.com/benweet/stackedit/issues/187}, where the authors
also considered the problem of incorporating LaTeX markup in their
markdown.
I also studied and used examples and code from
\url{https://github.com/ipython-contrib/IPython-notebook-extensions}.
\begin{itemize}
\item
  This is done in the hope that it can be useful. However, there are
  many possible improvements, in both the code and the documentation.
  \textbf{Contributions will be welcome and deeply appreciated.}
\item
  If you have issues, please report them on the
  \href{https://github.com/jfbercher/jupyter_latex_envs/issues}{issue
  tracker}.
\end{itemize}
\textbf{Self-Promotion --} Like \texttt{latex\_envs}? Please star and
follow the
\href{https://github.com/jfbercher/jupyter_latex_envs}{repository} on
GitHub.
\documentclass[11pt, letterpaper, twoside]{article}
\usepackage{risk_price_inference}
\usepackage[autopunct=true, hyperref=true, doi=false, isbn=false, natbib=true,
url=false, eprint=false, style=chicago-authordate]{biblatex}
\addbibresource{risk_bibliography.bib}
\author{Xu Cheng\thanks{University of Pennsylvania, The Perelman Center for Political Science and Economics, 133 South 36\textsuperscript{th} Street, Philadelphia, PA 19104, \href{mailto:[email protected]}{[email protected]}} \and Eric Renault\thanks{Brown University, Department of Economics -- Box B, 64 Waterman Street, Providence, RI 02912, \href{mailto:[email protected]}{eric\[email protected]}} \and Paul Sangrey\thanks{University of Pennsylvania, The Perelman Center for Political Science and Economics, 133 South 36\textsuperscript{th} Street, Philadelphia, PA 19104, \href{mailto:[email protected]}{[email protected]}}}
\title{Identification Robust Inference for Risk Prices in Structural Stochastic Volatility Models}
\date{\href{http://sangrey.io/risk_price_inference.pdf}{Current Version} \protect\\ This Version: \today}
\begin{document}
\begin{titlepage}
\maketitle
\thispagestyle{empty}
\addtocounter{page}{-1}
\begin{abstract}
\singlespacing \noindent
In structural stochastic volatility asset pricing models, changes in volatility affect risk premia through two channels: (1) the investor's willingness to bear high volatility in order to get high expected returns as measured by the market return risk price, and (2) the investor’s direct aversion to changes in future volatility as measured by the volatility risk price. Disentangling these channels is difficult and poses a subtle identification problem that invalidates standard inference. We adopt the discrete-time exponentially affine model of \textcite{han2018leverage}, which links the identification of the volatility risk price to the leverage effect. In particular, we develop a minimum distance criterion that links the market return risk price, the volatility risk price, and the leverage effect to well-behaved reduced-form parameters that govern the return and volatility's joint distribution. The link functions are almost flat if the leverage effect is close to zero, making estimating the volatility risk price difficult. We translate the conditional quasi-likelihood ratio test that \textcite{andrews2016conditional} develop in a nonlinear GMM framework to a minimum distance framework. The resulting conditional quasi-likelihood ratio test is uniformly valid. We invert this test to derive robust confidence sets that provide correct coverage for the risk prices regardless of the leverage effect's magnitude.
%
\end{abstract}
\medskip
\jelcodes{C12, C14, C38, C58, G12}
\medskip
\keywords{weak identification, robust inference, stochastic volatility, leverage, market return, risk premium, volatility risk premium, risk price, confidence set, asymptotic size}
\end{titlepage}
\clearpage
\section{Introduction}
A fundamental question in finance is how investors optimally trade off risk and return. Economic theory predicts investors demand a higher return as compensation for bearing more risk. Hence, we should expect a positive relationship between the mean and volatility of returns. Some seminal early papers proposed a static trade-off between risk and expected return, most notably the capital asset pricing model (CAPM) of \textcites{sharpe1964capital,lintner1965security}. In practice, volatility varies over time. Consequently, a significant strand of the recent literature examines the dynamic tradeoff between volatility and returns, including structural stochastic volatility models such as \textcites{christoffersen2013capturing, bansal2014volatility, dewbecker2017price}. In nonlinear models like these, investors care not just about how an asset's returns co-move with the volatility but also care how they co-move with changes in volatility.
In these structural stochastic volatility models, changes in volatility affect risk premia through two channels: (1) the investor's willingness to tolerate high volatility in order to get high expected returns as measured by the market return risk price, and (2) the investor’s direct aversion to changes in future volatility as measured by the volatility risk price. We adopt the discrete-time exponentially affine model of \textcite{han2018leverage}, who represent the market return risk price and the volatility risk price by two structural parameters. In this model, \textcite{han2018leverage} establish the significant result that the identification of the volatility risk price depends on a substantial leverage effect, which is the negative contemporaneous correlation between returns and volatility.
Although the leverage effect is theoretically less than zero, it is difficult to quantify empirically, and its estimates are usually small \parencites{aitsahalia2013leverage}. When the leverage effect is small, the data provide only a limited amount of information about the volatility risk price relative to the finite-sample noise in the data. This low signal-to-noise ratio, as modeled by weak identification, invalidates standard inference based on the generalized method of moments (GMM) estimator; see \textcites{stock2000GMM,andrews2012estimation}.
We provide an identification-robust confidence set for the structural parameters that measure the market return risk price, the volatility risk price, and the leverage effect.
The robust confidence set provides correct asymptotic coverage, uniformly over a large set of models and allows for any magnitude of the leverage effect. This uniform validity is crucial for the confidence set to have good finite-sample coverage \parencites{mikusheva2007uniform, andrews2010asymptotic}. In contrast, standard confidence sets based on the GMM estimator and its asymptotic normality do not have uniform validity in the presence of a small leverage effect. This issue affects all of the structural parameters because they are estimated simultaneously.
We achieve robust inference in two steps. First, we establish a minimum distance criterion using link functions between the structural parameters and a set of reduced-form parameters that determine the joint distribution of the return and volatility. The structural model implies that the link functions are zero when evaluated at the true values of the structural parameters and the reduced-form parameters. Identification and estimation of these reduced form parameters are standard and are not affected by the presence of a small leverage effect. However, the link functions are almost flat in one of the structural parameters when the leverage effect is small, resulting in weak identification. Second, given this minimum distance criterion, we invert the conditional quasi-likelihood ratio (QLR) test by \textcite{andrews2016conditional} to construct a robust confidence set. The key feature of this test is that it treats the flat link functions as an infinite-dimensional nuisance parameter. The critical value is constructed by conditioning on a sufficient statistic for this nuisance parameter, and it is known to yield a valid test regardless of the nuisance parameter's value. \Textcite{andrews2016conditional} develop this test in a GMM framework. We show it works in minimum distance contexts such as the one considered here and provide conditions for its asymptotic validity. For practitioners, we provide a detailed algorithm for the construction of this simulation-based robust confidence set.
Our empirical results relate to the empirical analysis of the effect of volatility on risk premia. As \textcite{lettau2010measuring} mention, the evidence here is inconclusive. \textcites{bollerslev1988capital, harvey1989timevarying, ghysels2005there, bali2006there, ludvigson2007empirical} find a positive relationship, while \textcites{campbell1987stock, breen1989economic, pagan1991nonparametric, whitelaw1994time, brandt2004relationship} find a negative relationship. Also, some papers use both a market return risk factor and a variance risk factor to explain the risk premia dynamics, including \textcites{christoffersen2013capturing, feunou2014risk, dewbecker2017price}. In a related strand of the literature, \textcites{bollerslev2008risk, drechsler2011whats} document a substantial positive variance risk premium. We contribute to this literature by providing the first method for making valid inference on the market return risk price and the volatility risk price. This new confidence set not only allows for both effects but also takes into account the potential identification issue.
To have a non-linear relationship between changes in volatility and expected returns, we need either volatility of volatility (as used by \textcite{drechsler2011whats}) or jumps (as used by \textcite{drechsler2014uncertainty}). Since we are working in discrete time, it is far more natural to work with volatility of volatility than with jumps: discontinuities are not well-defined in discrete time, since all functions are continuous in the discrete topology. The most straightforward models that allow for closed-form expressions for the risk prices are exponentially-affine (not affine) models, which is why they are frequently used in the option-pricing literature. To avoid complicating the analysis, we use such a model. We focus on the time-series behavior of the index as a complement to, not a substitute for, estimating the risk prices from cross-sectional or option-pricing data. Using market-level variation over time to examine this non-linear relationship is a common approach, used by both the variance-premium literature and \textcite{han2018leverage}.
%QUESTION: does the validity of cross-sectional asset pricing risk prices rely upon linearity?
%It is not obvious how to map a non-linear model for risk in the market return into a cross-sectional model of risk prices. In addition, since these risk prices are so fundamental to investors' preferences, there is certainly room for both approaches.
The weak identification issue studied in this paper is relevant in many economic applications, ranging from linear instrumental variable models \parencite{staiger1997instrumental} to nonlinear structural models \parencites{mavroeidis2014empirical, andrews2015maximum}. This paper is the first one to study this issue in structural asset pricing models with stochastic volatility.
\Textcite{moreira2003conditional} introduces the conditional inference approach in the linear instrumental variable model, creating the conditional likelihood-ratio (CLR) test, and \textcite{kleibergen2005testing} applies it to the nonlinear GMM problem. \Textcites{magnusson2010identification, magnusson2010inference} extend \gentextcite{kleibergen2005testing} results to the minimum-distance case. The key issue with these papers is that they rely exclusively upon local behavior of the moment conditions, which is inherently under-powered in some environments. \Textcite{andrews2016conditional} resolve this issue in the GMM case by proposing a global approach: conditional inference for nonlinear GMM problems with an infinite-dimensional nuisance parameter. Their method is known to be the most powerful in some special cases. We develop a global weak-identification robust inference method for minimum distance estimation by extending \textcite{andrews2016conditional}. We bear the same relationship to \textcites{magnusson2010identification, magnusson2010inference} that \textcite{andrews2016conditional} bears to \textcite{kleibergen2005testing}. We also extend the scope of application of these weak-identification robust methods to a new type of asset pricing model with substantial non-linearity and heteroskedasticity.
The rest of the paper is organized as follows. \Cref{sec:model} provides the model and its parameterization. \Cref{sec:ilnk functions} provides model-implied restrictions and uses them to derive the link functions. \Cref{sec:robust inference} provides the asymptotic distribution of the reduced-form parameter and robust confidence sets for the structural parameter. A detailed algorithm to construct the robust confidence set is given in \cref{sec:conditional QLR}.
\Cref{sec:simulation} shows that the method works well in simulation, and \Cref{sec:empirical} applies the methods to data on the S\&P 500, providing estimates of the risk prices. \Cref{sec:risk_conclusion} concludes.
Proofs are given in the appendix.
\section{Model}\label{sec:model}
This section provides a parametric structural model with stochastic volatility, following \textcite{han2018leverage}. They extend the discrete-time exponentially-affine model of \textcite{darolles2006structural}, and their model is a natural discrete-time analog of the \textcite{heston1993closedform} model. We specify this model using a stochastic discount factor (SDF), also called the pricing kernel, and the physical measure, which gives the joint distribution of the return and volatility dynamics.\footnote{The risk-neutral measure is unobserved due to the lack of option data.} We first define the SDF and parameterize it as an exponential affine function with unknown parameters. We then provide a parametric distribution for the physical measure.
Let $P_t$ be the price of the asset under consideration. Let $r_{t+1}=\log(P_{t+1}/P_t)-r_f$ denote the log excess return, i.e., the log return minus the risk-free rate $r_f$, and let $\sigma^2_{t+1}$ denote the associated volatility. The observed data are $W_t=(r_t,\sigma^2_{t})$ for $t=1,\ldots,T$.
Let $\F_t$ be the representative investor's information set at time $t$.
\subsection{Stochastic Discount Factor and Its Parameterization}\label{sec:deriving_sdf_functions}
The prices of all assets satisfy the following asset pricing equation in terms of the SDF:
%
\begin{equation}
P_t = \E\left[M_{t,t+1} \exp\left(-r_f\right) P_{t+1} \mvert \F_t \right].
\end{equation}
%
Following the definition of $r_{t+1}$, the pricing equation implies that for all assets
%
\begin{equation}
1 = \E\left[M_{t,t+1} \exp\left(r_{t+1}\right) \mvert \F_t \right].
\end{equation}
We start by parameterizing the SDF with an exponential affine model. Let $\pi$ be the volatility risk price and $\kappa$ the market return risk price; both are treated as structural parameters.
\begin{definition}{Parameterizing the Stochastic Discount Factor}
\label{defn:SDF}
%
\begin{equation}
M_{t,t+1}(\pi, \kappa) = \exp\left(m_{0} + m_1 \sigma_t^2 - \pi \sigma^2_{t+1} - \kappa r_{t+1}\right).
\end{equation}
\end{definition}
Throughout we assume that the two risks that command nonzero prices are the market return risk price and the volatility risk price. These two risks are closely related to the first two moments of $r_{t+1}$. Consequently, we only use variation in the first two moments of the data to estimate these parameters.
\subsection{Parameterizing the Volatility and Return Dynamics}
Next, we parameterize the joint distribution of $\left\lbrace W_t:t=1,\ldots, T\right\rbrace $.
Following \textcite{han2018leverage}, we make the following assumptions. First, the return $r_t$ and volatility $\sigma^2_t$ are first-order Markov. Second, there is no Granger-causality from the return to the volatility. Third, returns are independent across time given the volatility. We do allow $\sigma^2_{t}$ and $r_{t}$ to be contemporaneously correlated, as they are in the data.
Under these assumptions, the volatility drives all of the dynamics of the process. The only relevant information in the information set $\F_{t}$ for time $t+1$-measurable variables is contained in $\sigma^2_t$. In general, $\sigma^2_t$, $\sigma^2_{t+1}$, and $r_{t+1}$ form a sufficient statistic for $\F_{t+1}$.
We adopt the conditional autoregressive gamma process, as in \textcites{gourieroux2006autoregressive, han2018leverage}, for the volatility process. The model is parameterized in terms of the Laplace transform:
%
\begin{equation}
\E\left[\exp(-x \sigma^2_{t+1}) \mvert \F_{t}\right] = \exp\left(- A(x) \sigma^2_{t} - B(x)\right)
\label{eqn:vol_laplace_transform}
\end{equation}
%
for all $x \in \R$. The functions $A(x)$ and $B(x)$ are parameterized as follows.
\begin{definition}{Parameterizing the Volatility Dynamics}
\label{defn:physical_vol_dynamics}
\begin{align}
\label{defn:a_PP}
A(x) &\coloneqq \frac{\rho x}{1 + c x}, \\
\label{defn:b_PP}
B(x) &\coloneqq \delta \log(1 + c x),
\end{align}
with $\rho \in [0,1-\epsilon],$ $c > \epsilon$, $\delta > \epsilon$ for some $\epsilon > 0$.
\end{definition}
In this specification, $\rho$ is a persistence parameter, $c$ is a scale parameter, and $\delta$ is a level parameter. We can see this clearly in the following conditional mean and variance formulas for $\sigma^2_{t+1}$.
\begin{remark}[Volatility Moment Conditions]
\label{remark:vol_moment_conditions}
\begin{align}
\E\left[\sigma^2_{t+1} \mvert \sigma^2_t \right] &= \rho \sigma^2_t + c \delta,\\
%
\Var\left[\sigma^2_{t+1} \mvert \sigma^2_t \right] &= 2 c \rho \sigma^2_t + c^2 \delta.
%
\end{align}
\end{remark}
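To build intuition, the following sketch (our illustration, not part of the formal development) simulates one transition of the volatility process using the standard Poisson mixture-of-gammas representation of the autoregressive gamma process, with hypothetical parameter values, and checks the conditional moments of \cref{remark:vol_moment_conditions} by Monte Carlo:
\begin{verbatim}
# Illustration only: simulate sigma^2_{t+1} | sigma^2_t for the autoregressive
# gamma process via the usual Poisson mixture-of-gammas representation and
# compare with the conditional mean/variance of Remark 1.
# Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
rho, c, delta = 0.95, 1e-5, 1.0            # hypothetical (rho, c, delta)
s2_t = c * delta / (1.0 - rho)             # start at the unconditional mean

z = rng.poisson(rho * s2_t / c, size=200_000)     # Poisson mixing variable
s2_next = rng.gamma(shape=delta + z, scale=c)     # gamma draw given z

print(s2_next.mean(), rho * s2_t + c * delta)              # conditional mean
print(s2_next.var(), 2 * c * rho * s2_t + c**2 * delta)    # conditional variance
\end{verbatim}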
Next, we model the return dynamics. Similarly to the volatility dynamics, the distribution of $r_{t+1}$ given both $\sigma^2_{t+1}$ and $\sigma^2_{t}$ is specified in terms of the Laplace transform:
%
\begin{equation}
\label{eqn:return_laplace_transform}
\E\left[\exp(- x r_{t+1}) \mvert \F_{t}, \sigma^2_{t+1} \right] = \exp\left(- C(x) \sigma^2_{t+1} - D(x) \sigma^2_t - E(x)\right)
\end{equation}
%
for all $x \in \R$. The functions $C(x)$, $D(x)$, and $E(x)$ are parameterized as follows, so that the return has a conditional Gaussian distribution.
\begin{definition}{Parameterizing the Return Dynamics}
\label{defn:physical_return_dynamics}
\begin{align}
C(x) &\coloneqq \psi x - \frac{1 - \phi^2}{2} x^2,\\
D(x) &\coloneqq \beta x, \\
E(x) &\coloneqq \gamma x,
\end{align}
with $\phi \in [-1+\epsilon, 0]$ for some $\epsilon>0$.
\end{definition}
Under this specification, we have the following representation of the conditional mean and variance for $r_{t+1}$.
\begin{remark}[Return Moment Conditions]
\label{remark:return_moment_conditions}
\begin{align}
\label{eqn:rtn_cond_mean}
\E\left[r_{t+1} \mvert \sigma^2_t, \sigma^2_{t+1}\right] = \psi \sigma^2_{t+1} + \beta \sigma^2_t + \gamma, \\
%
\label{eqn:rtn_cond_vol}
\Var\left[r_{t+1} \mvert \sigma^2_t, \sigma^2_{t+1}\right] = (1 - \phi^2) \sigma^2_{t+1}.
%
\end{align}
\end{remark}
The parameter $\phi$ represents the leverage effect because it measures the reduction in the return's volatility caused by conditioning on the volatility path.
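Similarly, the conditional Gaussian distribution of the return in \cref{defn:physical_return_dynamics} can be sampled directly; the sketch below (our illustration, with hypothetical parameter values) compares simulated draws with the moments in \cref{remark:return_moment_conditions}:
\begin{verbatim}
# Illustration only: draw r_{t+1} given (sigma^2_t, sigma^2_{t+1}) from the
# conditional Gaussian of Remark 2. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
psi, beta, gamma, phi = 2.0, -1.0, 0.0, -0.5   # hypothetical parameters
s2_t, s2_next = 2.0e-4, 2.2e-4                 # a fixed volatility pair

mean = psi * s2_next + beta * s2_t + gamma
var = (1.0 - phi**2) * s2_next
r_next = rng.normal(mean, np.sqrt(var), size=200_000)

print(r_next.mean(), mean)   # sample mean vs conditional mean
print(r_next.var(), var)     # sample variance vs conditional variance
\end{verbatim}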
\section{Link Functions}\label{sec:ilnk functions}
So far, we have introduced the following parameters: $(m_{0},m_{1},\kappa ,\pi )$ in the SDF, $(\rho ,c,\delta)$ for the volatility dynamics, and $(\psi ,\beta ,\gamma ,\phi )$ for the return dynamics. Next, we explore restrictions among these parameters that are consistent with this model. In other words, not all of these parameters can change freely under the structural model.
We use these restrictions to construct link functions between a set of reduced-form parameters and a set of structural parameters. These link functions play an important role in separating the regularly behaved reduced-form parameters from the structural parameters. They are also used to conduct identification-robust inference for the structural parameters based on a minimum distance criterion.
All of these restrictions are also imposed in the GMM estimation in \textcite{han2018leverage}. However, because the volatility risk price is weakly identified, they calibrate it instead of estimating it. Given this calibrated value, they proceed to estimate all other parameters with GMM.
\subsection{Pricing Equation Restrictions}
We first explore restrictions implied by the pricing equation $\E[ M_{t,t+1}\exp (r_{t+1}) \ivert \F_{t}]=1$. We start with a simple result stating that the constants $m_{0}$ and $m_{1}$ are normalization constants implied by all the other parameters. Thus, $m_{0}$ and $m_{1}$ are not free parameters to be estimated. Instead, they should take the values given below, once the other parameters are specified. These restrictions on $m_{0}$ and $m_{1}$ are obtained by applying the restriction $\E[M_{t,t+1}\exp (r_{t+1}) \ivert \F_{t}]=1$ to the risk-free asset. Applying the same argument to any other asset, we also obtain another set of two restrictions, which can be written in terms of the coefficients $\beta $ and $\gamma $ under the linear form of $D(x)$ and $E(x)$.
\begin{lemma}
\label{Lemma m0 and m1}
Given the parameterization in the model, the pricing equation \newline $\E[M_{t,t+1}\exp (r_{t+1}) \ivert \F_{t}]=1$ implies that\footnote{\Cref{proof:lemma_m0_and_m1}}
%
\begin{align*}
m_{0} &= E(\kappa )+B\left( \pi +C\left( \kappa \right) \right) , \\
%
m_{1} &= D\left( \kappa \right) +A\left( \pi +C\left( \kappa \right) \right) ,
\end{align*}
%
and
%
\begin{align*}
\gamma &= B\left( \pi +C\left( \kappa -1\right) \right) -B\left( \pi +C\left( \kappa \right) \right), \\
\beta &= A\left( \pi +C\left( \kappa -1\right) \right) -A\left( \pi +C\left( \kappa \right) \right).
\end{align*}
\end{lemma}
The two equalities on $\beta $ and $\gamma $ link them to the market return risk price, $\kappa$, and the volatility risk price, $\pi$, through the functions $A(\cdot),B(\cdot ),C(\cdot ),$ which also involve the parameters $(\rho, c,\delta, \psi, \phi).$ We treat these two equalities as link functions in the minimum distance criterion specified below.
\subsection{Leverage Effect Restrictions}\label{sec:leverage effect restrict}
Following \textcite{han2018leverage}, we parameterize $\psi$ as
%
\begin{equation}
\label{eqn:leverage restriction}
\psi = \frac{\phi}{\sqrt{2c}} - \frac{1 - \phi^2}{2} + (1-\phi^2) \kappa.
\end{equation}
%
The first part $\phi / \sqrt{2 c}$ measures the leverage effect arising from the instantaneous correlation between $r_{t+1}$ and $\sigma^2_{t+1}$.
The second part is the traditional Jensen effect term that arises from taking the expectation of a log-Gaussian random variable. The third term arises from risk aversion, which is why it is proportional to $\kappa$.
%By noting that
% In particular, it measures the reduction in the return
% We first show that both parameter $\psi $ and $\phi $ are linked to the leverage effect. Given the variance of $r_{t+1}$ conditional on $(\sigma _{t+1}^{2},\sigma _{t}^{2}),$ specified in \cref{eqn:rtn_cond_vol}, we have
% %
% \begin{equation*}
% \phi ^{2}=\sigma _{t+1}^{2}-\Var[r_{t+1}|\sigma _{t+1}^{2},\sigma _{t}^{2}].
% \end{equation*}
% %
% This shows that $\phi $ is linked to the leverage effect because it measures the return volatility reduction after conditioning on the volatility path. On the other hand, given the mean of $r_{t+1}$ conditional on $(\sigma _{t+1}^{2},\sigma _{t}^{2}),$ specified in \cref{eqn:rtn_cond_mean}, we have\footnote{To see this result, note that the mean of $r_{t+1}-\psi \sigma _{t+1}^{2}$ given $(\sigma _{t+1}^{2},\sigma _{t}^{2})$ does not depend on $\sigma
% _{t+1}^{2}$.}
%
%\begin{equation}
% \E[r_{t+1}|\sigma _{t+1}^{2},\sigma _{t}^{2}]-E[r_{t+1}|\sigma _{t}^{2}]=\psi \left \{ \sigma _{t+1}^{2}-E[\sigma _{t+1}^{2}|\sigma _{t}^{2}]\right \},
% \label{eqn:vol_versus_psi}
%\end{equation}
%%
%we can solve for $k$ using our parametric model, assuming it is time-invariant.
% In addition, \textcite{han2018leverage} show that $k$ is the value under which $\mathbb{C}\mathrm{orr}[r_{t+1},\sigma _{t+1}^{2}|\sigma _{t}^{2}]=\phi $ if this correlation is indeed time invariant. Guided by this condition, they show that $k=1/(2c)^{1/2}$ should be used for the volatility dynamic specified in \cref{eqn:vol_versus_psi} and \cref{eqn:leverage restriction}.
\subsection{Structural and Reduced-Form Parameters}
Because $\phi$ is the leverage effect parameter, we group it together with the market return risk price, $\kappa$, and the volatility risk price, $\pi$, and call $\theta \coloneqq (\kappa ,\pi ,\phi )^{\prime}$ the structural parameters. These structural parameters are estimated using restrictions from the structural model. In contrast, the other parameters in the conditional mean and variance of the return and volatility, see \cref{remark:vol_moment_conditions} and \cref{remark:return_moment_conditions}, are simply estimated using these moments, without any model restrictions. As such, we call them the reduced-form parameters. Because $1-\phi ^{2}$ shows up in the conditional variance of $r_{t+1},$ see \cref{eqn:rtn_cond_vol}, we define $\zeta =1-\phi ^{2}$ as a reduced-form parameter and link it to the structural parameter $\phi$ through this relationship. To sum up, the reduced-form parameters are $\omega \coloneqq (\rho, c,\delta, \psi, \beta, \gamma, \zeta )^{\prime }$.
Using $\zeta $ as a reduced-form parameter has the additional benefit of avoiding estimating $\phi$ directly. Estimating $\phi $ when its true value is close to 0 results in an estimator with a non-standard asymptotic distribution due to the boundary constraint. The inference procedure below does not require estimation of $\phi$ and is uniform over $\phi$ even if its true value is on or close to the boundary $0$. It is worth noting that this boundary condition gives us additional information for estimating $\phi$ in some cases: the estimator of $\phi$ may converge quite rapidly, but it is almost certainly not asymptotically Gaussian. Moreover, we cannot recover asymptotic Gaussianity by removing this constraint. Even though $\phi$ could conceivably be greater than $0$, $\phi^2$ cannot be less than $0$, and $\phi$ enters the last link function in \cref{eqn:link_function_g} only through $\phi^2$.
This is where the non-standard behavior arises. Economically, we are saying that the variance of $r_{t+1}$ must fall when we condition on more information. Although this clearly holds in population, it need not hold for the sample variances.
The link functions between the structural parameter $\theta$ and the reduced-form parameter $\omega $ are collected together in
%
\begin{equation}
\label{eqn:link_function_g}
g(\theta, \omega) =
%
\begin{pmatrix}
\gamma - [B\left( \pi +C\left( \kappa -1\right) \right) -B\left( \pi +C\left( \kappa \right) \right)] \\
\beta - [A\left( \pi +C\left( \kappa -1\right) \right) -A\left( \pi +C\left( \kappa \right) \right)] \\
\psi -(1-\phi ^{2})\kappa +\frac{1}{2}(1-\phi ^{2})-1/(2c)^{1/2}\phi \\
\zeta -\left( 1-\phi ^{2}\right)
\end{pmatrix}.
\end{equation}
%
For the inference problem studied below, we know $g(\theta _{0},\omega_{0})=0$ when evaluated at the true value of $\theta $ and $\omega .$
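For concreteness, a direct transcription of \cref{eqn:link_function_g} into code is given below (our sketch; the orderings $\theta =(\kappa ,\pi ,\phi )$ and $\omega =(\rho ,c,\delta ,\psi ,\beta ,\gamma ,\zeta )$ follow the definitions above):
\begin{verbatim}
# Sketch: the link function g(theta, omega) above, written out directly from
# the definitions of A, B, and C in the text.
# theta = (kappa, pi, phi); omega = (rho, c, delta, psi, beta, gamma, zeta).
import numpy as np

def g(theta, omega):
    kappa, pi_, phi = theta
    rho, c, delta, psi, beta, gamma, zeta = omega
    A = lambda x: rho * x / (1.0 + c * x)
    B = lambda x: delta * np.log(1.0 + c * x)
    C = lambda x: psi * x - 0.5 * (1.0 - phi**2) * x**2
    return np.array([
        gamma - (B(pi_ + C(kappa - 1.0)) - B(pi_ + C(kappa))),
        beta - (A(pi_ + C(kappa - 1.0)) - A(pi_ + C(kappa))),
        psi - (1.0 - phi**2) * kappa + 0.5 * (1.0 - phi**2)
            - phi / np.sqrt(2.0 * c),
        zeta - (1.0 - phi**2),
    ])
\end{verbatim}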
\subsection{Identification}
One of the important contributions of \textcite{han2018leverage} is to establish the relationship between the identification of the volatility risk price and the leverage effect. In particular, they show that when the leverage effect parameter $\phi =0,$ the volatility risk price $\pi $ is not identified. To see this result, note that the only source of identifying information on $\pi $ is the first two link functions in $g(\theta _{0},\omega _{0})=0$, which come from \cref{Lemma m0 and m1}. Clearly, these two equations are independent of $\pi$ if $C(\kappa )=C(\kappa -1)$. Using the definition of $C(\cdot)$ and \cref{eqn:leverage restriction}, we have
%
\begin{equation}
C(\kappa )-C(\kappa -1)=\psi -(1-\phi ^{2})\left( \kappa -\frac{1}{2}\right) = \frac{\phi}{\sqrt{2 c}}.
\end{equation}
%
Clearly, the strength of identification is governed by the strength of the leverage effect.
In other words, we need $\phi \neq 0$ to identify the volatility risk price $\pi$.
Even if $\phi \neq 0$, we do not know its magnitude. In practice, with a finite sample size and different types of noise in the data, such as measurement errors and omitted variables, a substantial leverage effect is required to obtain a standard identification situation in which the noise in the data is negligible relative to the information available to identify $\pi$. However, if only a small leverage effect is found, as in \textcites{bandi2012timevarying, aitsahalia2013leverage}, or the magnitude of the leverage effect is completely unknown, an identification-robust procedure is needed to conduct inference in this problem. In addition, standard minimum-distance estimators do not provide valid inference when some of the first-stage parameters are asymptotically non-Gaussian or the link functions are ill-behaved. In our case, we should not expect the estimator of $\phi$ to be asymptotically Gaussian even though $\phi$ is well identified. We now provide a procedure that is robust to both non-standard issues.
\section{Robust Inference for Risk Prices}\label{sec:robust inference}
\subsection{Asymptotic Distribution of the Reduced-Form Parameter}
Write $\omega = (\omega _{1},\omega _{2},\omega _{3})^{\prime },$ where $\omega _{1}=(\rho ,c,\delta)' \in O_{1},$ $\omega _{2}=(\gamma ,\beta ,\psi)' \in O_{2}$, and $\omega _{3}=\zeta \in O_{3}.$ The parameter space for $ \omega $ is $O=O_{1}\times O_{2}\times O_{3}\subset R^{d_{\omega }}$. The true value of $\omega $ is assumed to be in the interior of the parameter
space.
Below we describe the estimator $\widehat{\omega } \coloneqq (\widehat{\omega }_{1}, \widehat{\omega }_{2},\widehat{\omega }_{3})^{\prime }$ and provide its asymptotic distribution. We estimate these parameters separately because $\omega _{1}$ only shows up in the conditional mean and variance of $\sigma _{t+1}^{2}$, $\omega_{2}$ only shows up in the conditional mean of $r_{t+1}$, and $\omega _{3}$ only shows up in the conditional variance of $
r_{t+1}.$
We first estimate $\omega _{1}=(\rho ,c,\delta)'$ based on the conditional mean and variance of $\sigma _{t+1}^{2}$, which can be equivalently written as
%
\begin{align}
E[\sigma _{t+1}^{2}|\sigma _{t}^{2}] &= A\text{ and }E[\sigma _{t+1}^{4}|\sigma _{t}^{2}]=B,\text{ where } \nonumber \\
%
A &= \rho \sigma _{t}^{2}+c\delta \text{ and }B=A^{2}+\left( 2c\rho \sigma _{t}^{2}+c^{2}\delta \right) .
\end{align}
%
Because the conditional means of $\sigma _{t+1}^{2}$ and $\sigma _{t+1}^{4}$ are linear and quadratic functions, respectively, of the conditioning variable $\sigma _{t}^{2}$, they can be transformed into the unconditional moments
%
\begin{equation}
E[h_{t}(\omega _{10})]=0,\text{ where }h_{t}(\omega _{1})=[(1,\sigma _{t}^{2})\otimes (\sigma _{t+1}^{2}-A),(1,\sigma _{t}^{2},\sigma _{t}^{4})\otimes (\sigma _{t+1}^{4}-B)]^{\prime },
\end{equation}
%
and $\omega _{10}$ represents the true value of $\omega _{1}$. The two-step GMM estimator of $\omega _{1}$ is%
%
\begin{equation}
\label{omega 1 est}
\widehat{\omega }_{1}=\underset{\omega _{1}\in O_{1}}{\arg \min }\left( T^{-1}\sum_{t=1}^{T}h_{t}(\omega _{1})\right) ^{\prime }\widehat{V}_{1}\left( T^{-1}\sum_{t=1}^{T}h_{t}(\omega _{1})\right) ,
\end{equation}%
%
where $\widehat{V}_{1}$ is a consistent estimator of $V_{1} \coloneqq \sum_{m=-\infty }^{\infty }\Cov[h_{t}(\omega _{10}),h_{t+m}(\omega _{10})].$
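As an illustration (our sketch, not the paper's code), the sample analogue of the moment vector $T^{-1}\sum_{t=1}^{T}h_{t}(\omega _{1})$ entering this criterion can be written directly from the conditional moments above:
\begin{verbatim}
# Sketch: sample mean of the moment function h_t(omega_1) used in the
# two-step GMM step, given an observed volatility series s2 of length T+1.
import numpy as np

def h_bar(omega1, s2):
    rho, c, delta = omega1
    s2_t, s2_next = s2[:-1], s2[1:]
    A = rho * s2_t + c * delta                        # E[s2_{t+1} | s2_t]
    B = A**2 + 2 * c * rho * s2_t + c**2 * delta      # E[s2_{t+1}^2 | s2_t]
    ones = np.ones_like(s2_t)
    m1 = np.column_stack([ones, s2_t]) * (s2_next - A)[:, None]
    m2 = np.column_stack([ones, s2_t, s2_t**2]) * (s2_next**2 - B)[:, None]
    return np.concatenate([m1, m2], axis=1).mean(axis=0)

# The two-step GMM estimator then minimizes h_bar' * V1_hat^{-1} * h_bar
# over (rho, c, delta), e.g. with a numerical optimizer such as
# scipy.optimize.minimize.
\end{verbatim}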
We estimate $\omega _{2}$ by the generalized least squares (GLS) estimator because the conditional mean of $r_{t+1}$ is a linear function of the conditioning variables $\sigma _{t}^{2}$ and $\sigma _{t+1}^{2}$ and the conditional variance is proportional to $\sigma _{t+1}^{2}.$ The GLS estimator of $\omega _{2}$ is
%
\begin{align}
\widehat{\omega }_{2} &= \left( \sum_{t=1}^{T}x_{t}x_{t}^{\prime }\right) ^{-1}\sum_{t=1}^{T}x_{t}y_{t},\text{ where } \notag \\
%
x_{t} &= \sigma _{t+1}^{-1}(1,\sigma _{t}^{2},\sigma _{t+1}^{2})^{\prime } \text{ and }y_{t}=\sigma _{t+1}^{-1}r_{t+1}. \label{omega 2 est}
\end{align}
%
We estimate $\omega_{3}$ by the sample variance estimator:
%
\begin{equation}
\label{omega 3 est}
\widehat{\omega }_{3}=T^{-1}\sum_{t=1}^{T}\left( y_{t}-\widehat{y}_{t}\right) ^{2},\text{ where }\widehat{y}_{t}=x_{t}^{\prime }\widehat{ \omega }_{2}.
\end{equation}
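As a sketch (ours, not the paper's code), the closed-form estimators $\widehat{\omega }_{2}$ and $\widehat{\omega }_{3}$ in the two displays above can be computed as follows:
\begin{verbatim}
# Sketch: closed-form estimators of omega_2 (weighted least squares) and
# omega_3 (residual variance), given return series r and volatility series
# s2, both of length T+1, aligned so r[t+1] goes with s2[t] and s2[t+1].
import numpy as np

def estimate_omega2_omega3(r, s2):
    sig_next = np.sqrt(s2[1:])                                # sigma_{t+1}
    x = np.column_stack([np.ones(len(sig_next)), s2[:-1], s2[1:]])
    x = x / sig_next[:, None]
    y = r[1:] / sig_next
    omega2 = np.linalg.solve(x.T @ x, x.T @ y)   # coefficients (gamma, beta, psi)
    resid = y - x @ omega2
    omega3 = resid @ resid / len(resid)          # zeta = 1 - phi^2
    return omega2, omega3
\end{verbatim}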
Let $P$ denote the distribution of the data $\{W_{t}=(r_{t+1}, \sigma _{t+1}^{2}):t\geq 1\}$ and $\mathcal{P}$ denote the parameter space of $P$. Note that the true values of the structural parameter and the reduced-form parameters are all determined by $P.$ We allow $P$ to change with $T.$ For notational simplicity, the dependence on $P$ and $T$ is suppressed.
Let
%
\begin{equation}
f_{t}(\omega) =
%
\begin{pmatrix}
h_{t}(\omega _{1}) \\
x_{t}(y_{t}-x_{t}^{\prime }\omega _{2}) \\
(y_{t}-x_{t}^{\prime }\omega _{2})^{2}%
\end{pmatrix}
%
\in R^{d_{f}}\text{ and }
%
V =\sum_{m=-\infty }^{\infty }\Cov\left[f_{t}(\omega _{0}),f_{t+m}(\omega _{0})\right].
\end{equation}
%
The estimator $\widehat{\omega }$ defined above is based on the first moment of $f_{t}(\omega ).$ Thus, the limiting distribution of $\widehat{\omega }$ relates to the limiting distribution of $T^{-1/2}\sum_{t=1}^{T}(f_{t}(\omega _{0})-\E[f_{t}(\omega _{0})])$, following from the central limit theorem. Furthermore, because $\widehat{\omega }_{1}$ is a GMM estimator based on nonlinear moment conditions, we need uniform convergence of the sample moments and their derivatives to show the consistency and asymptotic normality of $\widehat{\omega }_{1}.$ This uniform convergence follows from the uniform law of large numbers. Because $\widehat{\omega }_{2}$ is a simple least squares estimator obtained by regressing $y_{t}$ on $x_{t},$ we need the regressors not to exhibit multicollinearity. We make the necessary assumptions below. All of them are easily verifiable with weakly dependent time series data.
Let $\widehat{V}$ denote a heteroskedasticity and autocorrelation consistent (HAC) estimator of $V$. The estimator $\widehat{V}_{1}$ is the submatrix of $\widehat{V}$ associated with $V_{1}.$ Let $H_{t}(\omega _{1})=\partial h_{t}(\omega _{1})/\partial \omega _{1}^{\prime }.$
\begin{assumpR}
\label{assump:R}
The following conditions hold uniformly over $P\in \mathcal{P}$, for some fixed $0 < C < \infty$.
\begin{enumerate}
\item $T^{-1}\sum_{t=1}^{T}(h_{t}(\omega_{1})-\E[h_{t}(\omega _{1})])\rightarrow _{p}0$ and $T^{-1}\sum_{t=1}^{T}(H_{t}(\omega _{1})-\E[H_{t}(\omega _{1})])\rightarrow _{p}0,$ and $\E[H_{t}(\omega _{1})]$ is continuous in $\omega _{1},$ all uniformly over the parameter space of $\omega _{1}$.
%
\item $T^{-1}\sum_{t=1}^{T}(x_{t}x_{t}^{\prime }-\E[x_{t}x_{t}^{\prime }])\rightarrow _{p}0.$
%
\item $V^{-1/2}T^{-1/2}\sum_{t=1}^{T}(f_{t}(\omega _{0})-\E[f_{t}(\omega _{0})])\rightarrow _{d}N(0,I)$ and $\widehat{V} -V\rightarrow _{p}0.$
%
\item $C^{-1}\leq \lambda_{\min }(A)\leq \lambda_{\max }(A)\leq C$ for $A=V,\ \E[H_{t}(\omega _{10})^{\prime }H_{t}(\omega _{10})],\ \E[x_{t}x_{t}^{\prime }],\ \E[z_{t}z_{t}^{\prime }],$ where $z_{t}=(1,\sigma _{t}^{2},\sigma _{t}^{4})^{\prime}$.\footnote{We use $\lambda_{\min }(\cdot)$ and $\lambda_{\max }(\cdot)$ to denote the smallest and largest eigenvalues of a matrix.}
%
\end{enumerate}
\end{assumpR}
Let $H(\omega _{1})=\E[H_{t}(\omega _{1})]$ and $\overline{H}(\omega _{1})=T^{-1}\sum_{t=1}^{T}H_{t}(\omega _{1}).$ Define
%
\begin{align}
%
\mathcal{B} &= \diag\{[H(\omega _{10})^{\prime }V_{1}^{-1}H(\omega _{10})]^{-1}H(\omega _{10})^{\prime }V_{1}^{-1},\E[x_{t}x_{t}^{\prime }]^{-1},1\}, \notag \\
%
\widehat{\mathcal{B}} &= \diag\{[\overline{H}(\widehat{\omega }_{1})^{\prime } \widehat{V}_{1}^{-1}\overline{H}(\widehat{\omega }_{1})]^{-1}\overline{H}( \widehat{\omega }_{1})^{\prime }\widehat{V}_{1}^{-1},[T^{-1}
\sum_{t=1}^{T}x_{t}x_{t}^{\prime }]^{-1},1\}.
%
\label{Fhat}
\end{align}
%
The following lemma provides the asymptotic distribution of the reduced-form parameter estimator and a consistent estimator of its asymptotic covariance. Note that we place the asymptotic covariance on the left side of the convergence statement so as to allow the distribution of the data to change with the sample size $T$.
\begin{lemma}
\label{Lemma Reduce}
Suppose \Cref{assump:R} holds. The following results hold uniformly over $P\in \mathcal{P}$.\footnote{\Cref{proof:lemma_reduce}}
\begin{equation*}
\xi _{T}:=\Omega ^{-1/2}T^{-1/2}(\widehat{\omega } -\omega _{0})\rightarrow _{d}\xi \sim N(0,I), \text{ where } \Omega =\mathcal{B}V \mathcal{B}^{\prime },
\end{equation*}
%
and
%
\begin{equation*}
\widehat{\Omega }-\Omega \rightarrow_{p}0, \text{ where } \widehat{\Omega }=\widehat{\mathcal{B}}\widehat{V}\widehat{\mathcal{B}}^{\prime}.
\end{equation*}
\end{lemma}
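In code, the covariance assembly in \cref{Lemma Reduce} amounts to stacking the three blocks of $\widehat{\mathcal{B}}$ and sandwiching $\widehat{V}$. The sketch below assumes the inputs \texttt{H\_bar} ($\overline{H}(\widehat{\omega}_1)$), \texttt{V1\_hat}, \texttt{Sxx} ($T^{-1}\sum_{t}x_{t}x_{t}^{\prime}$), and \texttt{V\_hat} are already available from the estimation step; it is illustrative only.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def omega_covariance(H_bar, V1_hat, Sxx, V_hat):
    """Assemble Omega_hat = B_hat V_hat B_hat' from its three diagonal blocks."""
    HV1 = H_bar.T @ np.linalg.inv(V1_hat)                 # H' V1^{-1}
    top_block = np.linalg.solve(HV1 @ H_bar, HV1)         # [H' V1^{-1} H]^{-1} H' V1^{-1}
    B_hat = block_diag(top_block, np.linalg.inv(Sxx), np.array([[1.0]]))
    return B_hat @ V_hat @ B_hat.T
\end{verbatim}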
\subsection{Weak Identification}
The true value of the structural parameter $\theta$ and the reduced-form parameter $\omega$ satisfy the link function $g(\theta_{0},\omega _{0})=0$. In a standard problem without identification issues, we can estimate $\theta _{0}$ by the minimum distance estimator $\widehat{\theta} =(\widehat{\kappa },\widehat{\pi },\widehat{\phi})'$, which minimizes $ Q_{T}(\theta )=g(\theta ,\widehat{\omega })^{\prime }W_{T}g(\theta , \widehat{\omega })$ for some weighting matrix $W_{T}$, and construct tests and confidence sets for $\theta _{0}$ using an asymptotically normal approximation for $T^{1/2}(\widehat{\theta }-\theta _{0})$. However, this standard method does not work in the present problem when $\pi _{0}$ is only weakly identified. In that case, $g(\theta ,\widehat{\omega })$ is almost flat in $\pi $ and the minimum distance estimator $\widehat{\pi }$ is not even consistent. To complicate matters further, the inconsistency of $\widehat{\pi }$ spills over to $\widehat{\kappa }$ and $\widehat{\phi}$, making the distributions of $\widehat{\kappa }$ and $\widehat{\phi }$ non-normal even in large samples.
Before presenting the robust confidence set, we first introduce some useful quantities and provide a heuristic discussion of the identification problem and its consequences. Let $G(\theta ,\omega )$ denote the partial derivative of $ g(\theta ,\omega )$ with respect to (w.r.t.)\@ $\omega .$ Let $g_{0}(\theta )=g(\theta ,\omega _{0})$ and $G_{0}(\theta )=G(\theta ,\omega _{0})$ be the link function and its derivative evaluated at $\omega _{0}$ and $\widehat{g}(\theta
)=g(\theta ,\widehat{\omega })$ and $\widehat{G}(\theta )=G(\theta , \widehat{\omega })$ be the same quantities evaluated at the estimator $ \widehat{\omega }.$ The delta method gives
%
\begin{equation}
\eta _{T}(\theta ):=T^{1/2}\left[ \widehat{g}(\theta )-g_{0}(\theta ) \right] =G_{0}(\theta )\Omega ^{1/2}\cdot \xi _{T}+o_{p}(1),
\label{emp pro}
\end{equation}
%
where $\xi _{T}\rightarrow _{d}N(0,I)$ following \cref{Lemma Reduce}. Thus, $\eta _{T}(\cdot )$ weakly converges to a Gaussian process $\eta (\cdot )$ with covariance function $\Sigma (\theta _{1},\theta _{2})=G_{0}(\theta _{1})\Omega G_{0}(\theta _{2})^{\prime }.$
Following \cref{emp pro}, we can write $T^{1/2}\widehat{g}(\theta )=\eta _{T}(\theta )+T^{1/2}g_{0}(\theta ),$ where $\eta _{T}(\theta )$ is the noise from the reduced-form parameter estimation and $T^{1/2}g_{0}(\theta )$ is the signal from the link function. Under weak identification, $ g_{0}(\theta )$ is almost flat in $\theta ,$ modeled as the signal $ T^{1/2}g_{0}(\theta )$ being finite even for $\theta \neq \theta _{0}$ and $T\rightarrow \infty .$ Thus, the signal and the noise are of the same order of magnitude, yielding an inconsistent minimum distance estimator $ \widehat{\theta }.$ This is in contrast with the strong identification scenario, where $T^{1/2}g_{0}(\theta )\rightarrow \infty $ for $\theta \neq \theta _{0}$ as $T\rightarrow \infty $ and $g_{0}(\theta _{0})=0.$ In this case, the signal is strong enough that the minimum distance estimator is consistent.
The identification strength of $\theta _{0}$ is determined by the function $T^{1/2}g_{0}(\theta ).$ However, this function is unknown and cannot be consistently estimated (because of the $T^{1/2}$ scaling). Thus, we adopt the conditional inference procedure of \textcite{andrews2016conditional} and view $T^{1/2}g_{0}(\theta )$ as an infinite-dimensional nuisance parameter for the inference on $\theta _{0}$. The goal is to construct a robust confidence set for $\theta _{0}$ that has correct size asymptotically regardless of this unknown nuisance parameter.
\subsection{Conditional QLR Test}\label{sec:conditional QLR}
We construct a confidence set for $\theta \in \Theta \coloneqq [0, M_1] \times [-M_2, 0] \times [-1 + \epsilon, 0]$ by inverting the test $ H_{0}: \theta =\theta_{0}$ vs $H_{1}: \theta \neq \theta _{0}$, where $M_1$ and $M_2$ are large positive constants and $\epsilon$ is a small positive constant. The test statistic is a QLR statistic that takes the form
%
\begin{equation}
QLR(\theta _{0}) \coloneqq T\widehat{g}(\theta _{0})^{\prime}\widehat{\Sigma} (\theta _{0},\theta _{0})^{-1}\widehat{g}(\theta _{0})-\underset{\theta \in \Theta }{\min }T\widehat{g}(\theta )^{\prime }\widehat{\Sigma } (\theta ,\theta )^{-1}\widehat{g}(\theta ),
\label{QLR stat}
\end{equation}
%
where $\widehat{\Sigma }(\theta _{1},\theta _{2})=\widehat{G}(\theta _{1})\widehat{\Omega }\widehat{G}(\theta _{2})^{\prime }$ and $\widehat{ \Omega }$ is the consistent estimator of $\Omega $ defined above.
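As an illustration, the statistic can be computed as in the sketch below; the callables \texttt{g\_hat} and \texttt{G\_hat} (returning $\widehat{g}(\theta)$ and $\widehat{G}(\theta)$), the starting value, and the optimizer are assumptions made for the sketch, not choices prescribed by the paper.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def qlr_statistic(theta0, g_hat, G_hat, Omega_hat, T, theta_init, bounds):
    """QLR(theta0): restricted criterion minus the criterion minimized over Theta."""
    def Sigma(t1, t2):
        return G_hat(t1) @ Omega_hat @ G_hat(t2).T              # Sigma_hat(theta1, theta2)

    def criterion(theta):
        g = g_hat(theta)
        return T * g @ np.linalg.solve(Sigma(theta, theta), g)  # T g' Sigma^{-1} g

    fit = minimize(criterion, theta_init, bounds=bounds, method="L-BFGS-B")
    return criterion(theta0) - fit.fun
\end{verbatim}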
\Textcite{andrews2016conditional} provide the conditional QLR test in a nonlinear GMM problem, where $\widehat{g}(\theta )$ is replaced by a sample moment. The same method can be applied to the present nonlinear minimum distance problem. Following \textcite{andrews2016conditional}, we first project $\widehat{g}(\theta )$ onto $\widehat{g}(\theta _{0})$ and construct a residual process
%
\begin{equation}
\widehat{r}(\theta )=\widehat{g}(\theta )-\widehat{\Sigma }(\theta ,\theta _{0})\widehat{\Sigma }(\theta _{0},\theta _{0})^{-1}\widehat{g} (\theta _{0}).
\label{red process}
\end{equation}
%
The limiting distributions of $\widehat{r}(\theta )$ and $\widehat{g} (\theta _{0})$ are Gaussian and independent. Thus, conditional on $\widehat{ r}(\theta ),$ the asymptotic distribution of $\widehat{g}(\theta )$ no longer depends on the nuisance parameter, $T^{1/2}g_{0}(\theta ),$ making the procedure robust to any identification strength.
Specifically, we obtain the $1-\alpha $ conditional quantile of the QLR statistic, denoted by $c_{1-\alpha }(r,\theta _{0}),$ as follows. For $b=1,\ldots,B$, we take independent draws $\eta _{b}^{\ast }\sim N(0,\widehat{\Sigma }(\theta _{0},\theta _{0}))$ and produce a simulated process,
%
\begin{equation}
g_{b}^{\ast }(\theta ) \coloneqq \widehat{r}(\theta ) + \widehat{\Sigma }(\theta ,\theta _{0})\widehat{\Sigma }(\theta _{0},\theta _{0})^{-1}\eta _{b}^{\ast},
\end{equation}
%
and a simulated statistic,
%
\begin{equation}
QLR_{b}^{\ast }(\theta _{0}) \coloneqq T {\eta_{b}^{\ast}}^{\prime} \widehat{\Sigma} (\theta _{0},\theta _{0})^{-1} \eta_{b}^{\ast} - \underset{\theta \in \Theta }{\min }Tg_b^{\ast }(\theta )^{\prime }\widehat{\Sigma } (\theta ,\theta )^{-1}g_{b}^{\ast }(\theta ).
\end{equation}
%
Let $b_{0}=\lceil (1-\alpha )B\rceil$, the smallest integer greater than or equal to $(1-\alpha )B$. The critical value $c_{1-\alpha }(r,\theta _{0})$ is then the $b_{0}$th smallest value among $\{QLR_{b}^{\ast }(\theta _{0}),b=1,\ldots,B\}$.
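In pseudocode form, this simulation step can be organised as below. The grid over $\Theta$ used for the inner minimisation, the random number generator, and the helper names are assumptions for the sketch, not the paper's implementation.
\begin{verbatim}
import numpy as np

def conditional_critical_value(theta0, theta_grid, g_hat, Sigma, T,
                               alpha=0.05, B=500, seed=0):
    """Simulate the (1 - alpha) conditional quantile of the QLR statistic."""
    rng = np.random.default_rng(seed)
    S00 = Sigma(theta0, theta0)
    S00_inv = np.linalg.inv(S00)
    g0 = g_hat(theta0)
    # residual process r_hat(theta) and projection matrices on the grid
    proj = [Sigma(th, theta0) @ S00_inv for th in theta_grid]
    r_hat = [g_hat(th) - P @ g0 for th, P in zip(theta_grid, proj)]
    L = np.linalg.cholesky(S00)
    qlr_sims = np.empty(B)
    for b in range(B):
        eta = L @ rng.standard_normal(len(g0))       # eta* ~ N(0, Sigma(theta0, theta0))
        g_star = [r + P @ eta for r, P in zip(r_hat, proj)]
        restricted = T * eta @ S00_inv @ eta
        unrestricted = min(T * g @ np.linalg.solve(Sigma(th, th), g)
                           for th, g in zip(theta_grid, g_star))
        qlr_sims[b] = restricted - unrestricted
    b0 = int(np.ceil((1 - alpha) * B))               # rank of the (1 - alpha) quantile
    return np.sort(qlr_sims)[b0 - 1]                 # b0-th smallest simulated statistic
\end{verbatim}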
We execute the steps reported in \cref{alg:constructing_the_cs} to form a robust confidence set for $\theta$.
\begin{algorithm}
\caption{Constructing the Confidence Set}
\label{alg:constructing_the_cs}
\begin{enumerate}
\item Estimate the reduced-form parameter $\widehat{\omega }=( \widehat{\omega }_{1},\widehat{\omega }_{2},\widehat{\omega }_{3})^{\prime }$ following the estimators defined in \cref{omega 1 est}, \cref{omega 2 est}, and \cref{omega 3 est}.
%
\item Obtain a consistent estimator of the asymptotic covariance of $\widehat{\omega}$, $\widehat{ \Omega }=\widehat{\mathcal{B}}\widehat{V}\widehat{\mathcal{B}}^{\prime },$ where $\widehat{\mathcal{B}}$ is defined in \cref{Fhat} and $\widehat{V}$ is a HAC estimator of $V.$
\item For $\theta _{0}\in \Theta$,
%
\begin{enumerate}
\item Construct the QLR statistic $QLR(\theta _{0})$ in \cref{QLR stat} using $g(\theta ,\omega ),$ $G(\theta ,\omega ),$ $\widehat{\omega },$ and $\widehat{\Omega }.$
%
\item Compute the residual process $\widehat{r}(\theta )$ in \cref{red process}.
\item Given $\widehat{r}(\theta ),$ compute the critical value $c_{1-\alpha }(r,\theta _{0})$ described above.
%
\end{enumerate}
%
\item Repeat these steps for different values of $\theta _{0}$. Construct a confidence set by collecting the null values that are not rejected, i.e., the nominal level $1-\alpha $ confidence set for $\theta _{0}$ is
%
\begin{equation*}
CS_{T}=\{ \theta _{0}:QLR(\theta _{0})\leq c_{1-\alpha }(r,\theta_{0})\}.
\end{equation*}
\end{enumerate}
\end{algorithm}
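Schematically, Steps 3--4 of the algorithm reduce to a test inversion over a grid. The sketch below uses the illustrative helpers \texttt{qlr\_statistic} and \texttt{conditional\_critical\_value} from the earlier sketches (wrapped as one-argument callables) and is not the paper's replication code.
\begin{verbatim}
def invert_test(theta_grid, qlr_statistic, conditional_critical_value):
    """Collect the null values theta0 that the conditional QLR test does not reject."""
    return [theta0 for theta0 in theta_grid
            if qlr_statistic(theta0) <= conditional_critical_value(theta0)]
\end{verbatim}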
To obtain confidence intervals for each element of $\theta _{0},$ one simple solution is to project the confidence set constructed above to each axis. The resulting confidence interval also has correct coverage. An alternative solution is to first concentrate out the nuisance parameters before applying the conditional inference approach above, see \textcite[Section 5]{andrews2016conditional}. However, this concentration approach only works when the nuisance parameter is strongly identified. In the present set-up, this approach does not work for $\kappa $ and $\phi $ because the nuisance parameter $\pi $ is weakly identified.
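The projection step can be implemented by taking, for each coordinate, the range of the accepted grid points; the sketch below reports the interval hull of each projection, which is (weakly) conservative. The variable names are assumptions.
\begin{verbatim}
import numpy as np

def project_confidence_set(accepted_points):
    """Project a joint confidence set (rows = accepted theta values) onto each axis."""
    cs = np.asarray(accepted_points, dtype=float)
    return [(cs[:, j].min(), cs[:, j].max()) for j in range(cs.shape[1])]
\end{verbatim}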
\begin{assumpS}
\label{assump:S}
The following conditions hold uniformly over $P\in \mathcal{P},$ for any $\theta $ in its parameter space and any $\omega $ in some fixed neighborhood around its true value, for some fixed $0<C<\infty$.
%
\begin{enumerate}
\item $g(\theta ,\omega )$ is partially differentiable in $\omega ,$ with partial derivative $G(\theta ,\omega )$ that satisfies $||G(\theta _{1},\omega )-G(\theta _{2},\omega )||\leq C||\theta _{1}-\theta _{2}||$ and $||G(\theta ,\omega _{1})-G(\theta ,\omega _{2})||\leq C||\omega _{1}-\omega _{2}||.$
%
\item $C^{-1}\leq \lambda_{\min }(G(\theta ,\omega )^{\prime }G(\theta ,\omega ))\leq \lambda_{\max }(G(\theta ,\omega )^{\prime }G(\theta ,\omega ))\leq C$.
\end{enumerate}
\end{assumpS}
\begin{theorem}
\label{Lemma CS}
Suppose \cref{assump:R} and \cref{assump:S} hold. Then,
%
\begin{equation*}
\underset{T\rightarrow \infty }{\lim \inf }\underset{P\in \mathcal{P}}{\inf }\Pr \left( \theta _{0}\in CS_{T}\right) \geq 1-\alpha .\footnote{\Cref{proof:lemma_CS}}
\end{equation*}
\end{theorem}
This theorem states that the confidence set constructed by the conditional QLR test has correct asymptotic size. Uniformity is important for this confidence set to cover the true parameter with probability close to $1-\alpha $ in finite samples. This uniform result is established over a parameter space $\mathcal{P}$ that allows for weak identification of the structural parameter $\theta$.
\section{Simulations}\label{sec:simulation}
In this section, we investigate the finite-sample performance of the proposed test and show that the asymptotic approximations derived above work well in practice. We also compare it with the standard test that assumes all parameters are strongly identified. The standard test is known to be invalid under weak identification, but its degree of distortion is unknown in general. We simulate data from the parametric model above, with the true parameter values given in \cref{tbl:simulationParameters} based on the values used by \textcite{han2018leverage}. To investigate the robustness of the procedure with respect to various identification strengths, we vary both $\phi$ and $T$. Specifically, we consider $\phi \in \lbrace -0.40, -0.10, -0.01 \rbrace$ and $T \in \lbrace 2,000; 10,000 \rbrace$. For comparison, the empirical application below uses approximately $3,700$ observations.
\begin{table}[htb]
\centering
\caption{Simulation Set-up}
\label{tbl:simulationParameters}
\begin{tabularx}{.75\textwidth}{X X c X X}
\toprule
$\delta$ & $\rho$ & $c$ & $\pi$ & $\kappa$ \\
\midrule
\multicolumn{5}{c}{Parameter Values used by \textcite{han2018leverage}} \\
\midrule
0.6475 & 0.50 & \num[scientific-notation=true]{.00394128} & -10 & 1.7680 \\
\bottomrule
%
\end{tabularx}
\end{table}
To avoid boundary issues with the estimates of $c$ and $\delta$ in finite samples, we reparameterize the moment conditions and link functions in terms of $\log(c)$, $\log(c) + \log(\delta)$, and $\logit(\rho)$. This reparameterization forces the scale parameters to be positive and $\rho$ to lie in $(0,1)$. We find that the resulting estimates of the transformed reduced-form parameters are better approximated by the Gaussian distribution in finite samples.
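The mapping between the original and the transformed parameters is spelled out below as a hedged sketch; the function names are ours and the exact implementation may differ from the paper's code.
\begin{verbatim}
import numpy as np

def to_unconstrained(c, delta, rho):
    """(c, delta, rho) -> (log c, log c + log delta, logit rho)."""
    return np.array([np.log(c), np.log(c) + np.log(delta), np.log(rho / (1.0 - rho))])

def from_unconstrained(params):
    """Invert the map: recover (c, delta, rho), all in their original domains."""
    log_c, log_c_plus_log_delta, logit_rho = params
    c = np.exp(log_c)
    delta = np.exp(log_c_plus_log_delta - log_c)
    rho = 1.0 / (1.0 + np.exp(-logit_rho))            # inverse logit
    return c, delta, rho
\end{verbatim}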
To show the effect of varying identification strength, we first vary the true value of $\phi$ and plot the distributions of $\widehat{\pi}$ and $\widehat{\theta}$ in \cref{fig:sim_parameter_estimates}. The reported results are based on $10,000$ observations and \num{500} simulation repetitions. The black lines in the middle of the panels mark the true parameter values. The estimators sometimes pile up at the boundaries of the parameter space. As expected, this simulation shows that the Gaussian distribution is not a good approximation for the finite-sample distribution of either estimator.
\begin{figure}[htb]
\caption[Parameter Estimates' \textit{t}-Statistics]{Parameter Estimates' $t$-Statistics}
\label{fig:sim_parameter_estimates}
\begin{subfigure}[t]{.48\textwidth}
\caption[pi with phi = -0.40]{$\pi$ with $\phi = -0.40$}
\includegraphics[width=\textwidth, height=.7\textwidth]{pi_est_500_minus_0_point_40.pdf}
\end{subfigure}
%
\hfill
%
\begin{subfigure}[t]{.48\textwidth}
\caption[pi with phi = -0.01]{$\pi$ with $\phi = -0.01$}
\includegraphics[width=\textwidth, height=.7\textwidth]{pi_est_500_minus_0_point_01.pdf}
\end{subfigure}
%
\begin{subfigure}[b]{.48\textwidth}
\caption[theta with phi = -0.40]{$\theta$ with $\phi = -0.40$}
\includegraphics[width=\textwidth, height=.7\textwidth]{theta_est_500_minus_0_point_40.pdf}
\end{subfigure}
%
\hfill
%
\begin{subfigure}[b]{.48\textwidth}
\caption[theta with phi = -0.01]{$\theta$ with $\phi = -0.01$}
\includegraphics[width=\textwidth, height=.7\textwidth]{theta_est_500_minus_0_point_01.pdf}
\end{subfigure}
%
\end{figure}
% Increase the number of simulations
Next, we study the finite-sample size of the standard QLR test and the proposed conditional QLR test for the joint test of the three structural parameters. The nominal level of the test is \SI{5}{\percent}. The critical value of the standard QLR test is the \SI{95}{\percent} quantile of the $\chi^2$-distribution with $3$ degrees of freedom. The critical value of the conditional QLR test is obtained by the simulation-based procedure in \cref{alg:constructing_the_cs}, with \num{250} simulation repetitions used to approximate the quantile of the conditional distribution. The finite-sample size is computed over \num{250} simulation repetitions.
The standard test is no longer valid under weak identification because the QLR statistic does not have a $\chi^2$-distribution in this case. However, it is not clear a priori whether the standard QLR test under-rejects or over-rejects in finite samples, nor how far its rejection rate deviates from \SI{5}{\percent}.
\begin{table}[htb]
\centering
\caption{Finite-Sample Size of the Standard and Proposed Tests}
\label{tbl:test_performance}
\sisetup{
round-mode=places,
round-precision=2,
}
\begin{tabularx}{.85\textwidth}{X | S S | S S }
%
\toprule
$\phi$ & \multicolumn{2}{c}{$T$ = 2,000} & \multicolumn{2}{c}{$T$ = 10,000} \\
& {Standard \%} & {Proposed \%} & {Standard \%} & {Proposed \%} \\
\midrule
-0.01 & 2.00 & 5.20 & 1.60 & 4.40 \\
% -0.10 & & & & \\
-0.40 & 2.40 & 5.60 & 6.00 & 6.40 \\
\bottomrule
\end{tabularx}
\end{table}
The simulation results in \cref{tbl:test_performance} show that the standard QLR test under-rejects in finite samples. The distortion is most severe when identification is weak; for example, for $\phi=-0.01$ and $T = 10,000$, the rejection rate is \SI{1.6}{\percent}. With enough data and a $\phi$ that is large enough in magnitude, the standard test performs adequately, but this is not the empirically relevant case. The proposed test, in contrast, has rejection rates close to the nominal level across all designs and is therefore much more trustworthy.
\section{Data and Empirical Results}\label{sec:empirical}
For the empirical application, we use the daily return on the S\&P 500 for $r_{t+1}$ and the associated realized volatility computed from high-frequency data for $\sigma^2_{t+1}$. The data are obtained from SPY (the SPDR S\&P 500 ETF Trust), an exchange-traded fund that tracks the S\&P 500. This gives us a market index whose risk is not easily diversifiable and can be used to estimate the prices of risk that investors face in practice. We use the procedure developed by \textcite{sangrey2018jumps} to estimate the integrated total volatility, i.e., the instantaneous expectation of the price variance. This measure reduces to the integrated diffusion volatility if prices have continuous paths, and it works well in the presence of market microstructure noise.
Since SPY is one of the most liquid assets traded, we can choose the frequency at which we sample the underlying price. To balance market-microstructure noise, computational cost, and efficiency of the resultant estimators, we sample at the \num{1}-second frequency. We annualize the data by multiplying $r_{t+1}$ by \num{252} and $\sigma^2_{t+1}$ by $252^2$. The data starts in 2003 and ends in September 2017. Since the asset is only traded during business hours, this leads to \num{3713} days of data with an average of approximately \num{24000} observations per day. We compute $r_{t+1}$ as the daily return from the open to the close of the market, the interval over which we can estimate the volatility. This avoids specifying the relationship between overnight and intra-day returns. We preprocess the data using the pre-averaging approach as in \textcites{podolskij2009bipower, aitsahalia2012testing}.
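The annualisation described above is a simple rescaling; the sketch below records the convention for completeness (the constant 252 is the usual trading-day count and the variable names are ours).
\begin{verbatim}
import numpy as np

TRADING_DAYS = 252

def annualize(daily_returns, daily_realized_var):
    """Annualize open-to-close returns and realized variances as described in the text."""
    r = TRADING_DAYS * np.asarray(daily_returns, dtype=float)
    v = TRADING_DAYS ** 2 * np.asarray(daily_realized_var, dtype=float)
    return r, v
\end{verbatim}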
\begin{figure}[htb]
\centering
\caption{S\&P 500 Volatility and Log-Return}
\begin{subfigure}[t]{.54\textwidth}
\label{risk_fig:spy_dynamics}
\caption{Time Series}
\includegraphics[width=\textwidth, height=.81\textwidth]{time_series.pdf}
\end{subfigure}%
%
\hfill
%
\begin{subfigure}[t]{.44\textwidth}
\label{risk_fig:spy_static}
\caption{Joint Distribution}
\includegraphics[width=\textwidth, height=\textwidth]{joint_dist.pdf}
\end{subfigure}
\end{figure}
To see how the data move over time, we plot their time series in \cref{risk_fig:spy_dynamics}. We also plot the joint unconditional distribution in \cref{risk_fig:spy_static} to see the static relationship between the two series. The volatility has a long right tail, a typical gamma-type distribution. The returns have a bell-shaped distribution. The two series are slightly negatively correlated, as shown by the regression line in the joint plot. This corroborates the work of \textcites{bandi2012timevarying, aitsahalia2013leverage}. We also report a series of summary statistics in \cref{tbl:summary_stats}.
\begin{table}[htb]
\centering
\caption{Summary Statistics}
\label{tbl:summary_stats}
\sisetup{
table-align-text-pre=false,
table-align-text-post=false,
round-mode=places,
round-precision=2,
table-space-text-pre=\lbrack,
table-space-text-post=\rbrack,
}
\begin{tabularx}{.5\textwidth}{X | S S}
\toprule
& {$r_{t+1}$} & {$\sigma^2_{t+1}$} \\
\midrule
Mean & 0.023421 & 5.621287 \\
\rowcolor{gray!20}
Standard Deviation & 2.350165 & 14.458446\\
Skewness & -0.312 & 12.209 \\
\rowcolor{gray!20}
Kurtosis & 13.066 & 243.401 \\
Correlation & \multicolumn{2}{c}{\num{-0.024379}} \\
\bottomrule
\end{tabularx}
\end{table}
We now report the estimates and confidence intervals for the reduced-form parameters $c, \delta$, and $\rho$. The confidence intervals reported here use the Gaussian limiting theory, i.e., the point estimates $\pm 1.96$ standard errors. We first obtain confidence intervals for $\log(c)$ and $\log(c) + \log(\delta)$, and transform them into confidence intervals for $c$ and $\delta$. Similarly, we create the confidence interval for $\rho$ by inverting the interval for $\logit(\rho)$.
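Because each transformation is monotone, the interval endpoints can simply be mapped through it. The sketch below illustrates this for $c$ and $\rho$; the point estimates and standard errors shown are placeholders, not the values underlying \cref{tbl:reduced_form_parameters}.
\begin{verbatim}
import numpy as np

def wald_interval(point, std_err, z=1.96):
    """Gaussian confidence interval on the transformed scale."""
    return point - z * std_err, point + z * std_err

def transform_interval(interval, transform):
    """Map an interval through a monotone increasing transformation."""
    lo, hi = interval
    return transform(lo), transform(hi)

# Placeholder numbers purely for illustration.
ci_c = transform_interval(wald_interval(point=1.12, std_err=0.40), np.exp)
ci_rho = transform_interval(wald_interval(point=1.21, std_err=0.22),
                            lambda x: 1.0 / (1.0 + np.exp(-x)))
\end{verbatim}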
\begin{table}[htb]
\caption{Parameters that Govern the Volatility Process}
\label{tbl:reduced_form_parameters}
\centering
\sisetup{
detect-mode,
tight-spacing = true,
group-digits = false,
input-symbols = {(}{)},
input-open-uncertainty = ,
input-close-uncertainty = ,
round-mode = places,
round-precision = 2,
table-align-text-pre = false,
table-align-text-post = false,
table-alignment = center,
}
\begin{tabularx}{.7\textwidth}{X | S >{{(}} S[table-space-text-pre={(}] <{{,\,}}
S[table-space-text-pre={\hspace{-1.5cm}}] <{{)}}}
%
\toprule
& {Point Estimate} & \multicolumn{2}{c}{\SI{95}{\percent} Confidence Interval} \\
\midrule
$c$ & 3.0668 & 1.3842 & 6.7946 \\
\rowcolor{gray!20}
$\delta$ & 37.981 & 17.6528 & 81.718 \\
$\rho$ & 0.77107 & 0.6747 & 0.8454 \\
\bottomrule
\end{tabularx}
\end{table}
For confidence intervals for the three structural parameters, we first compute their joint confidence set based on the conditional QLR test and then project it onto each component. We also plot a joint confidence set for the two risk prices after projecting out $\phi$. We use \num{500} simulation repetitions to compute the quantile of the QLR statistic.
% \footnote{ We plot contour plots for both the conditional QLR quantile and the QLR statistic in the appendix to aid in interpreting \cref{fig:confidence_region}. In particular, the disconnectedness in \cref{fig:confidence_region} is coming from the multi-modality of the statistic. The conditional quantile is actually quite smooth. We do this by choosing the minimal (maximal) value of the statistic (quantile) for each ($\kappa, \pi$) pair. If the $\phi$ estimate were independent of the other parameters' estimates, \cref{fig:confidence_region} would simply be the region where the statistic exceeds the quantile. We deal with the multi-modality of the posterior by using both a random initialization, the true value under the null hypothesis, and a closed-form guess computed by using a subset of the moment conditions. We then use the minimum value computed from these three minimizations. Other initialization procedures do not qualitatively affect the results.}
\begin{table}[htb]
\caption{Structural Parameters}
\label{tbl:structural_param_estimates}
\centering
\begin{tabularx}{.4\textwidth}{X | c }
%
\toprule
& \SI{95}{\percent} Confidence Interval \\
\midrule
$\phi$ & (-0.33, -0.27) \\
\rowcolor{gray!20}
$\pi$ & (-30.97, 0.00) \\
$\kappa$ & (0.00, 2.00) \\
\bottomrule
\end{tabularx}
\end{table}
The results in \cref{tbl:structural_param_estimates}
% and \cref{fig:confidence_region}
have a few notable features.
First, we can reject the null hypothesis $\phi = 0$. We cannot, however, reject the hypothesis that $\pi = 0$ at the \SI{5}{\percent} level.
% The standard confidence interval, which is likely not valid, has approximately the same area. Because we have both issues with weak identification and boundary issues, there is not a priori reason to expect it to be either larger or smaller than the valid confidence sets. It does allow for much larger magnitudes of $\pi$.
% \begin{figure}[htb]
%
% \caption{Confidence Set for Risk Prices}
%
% \begin{subfigure}[t]{.32\textwidth}
% \label{fig:confidence_region}
% \caption{Proposed}
% \includegraphics[width=\textwidth, height=\textwidth]{qlr_confidence_region_500.pdf}
% \end{subfigure}
% %
% \begin{subfigure}[t]{.32\textwidth}
% \caption{Anderson-Rubin}
% \includegraphics[width=\textwidth, height=\textwidth]{ar_confidence_region_500.pdf}
% \end{subfigure}
% %
% \begin{subfigure}[t]{.32\textwidth}
% \caption{Standard}
% \includegraphics[width=\textwidth, height=\textwidth]{standard_confidence_region.pdf}
% \end{subfigure}
%
% \end{figure}
We also cannot reject the null hypothesis that $\kappa = 0$. This should not be particularly surprising given the difficulty in precisely estimating this parameter documented in the previous literature \parencite{lettau2010measuring}. Although not reported in the table, we can reject the hypothesis that $\kappa = \pi = 0$ jointly. The procedure can therefore determine that investors demand compensation for risk, just not which combination of risks they demand compensation for.
%
% Although the projection procedure can lead to conservative sub-vector confidence sets whose coverage could be larger than \SI{95}{\percent}, the confidence set in \cref{fig:confidence_region} appear relatively informative.
%
The risk price associated with the market return is covered by $(0.00, 2.00)$ with at least \SI{95}{\percent} probability. The volatility risk price is covered by $(-30.97, 0.00)$ with at least \SI{95}{\percent} probability. Our confidence intervals for both parameters are reasonable given the values that previous authors have found; for example, \textcite{han2018leverage} preferred a value of $\pi = -10$.
\section{Conclusion}\label{sec:risk_conclusion}
In structural stochastic volatility models such as the one considered here, changes in volatility affect returns through two channels. On the one hand, investors are willing to tolerate high volatility in exchange for high expected returns, as measured by the price of market return risk. On the other hand, investors are directly averse to changes in future volatility, as measured by the price of volatility risk. \Textcite{han2018leverage} shows how to disentangle these two channels by exploiting information arising from the leverage effect in an exponentially-affine pricing model. However, standard inference for this structural model is invalid because the volatility risk price is only weakly identified when the leverage effect is mild. This paper proposes an identification-robust inference procedure that provides reliable confidence sets for the risk prices regardless of the magnitude of the leverage effect. We apply this procedure to data on the S\&P 500. The robust inference procedure delivers reliable yet informative confidence intervals for the risk prices associated with the market return and the volatility.
\printbibliography[heading=bibintoc]
\begin{appendices}
\section{Proofs}\label{sec:proofs}
\subsection{\texorpdfstring{Proof of \cref{Lemma m0 and m1}}{Proof of Lemma 1}}
\label[proof]{proof:lemma_m0_and_m1}
\begin{proof}
For the risk-free asset, the excess return $r_{t+1}=0.$
Therefore, we have
%
\begin{align*}
1 &= E\left[ \exp \left( m_{0}+m_{1}\sigma _{t}^{2}-\pi \sigma _{t+1}^{2}-\theta r_{t+1}\right) \mvert \F_{t}\right] \\
%
&= \exp (m_{0}+m_{1}\sigma _{t}^{2})E\left[ \exp \left( -\pi \sigma _{t+1}^{2}\right) E\left[ \exp \left( -\theta r_{t+1}\right) \mvert \F _{t},\sigma _{t+1}^{2}\right] \mvert \F_{t}\right] \\
%
&= \exp (m_{0}-E\left( \theta \right) +m_{1}\sigma _{t}^{2}-D\left( \theta \right) \sigma _{t}^{2})E\left[ \exp \left( -\pi \sigma _{t+1}^{2}-C\left( \theta \right) \sigma _{t+1}^{2}\right) \mvert \F_{t}\right] \\
%
&= \exp (m_{0}-E\left( \theta \right) +m_{1}\sigma _{t}^{2}-D\left( \theta \right) \sigma _{t}^{2}-A\left( \pi +C\left( \theta \right) \right) \sigma _{t}^{2}-B\left( \pi +C\left( \theta \right) \right) ),
%
\end{align*}
%
where the first equality follows from the pricing equation, the second equality follows from the law of iterated expectations, the third equality uses the Laplace transform for $r_{t+1}$ in \cref{eqn:return_laplace_transform}, and the last equality follows from the Laplace transform for $\sigma _{t+1}^{2}$ in \cref{eqn:vol_laplace_transform}.
Since this equality must hold for all values of $\sigma _{t}^{2}$, the constant term and the coefficient on $\sigma_{t}^{2}$ in the exponent must both equal $0$, which gives the claimed result for $m_{0}$ and $m_{1}$.
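For clarity, writing out these two zero restrictions from the last display gives
%
\begin{equation*}
m_{0}=E\left( \theta \right) +B\left( \pi +C\left( \theta \right) \right) \quad \text{and}\quad m_{1}=D\left( \theta \right) +A\left( \pi +C\left( \theta \right) \right).
\end{equation*}
%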
We can apply the same argument above to any asset $r_{t+1}$. This gives the same result, except $\theta$ is replaced by $\theta -1$ throughout.
This implies that the two equalities for $m_{0}$ and $m_{1}$ also hold with $\theta $ replaced by $\theta -1$.
Therefore,
%
\begin{align*}
E(\theta -1)+B\left( C\left( \theta -1\right) +\pi \right)
%
&= E(\theta)+B\left( C\left( \theta \right) +\pi \right) , \\
%
D\left( \theta -1\right) +A\left( C\left( \theta -1\right) +\pi \right)
%
&= D\left( \theta \right) +A\left( C\left( \theta \right) +\pi \right).
\end{align*}
%
The claimed results for $\gamma $ and $\beta $ follow from $\gamma =E(\theta)-E(\theta -1)$ and $\beta =D(\theta )-D(\theta -1)$ under the linear specification of $E(x)=\gamma x$ and $D(x)=\beta x$.
\end{proof}
\subsection{\texorpdfstring{Proof of \cref{Lemma Reduce}}{Proof of Lemma 2}}
\label[proof]{proof:lemma_reduce}
\begin{proof}
Under the assumption that (i) $\E[z_{t}z_{t}^{\prime }]$ has its smallest eigenvalue bounded away from $0$ and (ii) $c>\varepsilon $ and $\delta >\varepsilon $ for some $\varepsilon >0,$ not only is $\omega _{10}$ the unique minimizer of $\norm{\E[h_{t}(\omega _{1})]}$, but $\norm{\E[h_{t}(\omega _{1})]}$ also has a uniform positive lower bound whenever $\norm{\omega _{1}-\omega _{10}} \geq \varepsilon$. Thus, consistency of $\widehat{\omega }_{1}$ follows from standard arguments for the consistency of a GMM estimator under uniform convergence of the criterion, which holds by \cref{assump:R} (1) and (2).
Let $\overline{h}(\omega _{1})=T^{-1}\sum_{t=1}^{T}h_{t}(\omega _{1})$ and $ \overline{H}(\omega _{1})=T^{-1}\sum_{t=1}^{T}H_{t}(\omega _{1}).$ By construction, the estimator satisfies the first order condition
%
\begin{align}
0 &=
\begin{pmatrix}
\overline{H}(\widehat{\omega }_{1})^{\prime }\widehat{V}_{1}^{-1}\overline{h} (\widehat{\omega }_{1}) \\
%
T^{-1}\sum_{t=1}^{T}x_{t}(y_{t}-x_{t}^{\prime }\widehat{\omega }_{2}) \\
%
\widehat{\omega }_{3}-T^{-1}\sum_{t=1}^{T}\left( y_{t}-\widehat{y} _{t}\right) ^{2}
\end{pmatrix} \nonumber \\
%
&=
%
\begin{pmatrix}
%
\overline{H}(\widehat{\omega }_{1})^{\prime }\widehat{V}_{1}^{-1}\overline{h} (\omega _{10})+\overline{H}(\widehat{\omega }_{1})^{\prime }\widehat{V} _{1}^{-1}\overline{H}(\widetilde{\omega }_{1})(\widehat{\omega }_{1}-\omega
_{10}) \\
%
T^{-1}\sum_{t=1}^{T}x_{t}(y_{t}-x_{t}^{\prime }\omega _{20})-T^{-1}\sum_{t=1}^{T}x_{t}x_{t}^{\prime }\left( \widehat{\omega } _{2}-\omega _{20}\right) \\
%
\left( \widehat{\omega }_{3}-\omega _{3}\right) +\omega _{3}-T^{-1}\sum_{t=1}^{T}\left( y_{t}-x_{t}^{\prime }\widehat{\omega }_{2}\right) ^{2}
\end{pmatrix},
%
\label{L-R-1}
\end{align}
%
where the second equality follows from a mean value expansion of $\overline{h }(\widehat{\omega }_{1})$ around $\omega _{10},$ with $\widetilde{\omega } _{1}$ between $\omega _{10}$ and $\widehat{\omega }_{1}$.
Let
%
\begin{equation}
%
\widetilde{\mathcal{B}} = \diag\left\lbrace[\overline{H}(\widehat{\omega }_{1})^{\prime } \widehat{V}_{1}^{-1}\overline{H}(\widetilde{\omega }_{1})]^{-1}\overline{H}( \widehat{\omega }_{1})^{\prime }\widehat{V}_{1}^{-1},[T^{-1} \sum_{t=1}^{T}x_{t}x_{t}^{\prime }]^{-1},1\right\rbrace.
%
\end{equation}
%
Then \cref{L-R-1} implies that
%
\begin{align}
T^{1/2}\left( \widehat{\omega }-\omega _{0}\right)
%
&= \widetilde{\mathcal{B}} \cdot T^{-1/2}\sum_{t=1}^{T}
%
\begin{pmatrix}
-h_{t}(\omega _{10}) \\
%
x_{t}(y_{t}-x_{t}^{\prime }\omega _{20}) \\
%
\left( y_{t}-x_{t}^{\prime }\widehat{\omega }_{2}\right) ^{2}-\omega _{3}%
\end{pmatrix} \nonumber \\
%
&=
%
\widetilde{\mathcal{B}}\cdot T^{-1/2}\sum_{t=1}^{T}
%
\begin{pmatrix}
-h_{t}(\omega _{10}) \\
%
x_{t}(y_{t}-x_{t}^{\prime }\omega _{20}) \\
%
\left( y_{t}-x_{t}^{\prime }\omega _{20}\right) ^{2}-\E\left[\left( y_{t}-x_{t}^{\prime }\omega _{20}\right)^{2}\right]%
%
\end{pmatrix}%
%
+
%
\begin{pmatrix}
0 \\
0 \\
\varepsilon_{T}
\end{pmatrix},
%
\label{L-R-2}
%
\end{align}%
%
where the second equality uses $\omega _{3}=\E[\left( y_{t}-x_{t}^{\prime }\omega _{20}\right) ^{2}]$ by definition and
%
\begin{align}
\varepsilon _{T}
%
&= T^{-1/2}\sum_{t=1}^{T}\left[ \left( y_{t}-x_{t}^{\prime } \widehat{\omega }_{2}\right) ^{2}-\left( y_{t}-x_{t}^{\prime }\omega _{20}\right) ^{2}\right] \nonumber \\
%
&= -2T^{-1}\sum_{t=1}^{T}\left( y_{t}-x_{t}^{\prime }\omega _{20}\right) x_{t}^{\prime }\left[ T^{1/2}\left( \widehat{\omega }_{2}-\omega _{20}\right) \right] +o_{p}(1) \nonumber \\
%
&= o_{p}(1)
%
\label{L-R-3}
\end{align}
%
because $T^{-1}\sum_{t=1}^{T}\left( y_{t}-x_{t}^{\prime }\omega _{20}\right) x_{t}^{\prime }\rightarrow _{p}0$ and $T^{1/2}(\widehat{\omega }_{2}-\omega _{20})=O_{p}(1)$ following \cref{assump:R}.
In addition,
%
\begin{equation}
\widetilde{\mathcal{B}}\rightarrow _{p}\mathcal{B}
\label{L-R-4}
\end{equation}%
%
following from the consistency of $\widehat{\omega}_{1}$ and \cref{assump:R}. Finally, the desirable result follows from \cref{L-R-2}--\cref{L-R-4} and \cref{assump:R}. The consistency of $\widehat{\Omega }$ follows from the consistency of $\widehat{\mathcal{B}}$ and $\widehat{V}$.
\end{proof}
\subsection{\texorpdfstring{Proof of \cref{Lemma CS}}{Proof of Theorem 3}}
\label[proof]{proof:lemma_CS}
\begin{proof}
We obtain this result by applying \textcite[Theorem 1]{andrews2016conditional}.
We now verify Assumptions 1--3 in \textcite{andrews2016conditional}.
To show weak convergence of $\eta _{T}(\cdot )$ to $\eta (\cdot )$ uniformly over $\mathcal{P}$, note that by a mean value expansion,
%
\begin{align}
\eta _{T}(\lambda)
%
&\coloneqq T^{1/2}\left[ \widehat{g}(\lambda )-g_{0}(\lambda ) \right] = G_{0}(\lambda )\Omega ^{1/2}\xi _{T}+\delta _{T},\text{ where} \nonumber \\
%
\xi _{T}
&= \Omega ^{-1/2}T^{1/2}\left( \widehat{\omega }-\omega _{0}\right), \text{ and }
%
\delta _{T} =\left( G(\lambda , \widetilde{\omega })-G(\lambda, \omega _{0})\right) T^{1/2}(\widehat{\omega }-\omega _{0})
%
\end{align}%
%
and $\widetilde{\omega }$ is between $\widehat{\omega }$ and $\omega_{0}$.
%
Because $\norm{G(\lambda ,\widetilde{\omega })-G(\lambda ,\omega _{0})} \leq C \norm{\widetilde{\omega }-\omega_{0}}$, we have $\delta _{T}=o_{p}(1)$ uniformly over $ \mathcal{P}$ by \cref{Lemma Reduce}.
To show that $G_{0}(\cdot)\Omega ^{1/2}\xi _{T}$ weakly converges to $\eta (\cdot ),$ it is sufficient to show (i) the pointwise convergence%
%
\begin{equation}
%
\begin{pmatrix}
G_{0}(\lambda _{1})\Omega ^{1/2}\xi _{T} \\
G_{0}(\lambda _{2})\Omega ^{1/2}\xi _{T}%
\end{pmatrix}%
%
\rightarrow _{d}
%
\begin{pmatrix}
\eta (\lambda _{1}) \\
\eta (\lambda _{2})%
\end{pmatrix},
%
\end{equation}%
%
which follows from \cref{Lemma Reduce}, and (ii) the stochastic equicontinuity condition, i.e., for every $\varepsilon >0$ and $\xi >0,$ there exists a $\delta >0$ such that
%
\begin{equation}
\underset{T\rightarrow \infty }{\lim\sup }\Pr \left( \underset{P\in \mathcal{P}}{\sup }\underset{\norm{\lambda _{1}-\lambda _{2}}\leq \delta }{\sup }\norm*{G_{0}(\lambda _{1})\Omega ^{1/2}\xi _{T}-G_{0}(\lambda
_{2})\Omega ^{1/2}\xi _{T}} >\varepsilon \right) < \xi.
\end{equation}%
%
For some $C < \infty$, we have $\norm{G_{0}(\lambda _{1})-G_{0}(\lambda _{2})} \leq C \norm{\lambda _{1}-\lambda _{2}}$ by the uniform bound on the derivative in \cref{assump:S}, and we have $\norm{\Omega ^{1/2}} \leq C$ under \cref{assump:R} because $\mathcal{B}$ and $V$ both have bounded largest eigenvalues.
Thus,
%
\begin{align}
&\phantom{=} \underset{T\rightarrow \infty }{\lim \sup }\Pr\left( \underset{P\in \mathcal{P}}{\sup }\underset{\norm{\lambda _{1}-\lambda _{2}}\leq \delta }{\sup }\norm*{ G_{0}(\lambda _{1})\Omega ^{1/2}\xi _{T}-G_{0}(\lambda _{2})\Omega ^{1/2}\xi _{T}} > \varepsilon \right) \nonumber \\
%
&\leq \underset{T\rightarrow \infty }{\lim \sup }\Pr \left( C^{2}\underset{ P\in \mathcal{P}}{\sup }\left \Vert \xi _{T}\right \Vert >\frac{\varepsilon }{\delta }\right).
%
\label{EC}
\end{align}
%
Because $\xi _{T}=O_{p}(1)$ uniformly over $P\in \mathcal{P},$ there exists $ \delta $ such that $\varepsilon /\delta $ is large enough to make the right hand side of the inequality in \cref{EC} smaller than $\xi$.
Assumptions 2 and 3 of \textcite[Theorem 1]{andrews2016conditional} follow from \cref{assump:R}.
\end{proof}
% \section{Additional Empirical Results}\label{app:additional_empirical_results}
% In this graph, I display contour plots of both the conditional quantile and the QLR statistic. I truncate the graphs above at \num{10} because we can clearly reject the parameters that lead to that exceed that number. The quantile of the robust QLR statistic if we do not subtract off anything is bounded above by \num{10}.
% \begin{figure}[htb]
% \caption{Contour Plots of the Quantile and Statistic}
% \begin{subfigure}[t]{.55\textwidth}
% \caption{QLR Robust Quantile}
% \includegraphics[width=\textwidth, height=2in]{qlr_quantile_contours.pdf}
% \end{subfigure}
% %
% \hfill
% %
% \begin{subfigure}[t]{.44\textwidth}
% \caption{QLR Statistic}
% \includegraphics[width=\textwidth, height=2in]{qlr_stats_contours.pdf}
% \end{subfigure}
% \end{figure}
\end{appendices}
\end{document}