\newappendix{Gridworld example where multi-step outperforms one-step}\label{sec:app_grid} As explained in the main text, this section presents an example that is only a slight modification of the one in Figure \ref{fig:gridworld}, but where a multi-step approach is clearly preferred over just one step. The data-generating and learning processes are exactly the same (100 trajectories of length 100, discount 0.9, $ \alpha = 0.1$ for reverse KL regularization). The only difference is that rather than using a behavior that is a mixture of optimal and uniform, we use a behavior that is a mixture of maximally suboptimal and uniform. If we call the suboptimal policy $ \pi^-$ (which always goes down and left in our gridworld), then the behavior for the modified example is $ \beta = 0.2 \cdot \pi^- + 0.8 \cdot u$, where $ u $ is uniform. Results are shown in Figure \ref{fig:multi_gridworld}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/offline-rl/gridworld/multi_gridworld_flat.png} \includegraphics[width=\textwidth]{figures/offline-rl/gridworld/multi_gridworld_error.png} \caption{A gridworld example with modified behavior where multi-step is much better than one-step.} \label{fig:multi_gridworld} \end{figure} By being more likely to go to the noisy states, this behavior policy allows us to get lower variance estimates of the rewards. Essentially, the coverage of the behavior policy in this example reduces the magnitude of the evaluation errors. This allows for more aggressive planning using multi-step methods. Moreover, since the behavior is less likely to go to the good state, the behavior Q function does not propagate the signal from the rewarding state as far, harming the one-step method. \newappendix{Connection to policy improvement guarantees}\label{sec:app_improvement} The regularized or constrained one-step algorithm performs an update that directly inherits guarantees from the literature on conservative policy improvement \citep{kakade2002approximately, schulman2015trust, achiam2017constrained}. These original papers consider an online setting where more data is collected at each step, but the guarantee at each step applies to our one-step offline algorithm. The key idea of this line of work begins with the performance difference lemma of \cite{kakade2002approximately}, and then lower bounds the amount of improvement over the behavior policy. Define the discounted state visitation distribution for a policy $ \pi$ by $ d^\pi(s) := (1-\gamma) \sum_{t=0}^\infty \gamma^t \Prob_{\rho, P, \pi}(s_t = s)$. We will also use the shorthand $ Q(s, \pi) $ to denote $ \E_{a\sim\pi|s}[Q(s,a)]$. Then we have the performance difference lemma as follows. \begin{lemma}[Performance difference, \cite{kakade2002approximately}] For any two policies $ \pi$ and $ \beta$, \begin{align} J(\pi) - J(\beta) = \frac{1}{1-\gamma} \E_{\substack{s \sim d^\pi}}[ Q^\beta(s,\pi) - Q^\beta(s, \beta)]. %:= \frac{1}{1-\gamma} \E_{\substack{s \sim d_\pi }}[ A^\beta_\pi(s)]. \end{align} \end{lemma} Then, Corollary 1 from \cite{achiam2017constrained} (reproduced below) gives a guarantee for the one-step algorithm. The key idea is that when $ \pi $ is sufficiently close to $ \beta$, we can use $ Q^\beta$ as an approximation to $ Q^\pi$. \begin{lemma}[Conservative Policy Improvement, \cite{achiam2017constrained}] For any two policies $ \pi$ and $ \beta$, let $ \|A^\beta_\pi\|_{\infty} = \sup_s |Q^\beta(s,\pi) - Q^\beta(s, \beta)|$. 
Then, \begin{align} J(\pi) - J(\beta) \geq \frac{1}{1-\gamma} \E_{\substack{s \sim d^\beta}}\left[ \left(Q^\beta(s,\pi) - Q^\beta(s, \beta)\right) - \frac{2\gamma \|A^\beta_\pi\|_\infty}{1-\gamma} D_{TV}(\pi(\cdot|s)\|\beta(\cdot|s)) \right] \end{align} where $ D_{TV}$ denotes the total variation distance. \end{lemma} Replacing $ Q^\beta$ with $ \widehat Q^\beta$ and the TV distance by the KL, we get precisely the objective that we optimize in the one-step algorithm. This shows that the one-step algorithm indeed optimizes a lower bound on the performance difference. Of course, in practice we replace the potentially large multiplier on the divergence term by a hyperparameter, but this theory at least motivates the soundness of the approach. We are not familiar with similar guarantees for the iterative or multi-step approaches that rely on off-policy evaluation. \newappendix{Experimental setup}\label{sec:app_exp_setup} \subsection{Benchmark experiments (Tables \ref{tab:d4rl} and \ref{tab:multi}, Figure \ref{fig:learning_curves})} \paragraph{Data.} We use the datasets from the D4RL benchmark \citep{fu2020d4rl}. We use the latest versions, which are v2 for the mujoco datasets and v1 for the adroit datasets. \paragraph{Hyperparameter tuning.} \begin{table}[ht] \vspace{-0.2cm} \centering \caption{Hyperparameter sweeps for each algorithm.} \begin{small} \begin{tabular}{lc} \toprule Algorithm & Hyperparameter set \\ \midrule Reverse KL ($ \alpha$) & \{0.03, 0.1, 0.3, 1.0, 3.0, 10.0\}\\ Easy BCQ ($ M$) & \{2, 5, 10, 20, 50, 100\}\\ Exponentially weighted ($ \tau$) & \{0.1, 0.3, 1.0, 3.0, 10.0, 30.0\}\\ \bottomrule \end{tabular} \end{small} \label{tab:hyperparams} \end{table} We follow the practice of \cite{fu2020d4rl} and tune a small set of hyperparameters by interacting with the simulator to estimate the value of the policies learned under each hyperparameter setting. The hyperparameter sets for each algorithm can be seen in Table \ref{tab:hyperparams}. This may initially seem like ``cheating'', but can be a reasonable setup if we are considering applications like robotics where we can feasibly test a small number of trained policies on the real system. Also, since prior work has used this setup, it makes it easiest to compare our results if we use it too. While beyond the scope of this work, we do think that better offline model selection procedures will be crucial to make offline RL more broadly applicable. A good primer on this topic can be found in \cite{paine2020hyperparameter}. \paragraph{Models.} All of our Q functions and policies are simple MLPs with ReLU activations and 2 hidden layers of width 1024. Our policies output a truncated normal distribution with diagonal covariance where we can get reparameterized samples by sampling from a uniform distribution and computing the differentiable inverse CDF \citep{Burkhardt2014truncated}. We found this to be more stable than the tanh of normal used by e.g. \cite{fu2020d4rl}, but to achieve similar performance when both are stable. We use these same models across all experiments. \paragraph{One-step training procedure.} For all of our one-step algorithms, we train our $ \hat \beta $ behavior estimate by imitation learning for 500k gradient steps using Adam \citep{kingma2014adam} with learning rate 1e-4 and batch size 512. We train our $ \widehat Q^\beta$ estimator by fitted Q evaluation with a target network for 2 million gradient steps using Adam with learning rate 1e-4 and batch size 512. 
The target is updated softly at every step with parameter $ \tau = 0.005$. All policies are trained for 100k steps, again with Adam, using learning rate 1e-4 and batch size 512. Easy BCQ does not require training a policy network and just uses $ \hat \beta$ and $ \widehat Q^\beta$ to define its policy. For the exponentially weighted algorithm, we clip the weights at 100 to prevent numerical instability. To estimate the reverse KL at some state we use 10 samples from the current policy and the density defined by our estimated $ \hat \beta$. Each random seed retrains all three models (behavior, Q, policy) from different initializations. We use three random seeds.

\paragraph{Multi-step training procedure.} For multi-step algorithms we use all the same hyperparameters as one-step. We initialize our policy and Q function from the same pre-trained $ \hat \beta$ and $ \widehat Q^\beta$ as we use for the one-step algorithm, trained for 500k and 2 million steps respectively. Then we consider 5 policy steps. To ensure that we use the same number of gradient updates on the policy, each step consists of 20k gradient steps on the policy followed by 200k gradient steps on the Q function. Thus, we take the same 100k gradient steps on the policy network. Now the Q updates are off-policy, so the next action $a'$ is sampled from the current policy $ \pi_i$ rather than from the dataset.

\paragraph{Iterative training procedure.} For iterative algorithms we again use all the same hyperparameters and initialize from the same $ \hat \beta$ and $ \widehat Q^\beta$. We again take the same 100k gradient steps on the policy network. For each step on the policy network we take 2 off-policy gradient steps on the Q network.

\paragraph{Evaluation procedure.} To evaluate each policy we run 100 trajectories in the environment and compute the mean return. We then report the mean and standard deviation over three training seeds.

\subsection{MSE experiment (Figure 3)}

\paragraph{Data.} To get an independently sampled dataset of the same size as the training set, we use the behavior-cloned policy $ \hat \beta$ to sample 1000 trajectories. The checkpointed policies are taken at intervals of 5000 gradient steps from each of the three training seeds.

\paragraph{Training procedure.} The $\widehat Q^{\pi_i}$ training procedure is the same as before, so we use Adam with step size 1e-4 and batch size 512, and a target network with soft updates with parameter 0.005. We train for 1 million steps.

\paragraph{Evaluation procedure.} To evaluate MSE, we sample 1000 state-action pairs from the original training set and from each state-action pair we run 3 rollouts. We take the mean over the rollouts, compute the squared error at each state-action pair, and finally get the MSE by taking the mean over state-action pairs. The reported reverse KL is estimated from samples during training. At each state in a batch we take 10 samples to estimate the KL at that state and then take the mean over the batch.

\subsection{Gridworld experiment (Figure 4)}

\paragraph{Environment.} The environment is a $15 \times 15$ gridworld with deterministic transitions. The rewards are deterministically 1 for all actions taken from the state in the top right corner and stochastic with distribution $ \mathcal{N}(-0.5, 1)$ for all actions taken from states on the left or bottom walls. The initial state is uniformly random. The discount is 0.9.
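For concreteness, the snippet below is a minimal sketch of this environment and of a generic data-collection loop; the coordinate convention and all names are illustrative, and it is not the exact code used for our experiments.
\begin{verbatim}
import numpy as np

# Minimal sketch of the 15 x 15 gridworld described above (illustrative,
# not the exact experiment code). Rows are numbered from the top.
SIZE, GAMMA = 15, 0.9                       # grid size and discount
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action, rng):
    r, c = state
    # Rewards depend only on the state the action is taken from.
    if (r, c) == (0, SIZE - 1):        # top-right corner: reward 1
        reward = 1.0
    elif c == 0 or r == SIZE - 1:      # left or bottom wall: N(-0.5, 1)
        reward = rng.normal(-0.5, 1.0)
    else:
        reward = 0.0
    dr, dc = action                    # transitions are deterministic
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    return nxt, reward

def collect(behavior, n_traj=100, horizon=100, seed=0):
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n_traj):
        s = (rng.integers(SIZE), rng.integers(SIZE))  # uniform start
        for _ in range(horizon):
            a = behavior(s, rng)
            s2, rew = step(s, a, rng)
            data.append((s, a, rew, s2))
            s = s2
    return data
\end{verbatim}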
\paragraph{Data.} We collect data from a behavior policy that is a mixture of the uniform policy (with probability 0.8) and an optimal policy (with probability 0.2). We collect 100 trajectories of length 100. \paragraph{Training procedure.} We give the agent access to the deterministic transitions. The only thing for the agent to do is estimate the rewards from the data and then learn in the empirical MDP. We perform tabular Q evaluation by dynamic programming. We initialize with the empirical rewards and do 100 steps of dynamic programming with discount 0.9. Regularized policy updates are solved for exactly by setting $ \pi_i(a|s) \propto \beta(a|s) \exp(\frac{1}{\alpha} \widehat Q^{\pi_{i-1}}(s,a))$. \subsection{Overestimation experiment (Figure 5)} This experiment uses the same setup as the MSE experiment. The main difference is we also consider the Q functions learned during training and demonstrate the overestimation relative to the Q functions trained on the evaluation dataset as in the MSE experiment. \subsection{Mixed data experiment (Figure 6)} We construct datasets with $ p_m = \{0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0\}$ by mixing the random and medium datasets from D4RL and then run the same training procedure as we did for the benchmark experiments. Each dataset has the same size, but a different proportion of trajectories from the medium policy. \newappendix{Learning curves}\label{sec:app_extra_exp} In this section we reproduce the learning curves and hyperparameter plots across the one-step, multi-step, and iterative algorithms with reverse KL regularization, as in Figure \ref{fig:learning_curves}. \vspace{-0.2cm} \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-halfcheetah-medium-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-walker2d-medium-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-hopper-medium-v2.png} \vspace{-0.2cm} \caption{Learning curves on the medium datasets.} \label{fig:app_lc_medium} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-halfcheetah-medium-expert-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-walker2d-medium-expert-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-hopper-medium-expert-v2.png} \vspace{-0.2cm} \caption{Learning curves on the medium-expert datasets.} \label{fig:app_lc_medium-expert} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-halfcheetah-random-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-walker2d-random-v2.png} \includegraphics[width=0.85\textwidth]{figures/offline-rl/learning curves/lc-hopper-random-v2.png} \vspace{-0.2cm} \caption{Learning curves on the random datasets.} \label{fig:app_lc_random} \end{figure} \printendnotes
{ "alphanum_fraction": 0.7600378926, "avg_line_length": 84.1901840491, "ext": "tex", "hexsha": "d577743c52b0124190ee8813c55ecbb8dbd3eab3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-08-25T13:01:43.000Z", "max_forks_repo_forks_event_min_datetime": "2021-08-25T13:01:43.000Z", "max_forks_repo_head_hexsha": "a9842f84e53ca47ec849488b6cb9acb8a11336ef", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "willwhitney/doctoral-thesis", "max_forks_repo_path": "content/offline-rl-appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a9842f84e53ca47ec849488b6cb9acb8a11336ef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "willwhitney/doctoral-thesis", "max_issues_repo_path": "content/offline-rl-appendix.tex", "max_line_length": 805, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a9842f84e53ca47ec849488b6cb9acb8a11336ef", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "willwhitney/dissertation", "max_stars_repo_path": "content/offline-rl-appendix.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-20T20:31:06.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-20T20:31:06.000Z", "num_tokens": 3510, "size": 13723 }
% !TEX program = xelatex
\documentclass{resume}
\begin{document}
\pagenumbering{gobble} % suppress displaying page number
\name{Sang woo Ham}
\basicInfo{
  \email{[email protected]}
  \phone{+1-765-409-4164}
  \linkedin[ecosang]{https://www.linkedin.com/in/ecosang}
  \github[ecosang]{https://github.com/ecosang}
  \homepage[website]{https://ecosang.github.io/blog}
}
\section{Education}
\section{Research Experience}
\section{Publications}
\section{Skills}
\section{Experience}
\section{Scholarship, Awards, Certifications}
\end{document}
{ "alphanum_fraction": 0.740942029, "avg_line_length": 17.8064516129, "ext": "tex", "hexsha": "aaed11434de981d3bfc80df976cc7d399bd42620", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2ddc5825b520a3b2b5c0951a2487b06c616c246f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "ecosang/blog", "max_forks_repo_path": "misc/Resume-Generator-master/init.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "2ddc5825b520a3b2b5c0951a2487b06c616c246f", "max_issues_repo_issues_event_max_datetime": "2022-02-26T10:21:10.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-12T14:16:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "ecosang/blog", "max_issues_repo_path": "misc/Resume-Generator-master/init.tex", "max_line_length": 57, "max_stars_count": null, "max_stars_repo_head_hexsha": "2ddc5825b520a3b2b5c0951a2487b06c616c246f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ecosang/blog", "max_stars_repo_path": "misc/Resume-Generator-master/init.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 167, "size": 552 }
% This is "Background.tex" % A content sub-file for Trustfull Action Suggestion Thesis Extended Abstract \section{Background} \label{sec:Background} % Reference 3 types of agents that may be applicable to the project: % - Social Agents % - Afective Agents % - Anthromorphic Agents % --- Embodied Conversational Agents (ECAs) % When talking about trust and reputation mention the differences between both. % Tell what is a trust model and what is a reputation model Before discussing related work and our solution to the problem, we will present the main concepts that will be mentioned in the rest of this report, specifically regarding trust and reputation. \subsection{Trust} \label{subsec:Trust} Trust is regarded throughout the literature as one of the fundamental components of human society, being essential in cooperative and collaborative behaviour, having been studied in a multitude of disciplines, from Psychology and Sociology, to Philosophy and Economy\cite{Rousseau1998, Jones1997, Sabater2005}. For that reason, it is no wonder that it acquired a very large number of different definitions throughout the years of study, causing the problem of not existing a consensus on a definition of trust\cite{Castelfranchi2010}. In the scope of this project, the most relevant start for our discussion is the dyadic definition of trust: `an orientation of an actor (the \textbf{truster}) toward a specific person (the \textbf{trustee}) with whom the actor is in some way interdependent' (taken from \cite{Simpson2007}), as we want to focus on interpersonal relationships. This definition has been expanded throughout the literature, often adapted to fit the context or scope of the work, but three main definitions are highlighted in computational trust: \begin{itemize} \tallitem First, Gambetta\cite{Gambetta1988} defined trust as follows: `Trust is the \textit{subjective probability} by which an individual, A, \textit{expects} that another individual, B, performs a given action on which its \textit{welfare depends}' (taken from \cite{Castelfranchi2010}). This is accepted by most authors as one of the most classical definitions of trust, but it is too restrictive with its uni-dimensionality, as it only refers to predictability of the trustor, and does not take into account competence in executing the given action. \tallitem Marsh\cite{Marsh1994} was the first author to formalize trust as a measurable Computational Concept, continuing the perspective of reducing trust to a numerical value, set by Gambetta\cite{Gambetta1988}, but also adding that: X trusts Y if, and only if, `X \textit{expects} that Y will behave according to X's best interest, and will not attempt to harm X' (taken from \cite{Castelfranchi2010}). This definition does not represent other parts of trust, such as the notion that trustor must ascertain some risk from delegating the action to the trustee. \tallitem Castelfranchi and Falcone then introduced a Cognitive aspect to Computational Trust\cite{Castelfranchi1998}. They define trust as the mental state of the trustor and the action in which the trustor refers upon the trustee to perform. This is the definition of trust that we will adopt throughout the rest of the report, as it represents a vision of trust that takes into account the trustor set of beliefs and intentions, approaching it to an agent's cognitive model, while also linking trust to the action being performed, as one might trust another for certain types of actions and not for others (e.g. 
I may trust my squire to polish my sword, but not to swing it).
\end{itemize}
\subsubsection{Castelfranchi and Falcone's Trust}
\label{subsubsec:CastelfranchiTrust}
More explicitly, Castelfranchi and Falcone\cite{Castelfranchi1998} state that trust is a conjunction of three concepts:
\begin{itemize}
\item A \textit{mental attitude} or (pre)disposition of the agent towards another agent; this is represented by beliefs about the trustee's qualities and defects;
\item A \textit{decision} to rely upon another, thereby making the trustor `vulnerable' to the possible negative actions of the trustee;
\item The \textit{act} of trusting another agent and the subsequent behaviour of counting on the trustee to perform according to plan.
\end{itemize}
By describing trust as a mental attitude it is also implied that: `Only a cognitive agent can trust another agent; only an agent endowed with goals and beliefs'\cite{Castelfranchi2010}. From this definition we should also address one important component, \textbf{Delegation}, which happens when an agent (X) needs or likes the action delegated to another agent (Y), so X includes it in his plans, therefore relying on Y. X plans to achieve his goal through Y. So, he formulates in his mind a multi-agent plan in which a state or action goal is delegated to Y\cite{Castelfranchi1998}.
\subsection{Reputation and Image}
\label{subsec:Reputation}
\textit{Reputation} is also a concept that appears very often linked with trust in the literature, especially since recent models created for representing trust have been focused on \acp{MAS} (see \cite{Abdul-rahman2000, Sabater2002, Sabater2006, Huynh2006, Pinyol2009}), where trust models have been extended to also include reputation as a source of trust. An agent is not influenced only by its own beliefs about the subject, its \textit{Image}, but also by what other agents say about it, its \textit{Reputation}. We describe Image and Reputation as introduced by Sabater in \cite{Sabater2006}: Image is defined as the agent's personal belief about a certain property of the target agent, be it a physical, mental or social trait. Reputation is a meta-belief about an impersonal evaluation of the target; in other words, it is the belief about the evaluation that is being circulated about the target. On a more concrete level, reputation is separated into \textit{shared evaluation} and \textit{shared voice}. Consider that an agent has beliefs about how other agents evaluate a certain target: if, in a set of agents, these beliefs converge to a value (e.g. `good' or `bad'), we can say that there exists a shared evaluation of the target. It is important to note that all sharing agents are known and well defined. A shared voice is a belief that another set of agents themselves believe that an evaluation of the target exists. In other words, it is the belief that a group of agents will consistently report that a voice exists. These meta-beliefs are considered important as one is not required to believe that the others' evaluation is correct, but might still believe that it exists. The mental decisions regarding reputation can be categorized as follows:
\begin{itemize}
\item Epistemic decisions: accepting trust beliefs to update or generate a given image or reputation;
\item Pragmatic-Strategic decisions: using trust beliefs to decide how to behave towards other agents;
\item Memetic decisions: transmitting trust beliefs to others.
\end{itemize}
This distinction between possible decisions allows us to describe how one may transmit reputation without bearing responsibility for the credibility or truthfulness of the content transmitted, as one does not have to commit to accepting the reputation value and can simply report that the rumour exists.
\subsection{Game Theory}
\label{subsec:GameTheory}
Game Theory is the field of study that defines and analyses situations involving conflict or cooperation between multiple intelligent decision makers. Each such situation is called a game, and it is distilled to its core by defining the limited and simple set of actions that the players may perform and how these actions affect the players. Game Theory then analyses the decision strategies of each player, assuming that every player will try to maximise their payoff (how much the player gains) with their action. To better explain the concepts we want to present, we will introduce one of the most common exemplary models of Game Theory, the Prisoner's Dilemma.
\subsubsection{Prisoner's Dilemma}
\label{subsubsec:PrisonersDilemma}
The Prisoner's Dilemma is a two-player game and is usually described as follows: Two criminal partners are arrested and locked in separate cells with no way of communicating with each other. They are then questioned separately and given two options, betray the other prisoner by testifying against him, or remain silent, with the following outcomes:
\begin{itemize}
\item If both prisoners betray each other, both get 2 years in prison;
\item If one of them betrays and the other remains silent, the betrayer goes free and the other gets 3 years in prison;
\item If both remain silent, both get just 1 year in prison.
\end{itemize}
We can represent betraying as \textit{Defecting} (D), and staying silent as \textit{Cooperating} (C), and name the players \textit{player1} and \textit{player2}. The game's possible outcomes can then be represented by a payoff matrix, like the one in Table \ref{PrisonerDilemaPayoffMatrix}, where each entry is a tuple of the form (\textit{player1} payoff, \textit{player2} payoff). As the goal is to avoid years in prison, the payoffs correspond to $Max\ years\ in\ prison - years\ received\ in\ prison$.
\begin{table}[ht]
\centering
\begin{tabular}{l|l|l|}
\cline{2-3}
 & $C_2$ & $D_2$ \\ \hline
\multicolumn{1}{|l|}{$C_1$} & 2,2 & 0,3 \\ \hline
\multicolumn{1}{|l|}{$D_1$} & 3,0 & 1,1 \\ \hline
\end{tabular}
\caption{Prisoner's Dilemma Payoff Matrix}
\label{PrisonerDilemaPayoffMatrix}
\end{table}
In this game we can say that \textit{Defecting} \textbf{dominates} \textit{Cooperating}, as for any action that the adversary may choose, \textit{Defecting} always gives the individual player a better payoff\cite{Nash1951}.
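Concretely, writing $u_1(x, y)$ for \textit{player1}'s payoff when \textit{player1} plays $x$ and \textit{player2} plays $y$ (a notation introduced here only for this check), the entries of Table \ref{PrisonerDilemaPayoffMatrix} give
\[
u_1(D, C) = 3 > 2 = u_1(C, C), \qquad u_1(D, D) = 1 > 0 = u_1(C, D),
\]
so \textit{Defecting} yields a strictly better payoff than \textit{Cooperating} against either choice of \textit{player2}, and by symmetry the same holds for \textit{player2}. This is why mutual defection, with payoff $(1,1)$, is the expected outcome for rational players, even though mutual cooperation, with payoff $(2,2)$, would leave both better off.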
{ "alphanum_fraction": 0.7934560327, "avg_line_length": 106.3043478261, "ext": "tex", "hexsha": "fef961218965ee2d255b400c380d320181e7c3f8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e45a3ed523b1cb5b9bfce6d1e55ebfba1bd0fe42", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "NunoXu/Trust-Human-Agent-Interaction-MSc-Thesis-Docs", "max_forks_repo_path": "MSc-Extended-Abstract/content/Background.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e45a3ed523b1cb5b9bfce6d1e55ebfba1bd0fe42", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "NunoXu/Trust-Human-Agent-Interaction-MSc-Thesis-Docs", "max_issues_repo_path": "MSc-Extended-Abstract/content/Background.tex", "max_line_length": 1081, "max_stars_count": null, "max_stars_repo_head_hexsha": "e45a3ed523b1cb5b9bfce6d1e55ebfba1bd0fe42", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "NunoXu/Trust-Human-Agent-Interaction-MSc-Thesis-Docs", "max_stars_repo_path": "MSc-Extended-Abstract/content/Background.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2327, "size": 9780 }
\documentclass[a4paper,11pt]{report} \newcommand{\linespacing}{1.5} \renewcommand{\baselinestretch}{\linespacing} \usepackage{fontawesome} \usepackage{array} \usepackage{arydshln} \usepackage{graphicx,color} \usepackage{epstopdf} \usepackage[a4paper,top=2.5cm,bottom=2.5cm,left=2.5cm,right=2.5cm,headsep=10pt]{geometry} \usepackage[colorlinks,pdfusetitle,urlcolor=blue,citecolor=blue,linkcolor=blue,bookmarksnumbered,plainpages=false]{hyperref} \usepackage[american]{babel} \usepackage{csquotes} \usepackage[backend=biber, style=apa]{biblatex} % this template uses APA bibliography style, change it here if you need \addbibresource{master.bib} % a .bib file with two example references is included; change the .bib filename here to include your own database \DeclareLanguageMapping{american}{american-apa} \renewcommand{\arraystretch}{1.4} \flushbottom \pagestyle{plain} %%%%%%%%%%%%%%%%%% % uncomment the macro below if you run into problems with both doi and url printed in the bibliography % method taken from: % https://tex.stackexchange.com/questions/154864/biblatex-use-doi-only-if-there-is-no-url % %\renewbibmacro*{doi+eprint+url}{% % \iftoggle{bbx:url} {\iffieldundef{doi}{\usebibmacro{url+urldate}}{}} {}% % \newunit\newblock \iftoggle{bbx:eprint} {\usebibmacro{eprint}} {}% % \newunit\newblock \iftoggle{bbx:doi} {\printfield{doi}} {}} % %%%%%%%%%%%%%%%%%% \begin{document} \pagenumbering{roman} \thispagestyle{empty} \begin{flushright} % https://public.univie.ac.at/en/downloads/logos/ \includegraphics[width=7cm]{uni1-eps-converted-to} \end{flushright} \vskip20mm \begin{center} \huge\textbf{MASTERARBEIT / MASTER'S THESIS} \vskip15mm \normalsize{Titel der Masterarbeit / Title of the Master's Thesis} \LARGE\textbf{``Thesis at UniWien: A template''} % Insert title here \vskip15mm \normalsize{verfasst von / submitted by} \Large\textbf{Your name, Bc.} % Insert your name in this line \vskip15mm \normalsize{angestrebter akademischer Grad / in partial fulfillment of the requirements for the degree of} \Large\textbf{Master of Science (MSc)} \small \end{center} \vskip15mm \begingroup \renewcommand{\arraystretch}{1.2} \begin{flushleft} Wien, 2016 / Vienna 2016 % Insert graduation year in this line \vfill \addtolength{\tabcolsep}{-6pt} \begin{table}[h!] \begingroup \renewcommand*{\arraystretch}{1.2} %\renewcommand*{\linespacing}{1} \renewcommand\baselinestretch{1} \small \begin{tabular}{p{7cm} p{10cm}} Studienkennzahl lt. Studienblatt / \newline degree programme code as it appears on \newline the student record sheet: & A 066 013 \\ Studienrichtung lt. Studienblatt / \newline degree programme as it appears on \newline the student record sheet:& Joint Degree Programme MEi:CogSci Cognitive Science \\ Betreut von / Supervisor: & Your supervisor's name \\ % Insert your program/supervisor here \end{tabular} \endgroup \end{table} \addtolength{\tabcolsep}{6pt} \end{flushleft} \endgroup \chapter*{} \thispagestyle{empty} \chapter*{Acknowledgements} \renewcommand{\baselinestretch}{\linespacing} \small\normalsize Thank you X, Y, and Z. % Some of the code for creating ToC id adapted from the following template: % https://www.sharelatex.com/templates/thesis/university-of-sussex-thesis \newpage \pdfbookmark[0]{Contents}{contents_bookmark} \renewcommand{\baselinestretch}{1.3}\normalsize \tableofcontents \renewcommand{\baselinestretch}{1.5}\normalsize \clearpage \chapter*{Abstract} Abstract in English. \phantomsection \addcontentsline{toc}{chapter}{Abstract} \clearpage \chapter*{Zusammenfassung} Abstract in German. 
\phantomsection \addcontentsline{toc}{chapter}{Zusammenfassung} \listoftables \phantomsection \addcontentsline{toc}{chapter}{List of Tables} \listoffigures \phantomsection \addcontentsline{toc}{chapter}{List of Figures} \newpage \pagenumbering{arabic} % Insert additional chapters here \include{ch1} \include{ch2} \clearpage \phantomsection \addcontentsline{toc}{chapter}{Bibliography} \printbibliography \noindent \appendix \include{appendix} \clearpage \phantomsection \addcontentsline{toc}{chapter}{Curriculum Vitae} %\chapter*{Curriculum Vitae} \newpage \renewcommand\baselinestretch{0.8} % adjust this parameter to fit your cv on one page % Adjust the cv template below for your own needs \begin{table}[h!] \begin{tabular}{ p{220pt} p{220pt} } \multicolumn{2}{c}{\Huge{\textsc{Your Name}}} \\ \hline\noalign{\vskip 1mm} {\faBuildingO} Address line 1 & {\faPhone} +43 000 000 000 \\ \hspace{4.4mm}Address line 2 & {\faEnvelopeO} \href{mailto:[email protected]}{[email protected]}\\ \hspace{4.4mm}Address line 3 & {\faGlobe} \hspace{0.5mm}\href{http://www.univie.ac.at/}{http://www.univie.ac.at/} \\ \hline\noalign{\vskip 1mm} \end{tabular} \end{table} \begin{table}[h!] \begin{tabular}{ p{60pt} p{380pt} } \large{\textsc{Education}} & \\ \hline\noalign{\vskip 1mm} 2016/07 & \textbf{MSc Cognitive Science} \\ & University of Vienna \\ 2014/07 & \textbf{Bc. Something else} \\ & University of Something else \\ \end{tabular} \end{table} \begin{table}[h!] \begin{tabular}{ p{60pt} p{380pt} } \large{\textsc{Employment}} & \\ \hline\noalign{\vskip 1mm} 2016/07 & \textbf{Your position 1} \\ & Workplace \\ & Address \\ & Responsibilities: Something interesting \\ 2015/07 & \textbf{Your position 2} \\ & Workplace \\ & Address \\ & Responsibilities: Something interesting \\ 2014/07 & \textbf{Your position 3} \\ & Workplace \\ & Address \\ & Responsibilities: Something interesting \\ \end{tabular} \end{table} \begin{table}[h!] \begin{tabular}{ p{60pt} p{380pt} } \multicolumn{2}{l}{\large{\textsc{Technical Skills}}} \\ \hline\noalign{\vskip 1mm} Languages: & English \\ Programming: & \LaTeX{} \\ Other: & \\ \end{tabular} \end{table} \noindent \normalsize \vfill \end{document}
{ "alphanum_fraction": 0.7370313303, "avg_line_length": 28.4926829268, "ext": "tex", "hexsha": "2351eca1c7898556a13f9726e681cc6e695bd070", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-02-22T16:20:56.000Z", "max_forks_repo_forks_event_min_datetime": "2018-06-18T03:42:35.000Z", "max_forks_repo_head_hexsha": "2dfe63e161464e69b7d92ec2d74780b31110e870", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ppatrzyk/UniWien-LaTeX-Thesis", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2dfe63e161464e69b7d92ec2d74780b31110e870", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ppatrzyk/UniWien-LaTeX-Thesis", "max_issues_repo_path": "main.tex", "max_line_length": 170, "max_stars_count": 9, "max_stars_repo_head_hexsha": "2dfe63e161464e69b7d92ec2d74780b31110e870", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ppatrzyk/UniWien-LaTeX-Thesis", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-22T16:20:50.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-18T22:33:00.000Z", "num_tokens": 1925, "size": 5841 }
%!TEX root = ../thesis_polimi.tex
\chapter{Experimental Validation} % (fold)
\label{chap:validation}
\start{I}{n} this chapter we describe the results that validated the effectiveness of \thesystem. We had two goals in mind: first, we wanted to confirm the effectiveness of the \important{Classifier}, which is easily quantifiable in terms of the confusion matrix produced; second, we wanted to test how well \thesystem performs when deployed in the wild. To this end we let the system analyze one week of real passive DNS data and checked whether it \emph{classifies} unseen domains belonging to known threats and whether it \emph{detects} new threats. This latter test is far from trivial: Given the massive quantity of data, it is very hard even to give a rough estimation of false negatives, as it would require manually checking every domain discarded in the \important{Filtering Phase}. Moreover some results, though important, cannot be quantified: For instance \thesystem was able to correctly separate clusters of domains belonging to the same threat, but using different DGAs, and to merge together two clusters of domains employed by \texttt{Palevo}, correctly understanding that they belonged to the same botnet. Both of the aforementioned tests are presented in the remainder of this chapter, highlighting the quality of the obtained results.
\paragraph{Chapter Organization} The remainder of the chapter is organized in the following fashion:
\begin{itemize}
\item in Section~\ref{sec:goals} we precisely set our goals, i.e., what we want to prove with our experiments;
\item in Section~\ref{sec:dataset} we describe the dataset employed in our experiments;
\item in Section~\ref{sec:the_classifier} we test the effectiveness of the classifier;
\item in Section~\ref{sec:cerberus_in_the_wild} we run a one-week simulation to see how the system would behave once deployed.
\end{itemize}
\newpage
\section{Goals} % (fold)
\label{sec:goals}
\sectionstart{A}{s} discussed in Section~\ref{sub:phoenix_detecting_dga_based_botnets}, \phoenix suffers from two main kinds of shortcomings: one is \emph{conceptual}, whereas the other concerns the validation. The conceptual shortcomings regard the use of \emph{ad hoc} parameters when it comes to classifying unseen domains, an approach that is prone to overfitting and, in our case, is not able to correctly classify AGDs that feature a different TLD or a domain length that evades the previous thresholds. With our first experiment we want to validate our classifier: This test is described in Section~\ref{sec:the_classifier}. The second kind of shortcoming has to do with validation. \phoenix was not tested in the wild, whereas this is a mandatory step for a detection system that wants to be deployed in the real world. In \thesystem we have addressed this issue, and in Section~\ref{sec:cerberus_in_the_wild} we confirm this claim.
% section goals (end)
\section{Dataset} % (fold)
\label{sec:dataset}
\sectionstart{C}{erberus}' tests employed real passive DNS data collected in the wild. With the term \emph{passive}, we refer to the technique invented by~\citet{weimer2005passive} in 2004, called ``Passive DNS Replication'', to obtain Domain Name System data from production networks, and store it in a database for later reference~\cite{weimer2005passive}. The sensors, deployed as close as possible to large caching servers, collect the data and send it to an \emph{analyzer} module, where the data is filtered.
After the filtering process, the data is transformed into a format that makes the querying process easier, and stored in a database. We were able to get access to approximately three months of data, from the 12th of January to the 15th of March 2013. About 609 million DNS queries/replies were collected by the ISC/SIE monitor. Data statistics are summarized in Table~\ref{tab:stats}.
% We decided
% also to investigate the presence of punycode records, which resulted to be
% approximately the 0.007\% of the dataset. We think that attackers that would
% start employing punycode domains would get easily spotted, as there would be
% peaks of NXDOMAIN DNS answers for punycode domains, a phenomenon that could
% immediately identified by defenders because of its peculiarity.
\begin{table}[h!tp]
\centering
\begin{tabular}{rl}
Begin of Recording & Sat, 12 Jan 2013 18:19:57 GMT \\
End of Recording & Fri, 15 Mar 2013 23:37:11 GMT \\
Total Records & 608,958,044 \\
% Punycode Records & 41,474 \\
\end{tabular}
\caption{ISC/SIE data summary statistics.}
\label{tab:stats}
\end{table}
The data was divided into snapshots of twenty minutes on average, counting 200,000 DNS messages, of which about 50,000 were successful (i.e., non \texttt{NXDOMAIN}) DNS replies.
\section{The Classifier} % (fold)
\label{sec:the_classifier}
\sectionstart{W}{e} tested \thesystem's classifier against the ground truth generated by \phoenix, i.e., the clusters generated during the \important{Bootstrap Phase}. We employed data \emph{automatically} labeled by \thesystem and not data \emph{manually} labeled by humans, as we wanted to run a test as close as possible to the real-case scenario. \thesystem, in fact, will automatically produce the labeled records later to be used for classification, and we wanted our experiment to reproduce such a situation. Note that, though automatically labeled, the clusters' maliciousness and quality were manually assessed by~\citet{schiavoni2013}, and they therefore represent a valid dataset for our experiment.
\subsection{Accuracy} % (fold)
\label{sub:accuracy}
We considered four clusters counting 1,100 samples, so that we could measure the accuracy as a function of the number of domains used for training, up to 1,000 samples, and leave the remaining 100 domains for testing. We validated the classifier using \emph{repeated random sub-sampling validation}, ten times for each number of points. This means that, for instance, ten times we randomly selected 200 points to train the classifier and 100 points to test it: This validation method reveals the effects of overfitting, if any. We collected the overall accuracies from the confusion matrix along with the computation time. We repeated this operation, in steps of 100, until we counted 1,000 points in the training set. This data is reported in Table~\ref{tab:classifier_stats}.
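To make this protocol concrete, the following sketch shows one possible implementation of the repeated random sub-sampling described above, using an SVM with a precomputed Gram matrix; the helper \texttt{ssk\_gram} stands for an SSK implementation and, like all names in the snippet, is an illustrative assumption rather than the actual code of \thesystem.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def subsampling_accuracy(domains, labels, ssk_gram, n_train,
                         n_test=100, repetitions=10, seed=0):
    # Repeated random sub-sampling validation of an SVM that uses a
    # precomputed Subsequence String Kernel (SSK) Gram matrix.
    # ssk_gram(A, B) is assumed to return the |A| x |B| kernel matrix.
    rng = np.random.default_rng(seed)
    domains, labels = np.asarray(domains), np.asarray(labels)
    scores = []
    for _ in range(repetitions):
        idx = rng.permutation(len(domains))
        tr, te = idx[:n_train], idx[n_train:n_train + n_test]
        clf = SVC(kernel="precomputed")
        clf.fit(ssk_gram(domains[tr], domains[tr]), labels[tr])
        pred = clf.predict(ssk_gram(domains[te], domains[tr]))
        scores.append(accuracy_score(labels[te], pred))
    return np.mean(scores), np.std(scores)
\end{verbatim}
Calling this function for training sizes from 200 to 1,000 corresponds to the protocol behind Table~\ref{tab:classifier_stats}.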
\begin{table}[!htp] \centering \pgfplotstabletypeset[% columns={points,avg,std,min,max,time}, columns/points/.style={ column name=\textsc{Domains}, column type = {r} }, columns/max/.style={ column name=\textsc{Max}, precision=3, dec sep align, fixed, fixed zerofill }, columns/min/.style={ column name=\textsc{Min}, precision=3, dec sep align, fixed, fixed zerofill }, columns/std/.style={ column name=\textsc{Std} }, columns/avg/.style={ column name=\textsc{Avg}, precision=3, dec sep align, fixed, fixed zerofill, }, columns/time/.style={ column name=\textsc{Time} (s), dec sep align }, every head row/.style={ before row={% \toprule & \multicolumn{8}{c}{\textsc{Accuracy}} & \\ \cmidrule{2-8} }, after row=\midrule}, every last row/.style={ after row=\bottomrule}, ]{data/classifier.dat} \caption{Cerberus classifier accuracy statistics.} \label{tab:classifier_stats} \end{table} Overall accuracy grows and then stabilizes at about 93\% from 800 points on (see Figure~\ref{fig:cerberus_accuracies}). For this reason the implementation of the \important{Classifier} has a sampling upper bound of 800 points to be randomly selected from the clusters to train the SVM used for classification. \begin{figure}[!htp] \centering \begin{tikzpicture} \begin{axis}[% xlabel=Accuracy, ylabel=Points, width=.9\linewidth, axis x line*=bottom, axis y line*=left, major tick style={draw=none}, y=.5cm, ymax=10, ytick={1,2,3,4,5,6,7,8,9}, xtick={0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95}, xticklabels={0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95}, yticklabels={200, 300, 400, 500, 600, 700, 800, 900, 1000}, % boxplot/draw direction=y, boxplot/every box/.style={fill=Tufte, draw=Tufte}, boxplot/every median/.style={draw, ultra thick},] % \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.915\\ 0.9\\ 0.885\\ 0.92\\ 0.88\\ 0.895\\ 0.9\\ 0.8975\\ 0.8775\\ 0.8825\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.93\\ 0.88\\ 0.905\\ 0.925\\ 0.9325\\ 0.9175\\ 0.91\\ 0.92\\ 0.8875\\ 0.8925\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.9175\\ 0.9125\\ 0.915\\ 0.9275\\ 0.9375\\ 0.8875\\ 0.935\\ 0.9025\\ 0.9275\\ 0.9125\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.92\\ 0.92\\ 0.9175\\ 0.92\\ 0.915\\ 0.8975\\ 0.9125\\ 0.9025\\ 0.9325\\ 0.93\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.9275\\ 0.905\\ 0.92\\ 0.9275\\ 0.94\\ 0.915\\ 0.92\\ 0.9275\\ 0.925\\ 0.895\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.915\\ 0.8975\\ 0.9325\\ 0.9075\\ 0.9075\\ 0.94\\ 0.9075\\ 0.915\\ 0.93\\ 0.9125\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.9225\\ 0.9325\\ 0.91\\ 0.9375\\ 0.92\\ 0.9475\\ 0.9325\\ 0.9075\\ 0.9475\\ 0.9225\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.8925\\ 0.9325\\ 0.93\\ 0.9325\\ 0.9475\\ 0.94\\ 0.92\\ 0.92\\ 0.92\\ 0.9225\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 0.95\\ 0.925\\ 0.905\\ 0.935\\ 0.9275\\ 0.935\\ 0.9375\\ 0.91\\ 0.925\\ 0.9225\\ }; \end{axis} \end{tikzpicture} \caption{Cerberus classifier accuracies.} \label{fig:cerberus_accuracies} \end{figure} % subsection accuracy (end) \subsection{Analysis of Classification Errors} % (fold) \label{sub:analysis_of_classification_errors} We report in Figure~\ref{fig:confusion} one of the confusion matrices, obtained using 1,000 samples for training, during the validation. As you can see in the picture the best performances are obtained with cluster \texttt{a}, while the worst with cluster \texttt{d}. 
This holds for all the sub-sampling validations. This is due to the different lengths and distribution of characters that characterize the clusters. Cluster \texttt{a} in fact, is composed of domains that, on average, count more than thirty characters and are produced using a DGA that leverages a common hashing function over the date. This kind of algorithm generates domains that share a sort of ``pattern'' which allows the classifier to better recognize such domains. On the other hand, cluster \texttt{d}'s domains are three characters long: Therefore it is harder to find ``patterns'' shared by the domains and consequently harder for the \textbf{Classifier} to classify the domains correctly. This is confirmed by looking at the distribution of the distances among the domains in the clusters, computed using the Subsequence String Kernel and reported in Figure~\ref{fig:clusters_distribution}. It is evident that distances in cluster \texttt{a} have a normal distribution with a very low mean and variance, which indicate very high intra-cluster similarity, whereas cluster \texttt{d}'s distances are randomly distributed and exhibit higher values, which indicate low intra-cluster similarity. Note that despite of this weakness, the overall accuracy allows us to use this classifier when \thesystem is deployed in the wild. \begin{figure}[!htp] \sffamily \begin{minipage}{.5\textwidth} \centering \begin{tabular}{crcccc} & & \multicolumn{4}{c}{Predicted} \\ & & a & b & c & d \\ \cmidrule(r){2-6} \multirow{4}{*}{\rotatebox{90}{Actual}} & a & \bverb+100+ & \verb+ 0+ & \verb+ 0+ & \verb+ 0+ \\ & b & \verb+ 1+ & \bverb+92+ & \verb+ 6+ & \verb+ 1+ \\ & c & \verb+ 2+ & \verb+ 0+ & \bverb+98+ & \verb+ 0+ \\ & d & \verb+ 3+ & \verb+ 0+ & \verb+ 6+ & \bverb+91+ \\ \end{tabular} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \begin{tabular}{l} a \\ \midrule \verb+caaa89e...d4ca925b3e2.co.cc+ \\ \verb+f1e01ac...51b64079d86.co.cc+ \\ b \\ \midrule \verb+kdnvfyc.biz+ \\ \verb+wapzzwvpwq.info+ \\ c \\ \midrule \verb+jhhfghf7.tk+ \\ \verb+faukiijjj25.tk+ \\ d \\ \midrule \verb+cvq.com+ \\ \verb+epu.org+ \\ \end{tabular} \end{minipage} \caption{Cerberus SSK classifier confusion matrix.} \label{fig:confusion} \end{figure} \begin{figure}[!htp] \centering \begin{tikzpicture} \begin{groupplot}[ group style={ group name=histograms, group size=1 by 2, xlabels at=edge bottom, xticklabels at=edge bottom, vertical sep=0em }, ybar, scaled y ticks = false, width=\textwidth, height=5cm, axis x line*=bottom, axis y line*=left, major tick style={draw=none}, ymin=0, xmax=0.9, xmin=0.1, ytick={0, 5000, 10000, 15000}, yticklabels={0, {5,000}, {10,000}, {15,000}}, xtick={0.2, 0.4, 0.6, 0.6, 0.8}, xticklabels={0.2, 0.4, 0.6, 0.6, 0.8}, ] \nextgroupplot \addplot[% fill=Tufte, draw=none ] coordinates {(0.162101122991, 666) (0.179813448003, 5472) (0.197525773016, 12356) (0.215238098028, 15188) (0.23295042304, 13652) (0.250662748052, 9837) (0.268375073065, 6418) (0.286087398077, 3904) (0.303799723089, 2291) (0.321512048101, 1490) (0.339224373114, 963) (0.356936698126, 735) (0.374649023138, 661) (0.39236134815, 586) (0.410073673163, 584) (0.427785998175, 632) (0.445498323187, 582) (0.463210648199, 611) (0.480922973211, 520) (0.498635298224, 513) (0.516347623236, 527) (0.534059948248, 435) (0.55177227326, 373) (0.569484598273, 284) (0.587196923285, 235) (0.604909248297, 160) (0.622621573309, 71) (0.640333898322, 35) (0.658046223334, 15) (0.675758548346, 4)} ; \nextgroupplot \addplot[fill=Tufte,draw=none] coordinates 
{(0.159564628861, 17) (0.183793974484, 308) (0.208023320108, 355) (0.232252665732, 1698) (0.256482011356, 2443) (0.280711356979, 1379) (0.304940702603, 4656) (0.329170048227, 5166) (0.353399393851, 2948) (0.377628739474, 3223) (0.401858085098, 4565) (0.426087430722, 2949) (0.450316776346, 3744) (0.474546121969, 1948) (0.498775467593, 1739) (0.523004813217, 2326) (0.54723415884, 5792) (0.571463504464, 4148) (0.595692850088, 4207) (0.619922195712, 2952) (0.644151541335, 1973) (0.668380886959, 864) (0.692610232583, 601) (0.716839578207, 664) (0.74106892383, 2752) (0.765298269454, 4917) (0.789527615078, 6270) (0.813756960702, 4136) (0.837986306325, 967) (0.862215651949, 93)} ; \end{groupplot} \end{tikzpicture} \caption{From the top: cluster \texttt{a}'s and cluster \texttt{d}'s distances distributions.} \label{fig:clusters_distribution} \end{figure} % subsection analysis_of_classification_errors (end) \subsection{Training Speed} % (fold) \label{sub:training_speed} Training time grows, in a linear fashion, from 14.13 to 101.49 seconds (see Figure~\ref{fig:cerberus_training_time}). As \textsc{Cerberus} is designed to analyze a \emph{live} stream of DNS data, the training time is not negligible, as it could make the classification process too long. To address this issue we can store the trained SVM machines: For instance if an unseen domain $d$ can belong either to cluster $\alpha$ or cluster $\beta$, \thesystem trains the SVM using cluster $\alpha$ and cluster $\beta$. Then this machine is stored as a binary file using the \texttt{cPickle} \texttt{Python} library: When another unseen domain $g$ can belong either to cluster $\alpha$ or cluster $\beta$, \thesystem retrieves the stored trained SVM, loads it in memory and uses it to label $g$, thus drastically improving the performances. 
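A minimal sketch of this caching scheme is shown below; the cache location, the \texttt{id} attribute on clusters and the \texttt{train\_fn} callback are illustrative assumptions, not the actual interface of \thesystem.
\begin{verbatim}
import os
try:
    import cPickle as pickle   # Python 2, as in the prototype
except ImportError:
    import pickle              # Python 3 fallback

CACHE_DIR = "svm_cache"        # illustrative location

def get_svm(cluster_a, cluster_b, train_fn):
    # Return a trained SVM for the pair of clusters, training it only
    # the first time this pair is encountered.
    key = "_".join(sorted([cluster_a.id, cluster_b.id]))
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)          # reuse the stored machine
    clf = train_fn(cluster_a, cluster_b)   # expensive: done only once
    if not os.path.isdir(CACHE_DIR):
        os.makedirs(CACHE_DIR)
    with open(path, "wb") as f:
        pickle.dump(clf, f)
    return clf
\end{verbatim}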
\begin{figure}[!htp] \centering \begin{tikzpicture}[font=\sffamily\sansmath] \begin{axis}[ width=.9\linewidth, xlabel=Training Time (s), ylabel=Points, major tick style={draw=none}, axis x line*=bottom, axis y line*=left, y=.5cm, ytick={1,2,3,4,5,6,7,8,9}, xtick={10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}, xticklabels={10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}, yticklabels={200, 300, 400, 500, 600, 700, 800, 900, 1000}, boxplot/every box/.style={fill=Tufte, draw=Tufte}, boxplot/every median/.style={draw, ultra thick}, ] \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 14.450056076\\ 14.3151428699\\ 15.0587539673\\ 14.6330120564\\ 13.1712338924\\ 13.5638680458\\ 13.8222289085\\ 14.326633215\\ 14.268102169\\ 13.6863639355\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 24.2474851608\\ 21.8814628124\\ 20.7104799747\\ 23.4123110771\\ 21.5171511173\\ 21.6167340279\\ 20.5072031021\\ 21.8328018188\\ 18.9486649036\\ 18.9871139526\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 31.860227108\\ 28.9508159161\\ 31.77709198\\ 31.860227108\\ 29.5555529594\\ 28.0266561508\\ 26.9642550945\\ 28.0694839954\\ 26.931540966\\ 28.8569948673\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 42.6774580479\\ 42.7213740349\\ 42.1484789848\\ 45.2905337811\\ 42.620041132\\ 46.0541510582\\ 42.7060930729\\ 42.0425620079\\ 39.8944818974\\ 38.2909250259\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 55.6774950027\\ 55.447537899\\ 51.9851491451\\ 47.7803220749\\ 47.7598872185\\ 47.7725110054\\ 52.6958119869\\ 57.3658189774\\ 47.0200750828\\ 45.3178730011\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 68.4127650261\\ 64.5384910107\\ 70.2506511211\\ 58.2789590359\\ 60.0601868629\\ 61.8797900677\\ 62.2854321003\\ 64.2296421528\\ 58.6308290958\\ 58.5854849815\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 76.3239531517\\ 74.2242810726\\ 70.2646820545\\ 71.9479129314\\ 69.1475520134\\ 78.8463850021\\ 78.1692609787\\ 72.8085451126\\ 71.4164860249\\ 72.7233030796\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 87.4063670635\\ 86.3149549961\\ 80.9244608879\\ 81.8638818264\\ 88.5054130554\\ 87.955684185\\ 92.3891918659\\ 89.4988970757\\ 84.9471900463\\ 85.7984118462\\ }; \addplot[boxplot] table[row sep=\\, y index=0] { data\\ 102.08713603\\ 105.802442074\\ 105.268303156\\ 103.374116182\\ 93.2224390507\\ 102.118638992\\ 109.901737928\\ 102.901645899\\ 97.8026940823\\ 92.4691579342\\ }; \end{axis} \end{tikzpicture} \caption{Cerberus classifier training time.} \label{fig:cerberus_training_time} \end{figure} % subsection training_speed (end) % section the_classifier (end) \section{Cerberus in the Wild} % (fold) \label{sec:cerberus_in_the_wild} \sectionstart{T}{his} is the main test to prove the effectiveness of \thesystem. We decided to run a batch experiment of one week and one day of deployment in the wild, using real data from the aforementioned dataset (see Section~\ref{sec:dataset}). During the first week \thesystem leveraged the ground truth generated in the \important{Bootstrap Phase} to classify those unseen domains that would share their IP address with the clusters in the ground truth. Those unseen domains that would not share the IP address with any of the clusters were considered ``suspicious'', as were not discarded by the \important{Filtering Phase} (i.e., they are likely-malicious), and stored in a database. 
Then, after one week had passed, the \textbf{Time Detective} ran the clustering routine on these ``suspicious'' domains: New clusters were found and added to the ground truth. The day after, \thesystem used that knowledge to successfully label unseen domains belonging to previously unknown threats, drastically augmenting the number of detected malicious domains.
\subsection{The Bootstrap} % (fold)
\label{sub:the_bootstrap_validation}
Before it starts classifying data, \thesystem can be bootstrapped: If this happens, the system can leverage the knowledge obtained to classify unseen domains. Otherwise \thesystem starts with no knowledge and builds its ground truth over time in an automatic fashion. We decided to bootstrap the system, in order to see how \thesystem behaves \emph{before} and \emph{after} it increases its knowledge, and to use the blacklist provided by \textsc{Exposure}~\cite{bilge2011exposure}, thus providing \thesystem with the same knowledge used to feed Phoenix~\cite{schiavoni2013}. Once \textbf{The Bootstrap} is completed, \thesystem possesses a list of eleven clusters of malicious domains, each cluster likely generated by a single DGA: Among others we find clusters that \citet{schiavoni2013} confirmed as referring to \texttt{Palevo} and \texttt{Conficker}. Two of the eleven clusters generated by \phoenix in the \important{Bootstrap Phase} are reported in Table~\ref{tab:phoenix_clusters}.
\begin{table}[!htp]
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{rp{2.8cm}}
\multicolumn{2}{l}{\textsc{Cluster f105c}} \\
\midrule
Threat: & Palevo \\
IPs: & \texttt{176.74.176.175} \newline \texttt{208.87.35.107} \newline \newline \\
Domains: & \texttt{cvq.com} \newline \texttt{epu.org} \newline \texttt{bwn.org} \\
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\begin{tabular}{rp{2.8cm}}
\multicolumn{2}{l}{\textsc{Cluster 0f468}} \\
\midrule
Threat: & Sality \\
IPs: & \texttt{217.119.57.22} \newline \texttt{91.215.158.57} \newline \texttt{178.162.164.24} \newline \texttt{94.103.151.195} \\
Domains: & \texttt{jhhfghf7.tk} \newline \texttt{faukiijjj25.tk} \newline \texttt{pvgvy.tk} \\
\end{tabular}
\end{minipage}
\caption{Two of the eleven clusters produced by Phoenix.}
\label{tab:phoenix_clusters}
\end{table}
% subsection the_bootstrap (end)
\subsection{Collecting Data} % (fold)
\label{sub:collecting_data}
The test started on February 7th, 2013. By the end of the week, February 14th, 2013, roughly 22,000,000 DNS replies had been analyzed, and 187 domains had been classified as malicious and labeled using the ground truth provided by \phoenix in the \textbf{Bootstrap Phase} (see Table~\ref{tab:classified}). We searched VirusTotal for the labeled domains and we were able to confirm that 167 domains belonged to the \texttt{Conficker} botnet, while the remainder belonged to other botnets or generic malware, including the \texttt{Flashback} botnet. Moreover, 3,576 domains considered ``suspicious'' by \thesystem (i.e., they were not filtered out by the \important{Filtering Phase}) were stored together with their IP addresses: We counted exactly 1,300 distinct IP addresses, which means that multiple ``suspicious'' domains resolved to the same IP.
\begin{table}[!htp]
\centering
\begin{tabular}{rp{2.8cm}p{2.8cm}}
\multicolumn{2}{l}{\textsc{Labeled 07e21}} & \\
\midrule
Threat: & Conficker & \\
Domains: & \texttt{hhdboqazof.biz} \newline \texttt{poxqmrfj.biz} \newline \texttt{hcsddszzzc.ws} & \texttt{tnoucgrje.biz} \newline \texttt{gwizoxej.biz} \newline \texttt{jnmuoiki.biz} \\
\end{tabular}
\caption{A sample of the malicious domains classified during the first week.}
\label{tab:classified}
\end{table}
% subsection collecting_data (end)
\subsection{Producing New Knowledge} % (fold)
\label{sub:producing_new_knowledge}
At the end of the week, \thesystem performed the clustering routine over the domains resolving to suspicious IP addresses. As described in Section~\ref{par:dbscan_clustering}, the clustering routine is performed only after grouping the IP addresses by Autonomous System. We report in Table~\ref{tab:as} the main ASs involved in the analysis.
\begin{table}[!htp]
\centering
\begin{tabular}{lp{7cm}c}
\toprule
\textsc{AS} & \textsc{Network Name} & \textsc{Country} \\
\midrule
15456 & INTERNETX-AS InterNetX GmbH & DE \\
22489 & Castle Access Inc & UK \\
47846 & Sedo GmbH & DE \\
53665 & BODIS-1 - Bodis, LLC & CN \\
\bottomrule
\end{tabular}
\caption{Autonomous Systems of the domains in clustering phase.}
\label{tab:as}
\end{table}
We applied the clustering routine, which yielded 47 clusters, of which the larger ones (counting more than 25 elements) are reported in Table~\ref{tab:clusters} along with the related threat.
\begin{table}[!htp]
\centering
\begin{tabular}{lcp{3cm}r}
\toprule
\textsc{Threat} & \textsc{AS} & \textsc{IPs} & \textsc{Size} \\
\midrule
Sality & 15456 & \texttt{62.116.181.25} & 26 \\
Palevo & 53665 & \texttt{199.59.243.118} & 40 \\
Jadtre* & 22489 & \texttt{69.43.161.180} \newline \texttt{69.43.161.174} & 173 \\
Jadtre** & 22489 & \texttt{69.43.161.180} & 37 \\
Jadtre*** & 22489 & \texttt{69.43.161.167} & 47 \\
Hiloti & 22489 & \texttt{69.43.161.167} & 24 \\
Palevo & 47846 & \texttt{82.98.86.171} \newline \texttt{82.98.86.176} \newline \texttt{82.98.86.175} \newline \texttt{82.98.86.167} \newline \texttt{82.98.86.168} \newline \texttt{82.98.86.165} & 142 \\
Jusabli & 30069 & \texttt{69.58.188.49} & 73 \\
Generic Trojan & 12306 & \texttt{82.98.86.169} \newline \texttt{82.98.86.162} \newline \texttt{82.98.86.178} \newline \texttt{82.98.86.163} & 57 \\
\bottomrule
\end{tabular}
\caption{Cerberus' new clusters (asterisks to match Table~\ref{tab:jadtre}).}
\label{tab:clusters}
\end{table}
This means that, taking as input only the passive DNS traffic, \thesystem was able to identify, in a completely automatic fashion, groups of IPs that are associated with malicious botnet activity (e.g., C\&C). This is new knowledge, which an investigator can use to find new botnet servers. There are three clusters, two of which share the IP address \texttt{69.43.161.180} and all three of which reside in the same AS, that are labeled with the same threat, \texttt{Jadtre}\footnote{\url{http://www.microsoft.com/security/portal/threat/encyclopedia/entry.aspx?Name=TrojanDownloader:Win32/Jadtre.A}}. The reason why they do not form one cluster, though they belong to the same threat, is that they were generated using three different DGAs, as is clear from Table~\ref{tab:jadtre}. This proves that \thesystem is capable of fine-grained detection of malicious activities, showing its capability of isolating not only different threats, but also different DGAs used by the same threat.
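As a sketch of this step, grouping the suspicious records by AS and clustering each group with a precomputed distance matrix could look as follows; the \texttt{ssk\_distances} helper and the \texttt{eps}/\texttt{min\_samples} values are placeholders, since the actual parameters are those discussed in Section~\ref{par:dbscan_clustering}.
\begin{verbatim}
from collections import defaultdict
from sklearn.cluster import DBSCAN

def cluster_suspicious(records, ssk_distances, eps=0.4, min_samples=4):
    # records: iterable of (domain, ip, asn) tuples for suspicious traffic.
    # ssk_distances(domains) is assumed to return the square matrix of
    # pairwise SSK distances; eps and min_samples are placeholder values.
    by_as = defaultdict(list)
    for domain, ip, asn in records:
        by_as[asn].append(domain)

    clusters = {}
    for asn, domains in by_as.items():
        if len(domains) < min_samples:
            continue
        dist = ssk_distances(domains)
        labels = DBSCAN(eps=eps, min_samples=min_samples,
                        metric="precomputed").fit_predict(dist)
        for dom, lab in zip(domains, labels):
            if lab != -1:            # -1 marks DBSCAN noise points
                clusters.setdefault((asn, lab), []).append(dom)
    return clusters
\end{verbatim}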
\begin{table} \centering
\begin{tabular}{lp{2.8cm}p{3.3cm}p{3.3cm}}
\toprule
\textsc{Cluster} & \textsc{IP} & \multicolumn{2}{l}{\textsc{Sample Domains}} \\ \midrule
Jadtre* & \texttt{69.43.161.180} \newline \texttt{69.43.161.174} & \texttt{379.ns4000wip.com} \newline \texttt{418.ns4000wip.com} \newline \texttt{285.ns4000wip.com} & \texttt{78.ns4000wip.com} \newline \texttt{272.ns4000wip.com} \newline \texttt{98.ns4000wip.com}\\
Jadtre** & \texttt{69.43.161.180} & \texttt{391.wap517.net} \newline \texttt{251.wap517.net} \newline \texttt{340.wap517.net} & \texttt{137.wap517.net} \newline \texttt{203.wap517.net} \newline \texttt{128.wap517.net} \\
Jadtre*** & \texttt{69.43.161.167} & \texttt{388.ns768.com} \newline \texttt{353.ns768.com} \newline \texttt{296.ns768.com} & \texttt{312.ns768.com} \newline \texttt{153.ns768.com} \newline \texttt{30.ns768.com} \\
\bottomrule
\end{tabular}
\caption{Sample domains of the Jadtre clusters.}
\label{tab:jadtre}
\end{table}

The new clusters were then added to the ground truth. \thesystem ran a \emph{similarity check} to see whether the new clusters should be merged with the old ones. Welch's test showed that there was not enough statistical evidence to consider clusters \emph{a} and \emph{b}, reported in Table~\ref{tab:merging}, dissimilar, thus \thesystem decided to merge them (a minimal sketch of this similarity check is given at the end of this section). Further investigations\footnote{\url{https://palevotracker.abuse.ch/}} confirmed that the IP addresses from both cluster \emph{a} and cluster \emph{b} belonged to \texttt{Palevo} C\&Cs. This means that \thesystem was able to understand that the domains of the two clusters, the first generated by \phoenix from the \textsc{Exposure} blacklist and the second discovered by \thesystem analyzing real passive DNS data, were produced by the same DGA. This discovery led to an enrichment of the ground truth, as the IP address \texttt{199.59.243.118} was added to the cluster, together with the domains. This means that \thesystem was able to successfully discover a \emph{migration} by leveraging only the linguistic features computed by the SSK.

\begin{table}[h!tp]
\begin{minipage}{.5\textwidth} \centering
\begin{tabular}{lp{2.5cm}}
\textsc{Cluster a} & \\ \midrule
IPs: & \verb+176.74.176.175+ \newline \verb+208.87.35.107+ \newline \newline \newline \newline \\
Sample Domains & \texttt{cvq.com} \newline \texttt{epu.org} \newline \texttt{bwn.org} \newline \texttt{lxx.net} \\
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\textwidth} \centering
\begin{tabular}{lp{2.5cm}}
\textsc{Cluster b} & \\ \midrule
IPs: & \verb+82.98.86.171+ \newline \verb+82.98.86.176+ \newline \verb+82.98.86.175+ \newline \verb+82.98.86.167+ \newline \verb+82.98.86.168+ \newline \verb+82.98.86.165+ \\
Sample Domains & \texttt{knw.info} \newline \texttt{rrg.info} \newline \texttt{nhy.org} \newline \texttt{ydt.info} \\
\end{tabular}
\end{minipage}
\caption{Clusters merged by \thesystem.}
\label{tab:merging}
\end{table}

After the new clusters were added to the ground truth, \thesystem started the \textbf{Detection Phase} again, leveraging the increased knowledge: the next day \thesystem classified 319 malicious domains, whereas during the whole previous week it had counted 187 malicious domains, on average 26 domains a day. Hence, \thesystem was able to increase its knowledge in a completely automatic and unsupervised fashion and to use this enhanced knowledge to drastically (roughly twelve times as much) augment the number of daily classified malicious domains.
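The merging decision mentioned above can be illustrated with a minimal sketch of the similarity check. It assumes that each cluster is summarised by a one-dimensional sample of linguistic feature values (e.g. string-kernel similarity scores); this is a simplification of what \thesystem actually compares, but it shows the role of Welch's test.

\begin{verbatim}
# Minimal sketch of the similarity check: Welch's t-test on two clusters,
# each summarised here by a 1-D sample of linguistic feature values
# (a simplification made only for this example).
from scipy.stats import ttest_ind

def should_merge(features_a, features_b, alpha=0.05):
    # Welch's t-test: unequal variances and unequal sample sizes allowed.
    _, p_value = ttest_ind(features_a, features_b, equal_var=False)
    # If there is not enough evidence of dissimilarity (p >= alpha),
    # the two clusters are merged.
    return p_value >= alpha
\end{verbatim}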
\subsection{Summary} % (fold)
\label{sub:inthewild_summary}
Firstly, \thesystem was bootstrapped using the \textsc{Exposure}~\cite{bilge2011exposure} blacklist, extracting eleven clusters of domains, each likely generated by a single DGA. Then, for one week, \thesystem analyzed a stream of passive DNS data collected in the wild by an ISC/SIE DNS monitor. During that week 187 malicious domains were detected and 1,300 IP addresses were labeled as ``suspicious''. At the end of the week the clustering routine produced new knowledge in the form of 47 new clusters, which were added to the ground truth. One of them was automatically merged with an existing cluster by \thesystem, and a check of the IP addresses on the Web confirmed that both clusters belonged to the \texttt{Palevo} botnet.
\begin{samepage}
Therefore \thesystem was able to:
\begin{enumerate}
\item perform on-line detection of known threats;
\item automatically detect new threats;
\item increase its knowledge;
\item use the new knowledge to classify previously unknown threats.
\end{enumerate}
\end{samepage}
We think that these results are quite encouraging and that they demonstrate the effectiveness of \thesystem. Obviously there is more that can be done and there are some difficulties to overcome. These matters shall be addressed in the next chapter.
% subsection inthewild_summary (end)
\chapter{General Evaluation}
This chapter looks at the consequences of getting a system evaluated against one of the sets of security evaluation criteria -- TCSEC, ITSEC and CTCPEC -- from the perspective of the developer, the procurer and the end user.

\section{Consequences for the developer}
Developers need to be aware of technical security issues, of protocols and standards specific to the system being built, in some cases of interpretations of the criteria, and of any security guidelines or requirements which are not specified in the criteria but have been identified by a client or other stakeholder in the development of the system.

All the assurance levels of the three security evaluation criteria require that the developer employ standard software engineering practices, such as (formal) specification of system requirements, design, implementation, testing, documentation, configuration management, and (formal) verification and validation. In addition to these general software engineering requirements, there are many requirements specifically relating to security, such as the development of the security policy model.

If the system is being evaluated against the TCSEC, the developer should be aware that the interpretations of the TCSEC, the ``Rainbow Series'', need to be consulted for the specific application type. Interpretations are not required for the ITSEC or the CTCPEC, as these criteria were developed to target a wider range of trusted systems. If the system is being evaluated against the ITSEC, the developer should be aware that functionality criteria are not specified in the standard and it is up to the developer to define the security functions of the system.

\section{Consequences for the development organisation}
The organisation developing a trusted system must be familiar with the assurance levels of the criteria and must notify the organisation conducting the evaluation of the assurance level at which the system is being targeted. This means that the evaluation will only attempt to determine whether the system provides that level of trust or assurance.

The evaluation body usually also places other requirements on the development organisation. For example, in the United States, if a system is being submitted to the Trusted Product Evaluation Program (TPEP)~\footnote{The TPEP is part of the National Security Agency (NSA), which is part of the US Department of Defense} for evaluation against the TCSEC, the organisation must provide evidence that the system has a legitimate market in the United States. Although evaluation organisations such as TPEP do not charge the development organisation a fee for an evaluation, there should be an awareness that developing a system which satisfies any of the assurance levels of criteria such as TCSEC, ITSEC and CTCPEC results in very high development costs, especially at the higher levels of assurance.

\section{Consequences for the end user}
A user or a purchaser of a secure system needs to be aware of a number of issues.
\begin{itemize}
    \item What are the security requirements?
    \item Is there familiarity with the security requirements of the evaluation criteria against which the product being considered has been evaluated?
    \item To which level of assurance in the relevant evaluation criteria do the user's security requirements correspond?
\end{itemize}
The user must also be aware of the secure installation, startup and operation of the system in order for the security requirements to be fulfilled.
In general, a system which has been successfully evaluated against one of the evaluation criteria provides the user with a high degree of confidence that the system delivers the level of assurance which it has been awarded.
The selection described in the previous section defines the analysis Signal Region (SR) for $W \rightarrow \ell\nu$ candidate events and the corresponding region (ZR) for $Z \rightarrow \ell\ell$ candidate events.
% and $ Z \rightarrow \tau\tau$ candidate events (ZRtt).
However, additional background processes contributing to the dataset need to be estimated, partly with data-driven techniques employing different event selections. Two categories of backgrounds can be defined: the electroweak (single-boson and diboson) and top backgrounds, obtained from the appropriate MC samples as described in Section~\ref{sec:bkg_EWK}, and the multijet (MJ) background, estimated from data in both the W and the Z channels, as discussed in Section~\ref{sec:bkg_mj}. The numbers of expected background events in both channels are summarised in Tables~\ref{tbl:SR_observed_candidates} and \ref{tbl:ZR_observed_candidates}. The values for the predicted cross sections of the signal and background samples
% and their estimated uncertainties
are given in Table~\ref{tbl:mc_samples_ewk} \cite{CrossSectionHighOrder,SMDC14xsecs,TtbarNNLO}. This section summarises the evaluation of the expected background.

\subsection{Electroweak and top backgrounds}
\label{sec:bkg_EWK}
The electroweak and top Monte Carlo samples listed in Table~\ref{tbl:mc_samples_ewk} are used to estimate the background in the analyses. Their contributions are normalised to the cross-sections shown in the same table.
% , while their uncertainties, also in \todo{Table 10}, \todo{are used to evaluate the systematic uncertainties on the electroweak and top backgrounds.}
Table~\ref{tbl:ewk_bkg_SR} shows the expected contributions of individual background processes in each measurement channel.

\begin{table}[h]
\begin{center}
\begin{tabular}{ c | c | c | c | c }
 & $W \rightarrow e\nu$ & $W \rightarrow \mu\nu$ & $Z \rightarrow ee$ & $Z \rightarrow \mu\mu$ \\
 & \% MC & \% MC & \% MC & \% MC \\ \hline
$W \rightarrow \tau\nu$ & 1.67 & 1.74 & 0.00 & 0.00 \\
$Z \rightarrow \tau\tau$ & 0.122 & 0.127 & 0.045 & 0.042 \\
Diboson & 0.11 & 0.1 & 0.106 & \\
single top and $t\bar{t}$ & 0.065 & 0.054 & 0.024 & 0.019 \\
$W \rightarrow e\nu$ & 96.64 & - & 0.033 & 0.00 \\
$W \rightarrow \mu\nu$ & - & 93.9 & 0.00 & 0.01 \\
$Z \rightarrow ee$ & 1.39 & - & 99.79 & 0.00 \\
$Z \rightarrow \mu\mu$ & - & 4.07 & 0.00 & 99.83 \\ \hline
\end{tabular}
\caption{ Electroweak background contributions estimated from simulation. Expectations are expressed as a percentage of the total simulated events coming from the sources listed in the table and passing the signal selection in each channel. Totals with uncertainties are given in Tables~\ref{tbl:SR_observed_candidates} and \ref{tbl:ZR_observed_candidates}. }%
\label{tbl:ewk_bkg_SR}
\end{center}
\end{table}

\textbf{$W \rightarrow e\nu$:} Electroweak backgrounds $W \rightarrow \tau\nu$, $Z \rightarrow ee$, and $Z \rightarrow \tau\tau$ are evaluated. Top backgrounds $t\bar{t}$, $Wt$ and single-top $t$-channel contributions are evaluated. Diboson backgrounds $WW$, $WZ$ and $ZZ$ are evaluated.

\textbf{$W \rightarrow \mu\nu$:} Electroweak backgrounds $W \rightarrow \tau\nu$, $Z \rightarrow \mu\mu$, and $Z \rightarrow \tau\tau$ are evaluated. Top backgrounds $t\bar{t}$, $Wt$ and single-top $t$-channel contributions are evaluated. Diboson backgrounds $WW$, $WZ$ and $ZZ$ are evaluated.

All other sources of background are negligible in comparison.
\subsection{$W\rightarrow\ell\nu$ multijet background estimate methodology}
\label{sec:bkg_mj}
The selection of an isolated lepton, high $E_T^{miss}$, and high $m_T$ effectively rejects most of the multijet (MJ) QCD production. However, contamination from this background process remains because of its very high production cross-section and the small probability that jets mimic the isolated-lepton selection, producing fake $W$-boson-like signatures with $E_T^{miss}$ generated through energy mismeasurement in the event. The MJ background composition may also be very diverse, depending on the $p_T$ range of interest and on the lepton type: it may be composed of heavy-quark leptonic decays, material conversions, or hadrons. Because of the difficulties in precisely simulating all these effects, data-driven techniques are often used for the MJ estimate in the $W \rightarrow e\nu$ and $W \rightarrow \mu\nu$ channels.

A generic recipe for a data-driven estimate is based on the selection of an MJ fake-enriched data sample obtained by relaxing or inverting one of the isolated-lepton identification cuts (\textit{MJ-template selection}). The newly selected MJ template is then normalized using data in a Control Region (CR) selected to have a sizable MJ fraction. The normalization can be extracted by fitting a kinematic distribution able to separate the signal from the MJ background, where the MJ shape is derived from the MJ template and the signal shape from MC. The normalization scale factor extracted from the CR is then applied to the number of MJ-template events passing the Signal Region (SR) selection.

The weak points of the MJ extraction with the described method are:
\begin{itemize}
\item Arbitrariness of the MJ-template lepton selection
\item Arbitrariness of the choice of the discriminant variable to fit
\item Biases in the composition and kinematics of the MJ template with respect to events containing non-prompt leptons or fakes passing the signal selection
\item Subtraction from the MJ template of contamination coming from prompt leptons produced by the $W$ signal (of which we should measure the cross-section) or by other electroweak processes.
\end{itemize}

% ToDo : ones extrapolation will work, use it
\input{chapters/background_mj_extrapolation}
\input{chapters/background_mj_ff}
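To make the generic recipe above more concrete, the schematic fragment below fits the MJ scale factor in the CR and applies it to the MJ-template yield in the SR. It is not the analysis code: the binned least-squares fit, the crude Poisson uncertainties and the variable names (\texttt{data\_cr}, \texttt{mc\_cr}, \texttt{mj\_cr}, \texttt{mj\_sr\_yield}) are assumptions made only for this sketch.

\begin{verbatim}
# Schematic illustration of the CR template fit, not the analysis code.
# data_cr, mc_cr, mj_cr: binned histograms (numpy arrays) of the chosen
# discriminant in the control region; mj_sr_yield: number of MJ-template
# events passing the signal-region selection.  All names are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def fit_mj_scale(data_cr, mc_cr, mj_cr):
    # Model: data = s_mc * (signal + EW MC) + s_mj * (MJ template), per bin.
    def model(_x, s_mc, s_mj):
        return s_mc * mc_cr + s_mj * mj_cr
    x = np.arange(len(data_cr))
    sigma = np.sqrt(np.clip(data_cr, 1, None))   # crude Poisson errors
    (s_mc, s_mj), _ = curve_fit(model, x, data_cr, p0=[1.0, 1.0], sigma=sigma)
    return s_mc, s_mj

def mj_estimate_in_sr(data_cr, mc_cr, mj_cr, mj_sr_yield):
    _, s_mj = fit_mj_scale(data_cr, mc_cr, mj_cr)
    # The scale factor extracted in the CR is applied to the SR template yield.
    return s_mj * mj_sr_yield
\end{verbatim}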
% to update this, sometimes the .pdf gets locked, so I need to delete the old version. Then, if publications change, % go to the terminal, cd to the correct folder, and run latex MackeviciusCV; bibtex MackeviciusCV, then two more latex ...s %% start of file `template_en.tex'. %% Copyright 2007 Xavier Danaux ([email protected]). % % This work may be distributed and/or modified under the % conditions of the LaTeX Project Public License version 1.3c, % available at http://www.latex-project.org/lppl/. \documentclass[11pt,a4paper]{moderncv} \newcommand{\bulletpt}{$\vcenter{\hbox{\tiny$\bullet$}}$ } % smaller bullet points % moderncv themes %\moderncvtheme[red]{casual} % optional argument are 'blue' (default), 'orange', 'red', 'green', 'grey' and 'roman' (for roman fonts, instead of sans serif fonts) \moderncvtheme[grey]{classic} % idem % character encoding \usepackage[utf8]{inputenc} % replace by the encoding you are using % adjust the page margins \usepackage[scale=0.8]{geometry} \recomputelengths % required when changes are made to page layout lengths % personal data \firstname{Emily} \familyname{Mackevicius} \title{Curriculum Vitae} % optional, remove the line if not wanted \address{77 Mass Ave, 46-5145}{Cambridge MA 02139} % optional, remove the line if not wanted %\mobile{mobile (optional)} % optional, remove the line if not wanted \phone{617.460.3629} % optional, remove the line if not wanted %\fax{fax (optional)} % optional, remove the line if not wanted \email{\mbox{[email protected]}} % optional, remove the line if not wanted %\extrainfo{additional information (optional)} % optional, remove the line if not wanted %\photo[64pt]{picture} % '64pt' is the height the picture must be resized to and 'picture' is the name of the picture file; optional, remove the line if not wanted %\nopagenumbers{} % uncomment to suppress automatic page numbering for CVs longer than one page %---------------------------------------------------------------ma------------------- % content %---------------------------------------------------------------------------------- \begin{document} \maketitle \section{Education} \cventry{2011--2018}{PhD, Brain and Cognitive Sciences}{Massachusetts Institute of Technology, Cambridge MA}{}{}{}{} %arguments 3 to 6 are optional \cventry{2012}{Methods in Computational Neuroscience}{Marine Biological Laboratory, Woods Hole, MA}{}{}{} \cventry{2007--2011}{BS, Mathematics}{University of Chicago, Chicago IL}{}{}{}{} %arguments 3 to 6 are optional \cventry{2003--2007}{High School}{Belmont High School, Belmont, MA}{}{} \ \section{Employment} \cventry{2018-present}{Postdoctoral Associate}{}{MIT Department of Brain and Cognitive Sciences, Fee Lab}{}{} \cventry{2011-2018}{PhD Student}{}{MIT Department of Brain and Cognitive Sciences, Fee Lab}{}{} \cventry{2010-2011}{Undergraduate Researcher}{}{Bensmaia Somatosensory Research Lab, Chicago IL}{}{\bulletpt Analyzed neuronal somatosensory data with MATLAB; built precision equipment in the metal shop; trained primates} % arguments 3 to 6 are optional %\cventry{2011}{Chicago Careers in Higher Education}{}{Member; CCIHE Travel Grant Recipient (to present at COSYNE conference); CCIHE Undergrad Research Symposium Speaker}{}{} \cventry{2009}{VIGRE Summer REU}{}{University of Chicago Mathematics Department}{}{\bulletpt Wrote a research paper on configuration spaces of planar linkages; Attended Topology, Probability, Group Theory, Billiards, and Discrete Math classes } \cventry{2008-2010}{Personal Assistant to Paul Sally}{}{University of Chicago 
Mathematics Department}{}{\bulletpt Helped the Director of Undergraduate Mathematics with many tasks related to research, administration, and teaching}
\cventry{2007-2008}{Gearup Classroom Support}{}{University of Chicago Neighborhood Schools Program}{}{\bulletpt Assisted 8th grade Chicago Public Schools classroom}

% Publications from a BibTeX file
\nocite{*}
\bibliographystyle{unsrt}
\bibliography{publications} % 'publications' is the name of a BibTeX file

\ \section{Invited and Selected Talks}
\cventry{2018}{Computational Neuroscience Tutorial, MIT Brain and Cognitive Sciences Department}{}{\href{https://www.youtube.com/playlist?list=PLyGKBDfnk-iAU7N6dYVy7HhK2aLjLSPKM}{https://www.youtube.com/playlist?list=PLyGKBDfnk-iAU7N6dYVy7HhK2aLjLSPKM}}{}{}{}{}
\cventry{2018}{COSYNE (Computational and Systems Neuroscience) conference}{\href{https://youtu.be/XyWtCtZ_m-8?list=PL9YzmV9joj3FNsAV2S_cKxY8Ik_-YlQfu}{http://www.cosyne.org/c/index.php?title=Cosyne\_18}}{}{}{}{}
\cventry{2017}{Janelia J-Theory seminar}{}{}{}{}{}
\cventry{2017}{Stanford Neuroscience Invited Graduate Student Talk Series}{}{}{}{}{}
\cventry{2017}{MIT Brain and Cognitive Sciences Department Retreat}{}{}{}{}
\cventry{2016 \& 2017}{Quantitative Methods Workshop, MIT Biology Department and Center for Minds Brains and Machines}{}{}{}{}
\cventry{2016 \& 2017}{MIT Brain and Cognitive Sciences Department Interview Day Talk}{}{}{}{}
\cventry{2015}{Integrative Neuronal Systems Conference, MIT Brain and Cognitive Sciences Department}{}{}{}{}
\cventry{2014}{Center for Minds Brains and Machines Summer School at Woods Hole}{\href{http://cbmm.mit.edu/video/emily-mackevicius-learning-computational-neuroscience-perspective}{http://cbmm.mit.edu/video/emily-mackevicius-learning-computational-neuroscience-perspective}}{}{}{}

\section{Fellowships and Awards}
\cventry{2018}{COSYNE Presenters Travel Grant}{Selected to give a talk and awarded a travel grant based on the high reviewer ranking of my abstract}{}{}{}{}
\cventry{2015-present}{Computational Neuroscience Tutorial Series, awarded department funding for filming and admin support}{\href{https://stellar.mit.edu/S/project/bcs-comp-tut/index.html}{https://stellar.mit.edu/S/project/bcs-comp-tut/index.html}}{}{}{}{}
\cventry{2013-2016}{National Defense Science and Engineering Graduate Fellowship (NDSEG)}{Three year graduate fellowship from the Department of Defense covering tuition and stipend}{}{}{}{}
\cventry{2015}{Angus MacDonald Award for Excellence in Undergraduate Teaching}{Awarded by MIT Brain and Cognitive Sciences Department for my work TAing a new undergraduate course in Computational Neuroscience}{}{}{}{}
\cventry{2015}{MIT Graduate Women of Excellence Award}{One of roughly 50 awardees of more than 200 nominees. Award is meant to ``honor graduate women who exemplify leadership and outstanding accomplishment''}{}{}{}{}
\cventry{2012}{Scholarship for Methods in Computational Neuroscience course}{Marine Biology Lab}{Woods Hole, MA}{}{}{}{}
\cventry{2011-2012}{Henry E.
Singleton (1940) Presidential Fellowship}{MIT fellowship for first year graduate students}{}{}{}{}{} \cventry{2010}{Computational Neuroscience Summer Researcher}{NIH-sponsored summer research experience for undergraduates}{}{}{}{}{}{} \cventry{2009}{Math Summer Undergraduate Research Fellowship}{University of Chicago}{}{}{}{}{}{} \cventry{2007-2011}{University Scholarship}{merit scholarship for entering students}{University of Chicago}{}{}{}{} \section{Teaching and Mentorship} \cventry{2014-present}{Member, Education Committee}{}{MIT Brain and Cognitive Sciences Department}{}{\bulletpt Advise committee on graduate and undergraduate curriculum}{} \cventry{2013-2017}{Founder, Computational Neuroscience Tutorial Series}{}{MIT Brain and Cognitive Sciences and Center for Minds Brains and Machines}{}{\bulletpt Chose topics, invited speakers, created course website with problem sets, references, slides, and videos: \href{https://stellar.mit.edu/S/project/bcs-comp-tut/index.html}{https://stellar.mit.edu/S/project/bcs-comp-tut/index.html}}{} \cventry{2013-2017}{Teaching Assistant, Methods in Computational Neuroscience}{}{Woods Hole Marine Biology Laboratory}{}{\bulletpt Made tutorials and problem sets, answered student questions, proposed novel projects, and advised students on projects}{} \cventry{2016-2017}{Instructor, Quantitative Methods Workshop}{}{MIT Biology Department and Center for Minds Brains and Machines}{}{\bulletpt Designed and taught tutorials at intensive quantitative workshop for undergrads from diverse backgrounds}{}{} \cventry{2013-2017}{Mentor and PAL, MIT Summer Research Program}{}{MIT Brain and Cognitive Sciences and Center for Minds Brains and Machines}{}{\bulletpt Mentored summer student and served as informal PAL to several students in diversity research program}{} \cventry{2014-2015}{Teaching Assistant}{}{MIT Brain and Cognitive Sciences Department}{}{\bulletpt 9.40 (Introduction to Computational Neuroscience) TA: designed problem set questions, helped plan lectures and course curriculum, held office hours, answered student questions.}{} \cventry{2014}{Teaching Assistant, Brains, Minds and Machines Summer Course}{}{Woods Hole Marine Biology Laboratory}{}{\bulletpt Made MATLAB for neuroscience tutorial, gave neuroscience lecture, answered student questions, advised students on projects.}{} \cventry{2014}{Conference organizer}{}{Graduate Women at MIT (GWAMIT)}{}{\bulletpt Organized a mentorship event for the 2014 Spring Empowerment Conference, co-chair of the 2014 Fall Leadership Conference}{} \cventry{2011-2014}{Video Maker}{}{MIT+K12 Videos}{}{\bulletpt Made outreach videos on science topics for the MIT+K12 Videos project, and Khan Academy. 
Designated as a `high quality veteran video maker'.}{}
\cventry{2012}{Teaching Assistant}{}{MIT Brain and Cognitive Sciences Department}{}{\bulletpt 8.261/9.29 (Introduction to Computational Neuroscience) TA: held office hours and review sessions, and graded assignments}{}
\cventry{2011}{Teaching Assistant}{}{University of Chicago Biology Department}{}{\bulletpt BIOS 20244 (Biophysics and Chemical Biology) TA: advised and assessed student presentations}{}
\cventry{2009-2010}{YSP Group Leader}{}{University of Chicago Mathematics Department}{}{\bulletpt Led a discussion group of gifted $5^{th} - 9^{th}$ graders covering topics related to the mathematics of encryption}
\cventry{2008-2010}{VIGRE Course Assistant}{}{University of Chicago Mathematics Department}{}{\bulletpt MATH 151,2,300 (Calculus) TA: held office hours, graded papers, assisted problem sessions}
\cventry{2009}{VIGRE REU}{}{University of Chicago Mathematics Department}{}{\bulletpt Led a discussion group of $11^{th}$ and $12^{th}$ graders in Knot Theory and Applied Probability}
\cventry{2007-2008}{Gearup Classroom Support}{}{University of Chicago Neighborhood Schools Program}{}{\bulletpt Assisted 8th grade Chicago Public Schools classroom}
%\cventry{1997-present}{Cellist}{}{Chamber music including: MIT Chamber Music Society, University of Chicago Chamber Music Program and Classical Improvisation Group, Apple Hill Center for Chamber Music Summer Festivals}{}{}
%\cventry{2007-2011}{Viola da Gamba Player}{}{University of Chicago Early Music Ensemble}{}{}
%\cventry{2007-2010}{Assistant Principal Cellist (2007-2008)}{}{University Symphony Orchestra}{}{}

\section{Skills and hobbies}
\cvcomputer{Computer}{MATLAB, Python, Slurm, LaTeX, Mac OS X, Microsoft Office, CAD, Eagle}{Fabrication}{precision lathe, precision milling machine, band saw, drill press, laser cutter, computer-controlled machining, electronics, etc.}
\cvcomputer{Cello}{MIT Chamber Music Society and Gilbert and Sullivan Pit Orchestra, University of Chicago Symphony Orchestra, Early Music Ensemble, Classical Improvisation Group}{Other}{Pottery, hiking, slackline, climbing}

\closesection{} % needed to renewcommands
\renewcommand{\listitemsymbol}{-} % change the symbol for lists

\section{References}
References available upon request
%\cventry{ 617.324.0173\\ \scriptsize{[email protected]}}{Michale Fee}{Associate Professor}{}{}{Massachusetts Institute of Technology}{}{}{}
%\cventry{ 617.324.3085\\ \scriptsize{[email protected]}}{Ed Boyden}{Associate Professor}{}{}{Massachusetts Institute of Technology}{}{}{}
%\cventry{773.834.5203\\ \scriptsize{[email protected]}}{Sliman Bensmaia}{Associate Professor}{}{}{University of Chicago}{}{}{}

\end{document}

%% end of file `template_en.tex'.
\chapter{Specific requirements}
\section{External Interface Requirements}
The application shows its best potential when running on a mobile device, for instance a smartphone or a tablet. This makes it possible to extend the features and the automatic tasks of the application thanks to the built-in device functionalities. However, a computer client version of the application can be installed, too.

\subsection{User interfaces}
The user can interact with the application through several graphical interfaces:
\begin{enumerate}
\item \textbf{Registration/login interface}: allows the user to insert credentials in order to register or log into the system;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/login".jpg}
\caption{Registration/login interface}
\end{center} \end{figure}
\item \textbf{User account interface}: allows the user to specify his characteristics;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/userprofile".jpg}
\caption{User account interface}
\end{center} \end{figure}
\item \textbf{Home interface}: shows the currently running schedule and displays some navigation links to other interfaces;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/home".jpg}
\caption{Home interface}
\end{center} \end{figure}
\item \textbf{Appointment CRUD interface}: allows creating, showing and editing appointment parameters and related constraints;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/appointment".jpg}
\caption{Appointment CRUD interface}
\end{center} \end{figure}
\item \textbf{Appointments list interface}: provides a list of all inserted appointments, with the possibility to filter between non-scheduled/scheduled ones (includes the possibility to delete an item of the list);
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/appointments".jpg}
\caption{Appointments list interface}
\end{center} \end{figure}
\item \textbf{Schedules list interface}: displays a list of the created schedules;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/schedules".jpg}
\caption{Schedules list interface}
\end{center} \end{figure}
\item \textbf{Schedule interface}: allows the user to set parameters, constraints and optimization criteria, and to request the creation of a schedule for a given date;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/schedule".jpg}
\caption{Schedule interface}
\end{center} \end{figure}
\item \textbf{Schedules result interface}: shows the computation of the requested schedules for a given date and asks the user to select one, then waits for his confirmation;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/scheduleresults".jpg}
\caption{Schedules result interface}
\end{center} \end{figure}
\item \textbf{Schedule progress interface}: allows the user to keep track of the completion percentage and shows on a map the directions to follow in order to arrive at the next appointment;
\begin{figure}[H] \begin{center}
\includegraphics[width=250pt, keepaspectratio]{"images/interfaces/schedule progress".jpg}
\caption{Schedule progress interface}
\end{center} \end{figure}
\item \textbf{Tickets/rides reservation interface}: allows the user to buy tickets for public travel means and/or reserve a ride for the shared travel means;
%\item \textbf{Appointments history interface}: shows a list of archived appointments;
\end{enumerate}
%\subsubsection{Notes on User Interfaces}
%The interfaces displayed before have been created through the IDE Xcode. In this way, if the application will be implemented, the GUI will look like exactly as shown above.

\subsection{Hardware interfaces}
Hardware interfaces are physical links across which two or more separate components of a system exchange information. A hardware interface is described by the mechanical and electrical signals at the interface and by the protocol for sequencing them. There are no interesting hardware interfaces in our scope.
%Our system relies on the following hardware interfaces:
%\begin{itemize}
%\item Mobile device:
%\item Server: the subsystem is based on the client-server paradigm.
%\item Travel means: gps on taxi and shared mean
%\end{itemize}

\subsection{Software interfaces}
Software interfaces are logical links across which two or more separate applications running on a system exchange information. The most relevant software interfaces in our system are APIs. APIs are sets of subroutine definitions, protocols and clearly defined methods of communication, allowing data exchange and service requests. There are several kinds of APIs:
\begin{itemize}
\item \textbf{Operating System APIs}: specify the interface between applications and the OS, giving access to low-level routine calls (for instance, to communicate with memory or with an internal device);
\item \textbf{Remote APIs}: DBMSs expose a set of standards that the API user can adopt in order to manage the database data. SQL is the standard language for storing, manipulating and retrieving data in this context;
\item \textbf{Web APIs}: information can be exchanged through the internet by encapsulating it in HTTP requests/responses. Weather forecast, travel services and mapping systems offer this type of API.
\end{itemize}

\subsection{Communications interfaces}
Communication interfaces allow two different architectures of the system to exchange information through a communication channel. These non-homogeneous components of the system can communicate thanks to the following interfaces and protocols:
\begin{itemize}
\item \textbf{Cellular connectivity}: mobile devices can connect to the internet thanks to the LTE standard;
\item \textbf{GPS}: the device can retrieve its position coordinates through the NMEA protocol;
\item \textbf{QRCode}: associates a matrix of bits to a URL. QRCodes are present in most of the shared means, simplifying their booking.
%identify the nearby transportation
\end{itemize}

\section{Functional requirements}
\subsection{Scenarios}
Here are some scenarios that describe the usage of the system.

\subsubsection{Scenario 1} \label{scenario:1}
Luana has some problems in scheduling her daily appointments; in fact, she is always late. One of her friends told her that a new application, Travlendar+, has been released and that it can be useful for scheduling appointments. Luana decides to download it and then registers herself to the system by submitting her e-mail and a password. After the e-mail confirmation she inserts her user parameters into the system.

\subsubsection{Scenario 2} \label{scenario:2}
Giovanni will start the fourth year of his Master's degree. Surfing the internet, he finds out that his lesson schedule for the first semester has been published. Giovanni decides to fill in the application with his new appointments related to lesson attendance.
In fact, he knows where to go, at which time and on which day, and for what amount of time. Since he knows that these events are going to happen for 3 months, he sets them as recurrent.

\subsubsection{Scenario 3} \label{scenario:3}
Edoardo wants to start training, but he doesn't know which are the best hours in which he can run according to his appointments; he knows only that he can run between 5 and 7 pm, for 45 minutes. He can insert this appointment in the application without specifying the exact starting hour, and the system will schedule it at best.

\subsubsection{Scenario 4} \label{scenario:4}
Federico has scheduled his appointments, but at lunch time his son called him because he needed a ride back home. Federico decided to help his son, so he brought him home. Now the currently running schedule is no longer valid, so he requests that the system reschedule his appointments according to his position.

\subsubsection{Scenario 5} \label{scenario:5}
John is ready to start his daily tasks; in fact, he has already scheduled his appointments through the Travlendar+ application. 15 minutes before the schedule starting time he receives a notification from the application saying that there is a possible rearrangement of his appointments, using shared travel means, that can improve the optimization criteria he selected. So he decides to accept this new schedule.

\subsection{Use cases}
\subsubsection{User registration} \label{usecase:User registration}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} User registration \\ \hline
\textbf{Actors:} External User, external e-mail service \\ \hline
\textbf{Goals:} \goalref{goal:G1} \\ \hline
\textbf{Input Condition:} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to register to the system, so he runs the application;
\item The system displays the login/registration page;
\item The user fills in the form (the form is present on the first page that the application displays after startup);
\item The user submits the filled form to the system;
\item The system sends a confirmation e-mail to the user;
\item The system displays a message informing the user that he will receive a confirmation e-mail;
\item The user confirms the registration by clicking a link in the received e-mail;
\item A confirmation message is sent to the application;
\item The user is redirected to his profile page inside the application;
\item The user specifies his parameters.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The registration is confirmed to the system; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item The e-mail given by the user is fake;
\item The user makes a typo during the insertion of his e-mail.
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Events 1 through 3 are granted by \reqref{req:R1};
\item Events 4 through 9 are granted by \reqref{req:R2};
\item Event 10 is granted by \reqref{req:R13}.
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/RegistrationSequenceDiagram".png}
\caption{Registration sequence diagram}
\label{img:seqDiagrAppEditing00}
\end{center} \end{figure}

\subsubsection{User log-in}
\begin{tabular}{|p{14cm}|}
\hline
\textbf{Name:} User log-in \\ \hline
\textbf{Actors:} Registered User \\ \hline
\textbf{Goals:} \goalref{goal:G2}\\ \hline
\textbf{Input Condition:} The user is registered to the system \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user needs to log in to the application, so he runs it;
\item The system provides to the user a form to fill in;
\item The user fills in the form with his e-mail and his password (as said in \ref{subsect:usermodel});
\item The user submits the form to the system;
\item The system checks the user identity;
\item The system synchronizes the data;
\item The system provides to the user the main application page.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user is logged in to the system. \\ \hline
\textbf{Exceptions:} The user submits the form after having filled it in with a wrong e-mail or password. \\ \hline
\textbf{Mapping on requirements:}
\begin{itemize}
\item Events 3 through 5 are granted by \reqref{req:R5};
\item Event 6 is granted by \reqref{req:R30};
\item Event 7 is granted by \reqref{req:R6};
\end{itemize} \\ \hline
\end{tabular}
\includegraphics[width=400pt, keepaspectratio]{"images/LogInSequenceDiagram".png}
%\caption{Log-in sequence diagram}

\subsubsection{Recover credentials}\label{usecase:recovercredentials}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Recover credentials \\ \hline
\textbf{Actors:} Registered User, External Email Service \\ \hline
\textbf{Goals:} \goalref{goal:G3} \\ \hline
\textbf{Input Condition:} The user is registered to the system \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to recover his password;
\item The user requests to recover his password;
\item The system provides to the user the recovery form with an e-mail field;
\item The user fills in the field of the form;
\item The user submits the form to the system;
\item The system sends the e-mail to the user with his password.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user has recovered his password; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item The form is left blank;
\item An invalid address is given;
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Actions 2 through 5 are granted by requirement \reqref{req:R7}
\item Action 6 is granted by requirement \reqref{req:R6}
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=450pt, keepaspectratio]{"images/PasswordRecoverySequenceDiagram".png}
\caption{Password recovery sequence diagram}
\label{img:seqDiagrPasswordRecovery}
\end{center} \end{figure}

\subsubsection{Appointment creation} \label{usecase:appcreation}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Appointment creation \\ \hline
\textbf{Actors:} Logged User \\ \hline
\textbf{Goals:} \goalref{goal:G4} \\ \hline
\textbf{Input Condition:}
\begin{itemize}
\item The user is registered to the system
\item The user is logged into the system
\end{itemize} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to add a new appointment to his schedule;
\item The user requests the appointments page;
\item The system provides the appointments page;
\item The user requests the creation of a new appointment to the application;
\item The system provides to the user a form to fill in;
\item The user fills in the form with the parameters (specified in \ref{subsect:appointmentmodel}) and constraints (specified in \ref{subsubsect:constronappoint}) of the new appointment;
\item The user submits the form to the system;
\item The system allocates the new appointment as Unscheduled (referring to the statechart in Figure~\ref{fig:stchartApp});
\item The system sends a confirmation to the user.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user has created a new appointment; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item Some fields of the form referring to parameters are left blank.
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Events 4 through 7 are granted by \reqref{req:R8}
\item Event 8 is granted by \reqref{req:R9}
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/sequenceDiagramAppointmentCreation".png}
\caption{Appointment creation sequence diagram}
\label{img:seqDiagrAppCreation}
\end{center} \end{figure}

\subsubsection{Appointment editing}\label{usecase:appediting}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Appointment editing \\ \hline
\textbf{Actors:} Logged User \\ \hline
\textbf{Goals:} \goalref{goal:G5}\\ \hline
\textbf{Input Condition:} The user is logged in to the system \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to modify an appointment of his schedule;
\item The user selects the appointment to modify;
\item The system provides to the user the appointment form with all the parameters and constraints previously specified by the user;
\item The user edits the fields of the form;
\item The user submits the form to the system;
\item The system sets the appointment as Unscheduled with the new parameters (referring to the statechart in Figure~\ref{fig:stchartApp});
\item The system sends a confirmation to the user.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user has modified an appointment; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item Some fields of the form referring to parameters are left blank.
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Events 3 through 5 are granted by \reqref{req:R10}
\item Event 6 is granted by \reqref{req:R11}
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/seqDiagramAppointmentEditing".png}
\caption{Appointment editing sequence diagram}
\label{img:seqDiagrAppEditing00}
\end{center} \end{figure}

\subsubsection{Schedule appointments}\label{usecase:scheduleappointments}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Schedule appointments \\ \hline
\textbf{Actors:} Logged User, External API \\ \hline
\textbf{Goals:} \goalref{goal:G6} \\ \hline
\textbf{Input Condition:} The user is logged in to the system \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to schedule his appointments;
\item The user requests the creation of a new schedule for the current day;
\item The system provides to the user the schedule form with all the parameters, optimization criteria and constraints;
\item The user fills in the fields of the form;
\item The user submits the form to the system;
\item The system retrieves information from external APIs about travel options and related travel option data, weather forecast and strike days;
\item The system retrieves the Travel Option Data of the newly created Schedule;
\item The system stores the Schedule, together with its travel option data, as Saved (\ref{fig:stchartApp}) with the appointments selected by the user.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user has created a valid schedule of his appointments; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item Some fields of the form referring to schedule variables and optimization criteria are left blank. The parameters of schedule constraints can also be left blank, since they will assume default values;
\item It is not possible to arrange the appointments into a Valid Schedule, so the schedule is Discarded (\ref{fig:stchartApp}).
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Events 3 through 5 are granted by \reqref{req:R12} through \reqref{req:R14};
\item Events 6 and 7 are granted by \reqref{req:R15};
\item Event 8 is granted by \reqref{req:R16} and \reqref{req:R17}.
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=450pt, keepaspectratio]{"images/SequenceDiagramSchedule".png}
\caption{Schedule appointments sequence diagram}
\label{img:seqDiagrAppEditing00}
\end{center} \end{figure}

\subsubsection{Schedule selection}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Schedule selection \\ \hline
\textbf{Actors:} Logged User \\ \hline
\textbf{Goals:} \goalref{goal:G7} \\ \hline
\textbf{Input Condition:}
\begin{itemize}
\item The user is registered to the system;
\item The user is logged in to the system.
\end{itemize} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user wants to compare multiple schedules;
\item The user requests the schedules page;
\item The system provides the schedules page;
\item The user selects a schedule to be run;
\item The system displays the main page with the schedule results (\ref{def:schedulingResult}).
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user selects a schedule to be run \\ \hline
\textbf{Exceptions:} \\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item All events are granted by requirement \reqref{req:R18}
\end{itemize} \\ \hline
\end{longtable}
\label{usecase:ScheduleSelection}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/ScheduleSelectionSequenceDiagram".png}
\caption{Schedule selection sequence diagram}
\label{img:ScheduleSelection}
\end{center} \end{figure}

\subsubsection{Booking phase} \label{usecase:Booking Phase}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Booking phase \\ \hline
\textbf{Actors:} Logged User, External APIs \\ \hline
\textbf{Goals:} \goalref{goal:G8} \\ \hline
\textbf{Input Condition:}
\begin{itemize}
\item The user must be logged in to the system;
\item The user must have selected a schedule to be run;
\item The user must have linked his external accounts to the system;
\item The user would like to buy the tickets for the travel means involved in the running schedule.
\end{itemize} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The system, after the user has selected a schedule, asks the user whether he wants to buy the tickets for the running schedule;
\item The user confirms his intention to the system;
\item The system performs a call to the travel means APIs to buy the tickets;
\item The APIs send back a confirmation message of the purchase;
\item The system sends a confirmation message to the user.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user receives the confirmation message; \\ \hline
\textbf{Exceptions:}
\begin{enumerate}
\item The user doesn't have enough money on his card to complete the transaction;
\item There are no free seats in one of the selected travel means.
\end{enumerate} \\ \hline
\textbf{Mapping on Requirements:}
%\begin{itemize}
Events 3 through 5 are granted by \reqref{req:R19}; \\ \hline
%\end{itemize}
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/BookingPhaseSequenceDiagram".png}
\caption{Booking phase sequence diagram}
\label{img:seqDiagrAppEditing00}
\end{center} \end{figure}

\subsubsection{Dynamic directions} \label{usecase:Dynamic Directions}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Dynamic directions\\ \hline
\textbf{Actors:} Logged User, External APIs, GPS \\ \hline
\textbf{Goals:} \goalref{goal:G9} \\ \hline
\textbf{Input Condition:}
\begin{itemize}
\item The user must be logged in to the system;
\item The user must have a running schedule.
\end{itemize} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The user requests from the system the directions for the travel;
\item The system retrieves the user position from the GPS;
\item The system retrieves from external APIs the directions to give to the user, based on his position;
\item The system displays to the user the updated map and the directions that he must follow in order to arrive at the next appointment;
\item The user doesn't need more directions, so he closes the dynamic map.
\end{enumerate} \\ \hline
\textbf{Output Condition:} The user is satisfied with the information gathered so far, so he decides to close the dynamic map; \\ \hline
%\textbf{Exceptions:}
%
%\begin{enumerate}
%\item
%\end{enumerate}
\\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Event 2 is granted by \reqref{req:R20}
\item Event 3 is granted by \reqref{req:R21}
\item Event 4 is granted by \reqref{req:R22}
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/DynamicDirectionsSequenceDiagram".png}
\caption{Dynamic directions sequence diagram}
\end{center} \end{figure}
In this sequence diagram, the loop of actions described in the rectangle continues until the dynamic map is closed, as shown in the activity diagram below.
\begin{figure}[H] \begin{center}
\includegraphics[width=150pt, keepaspectratio]{"images/activityDiagramMaps".png}
\caption{Dynamic directions activity diagram}
\end{center} \end{figure}

\subsubsection{Notify Shared Means}\label{usecase:Notify Shared Means}
\begin{longtable}{|p{14cm}|}
\hline
\textbf{Name:} Notify Shared Means \\ \hline
\textbf{Actors:} Registered User, External API \\ \hline
\textbf{Goals:} \goalref{goal:G10} \\ \hline
\textbf{Event Flow:}
\begin{enumerate}
\item The system requests information about Shared Travel Means from an external API;
\item The external API service responds to the system with the requested information;
\item With the gathered information, the system computes whether there is a better path for the user according to the chosen constraints and optimization criteria;
\item If a better path is found, the system sends a notification to the user.
\end{enumerate} \\ \hline
\textbf{Output Condition:} A better path is found; \\ \hline
\textbf{Exceptions:}
%\begin{enumerate}
%\end{enumerate}
\\ \hline
\textbf{Mapping on Requirements:}
\begin{itemize}
\item Actions 1 and 2 are granted by \reqref{req:R24}
\item Action 4 is granted by \reqref{req:R23}
\end{itemize} \\ \hline
\end{longtable}
\begin{figure}[H] \begin{center}
\includegraphics[width=400pt, keepaspectratio]{"images/NotifySharedMeansSequenceDiagram".png}
\caption{Notify Shared Means sequence diagram}
\end{center} \end{figure}

\subsection{Use Case Diagram}
\begin{figure}[H] \begin{center}
\includegraphics[width=450pt, keepaspectratio]{"images/useCaseDiagram".png}
\caption{Use Case Diagram}
\label{img:seqDiagrPasswordRecovery}
\end{center} \end{figure}

\subsection{Notes on diagrams}
In these diagrams we assume that the user has run the application, but when necessary this action is explicitly specified. In particular, if the actor is a \textit{Registered User} or an \textit{External User}, then we assume that the user is facing the login page. On the other hand, if the actor is a \textit{Logged User}, then it is assumed that the user is on the home page.

\section{Performance Requirements}
\label{sec:performanceRequirements}
The user must be notified in real time when a shared travel mean can be booked in order to provide a better mobility option, since this kind of transportation can remain available only for a limited amount of time. Other performance requirements can't be easily expressed because they depend heavily on external services and on the device on which the application is run. For example, the time needed to create a schedule is influenced by the promptness of the APIs. In any case, an upper bound of 5 seconds for the creation of a schedule is given.
Moreover, the position of the user during the progress of a schedule must be tracked with a maximum delay of 100 ms. \section{Design Constraints} \subsection{Standard compliance} Our system conforms to OAuth2\footnote{Open Authorization, an industry-standard protocol for authorization. It focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.} to handle the registration and login process. Moreover, the HTTPS\footnote{HTTPS is a communications protocol for secure communication over a computer network. The main motivation for HTTPS is authentication of the visited website and protection of the privacy and integrity of the exchanged data.} protocol is used to guarantee the secure treatment of the user's sensitive data. \subsection{Hardware limitations} The main bottleneck on the performance of the system is the capability of the network infrastructure. In particular, the most affected activity is the schedule computation. \subsubsection{Analysis} We assume that the upload and download speeds of the API servers are respectively 50 Mbit/s and 150 Mbit/s per client, and that a request and a response weigh 20 KB and 180 KB respectively. We consider 2 cases: \begin{itemize} \item 10 Mbit/s download and 5 Mbit/s upload speeds for the client; \item 100 Mbit/s download and 50 Mbit/s upload speeds for the client. \end{itemize} \begin{table}[htbp] \centering \begin{tabular}{|r|r|r|r|r|} \hline \multicolumn{1}{|l|}{\textbf{No. Appointments}} & \multicolumn{1}{l|}{\textbf{No. API calls}} & \multicolumn{1}{l|}{\textbf{Request [Mbit]}} & \multicolumn{1}{l|}{\textbf{Response [Mbit]}} & \multicolumn{1}{l|}{\textbf{Tot [sec]}} \\ \hline 5 (2) & 4 & 0.625 & 5.625 & 0.687 \\ \hline 10 (4) & 28 & 4.375 & 39.375 & 4.812 \\ \hline 15 (7) & 5047 & 788.6 & 7097.3 & 867.453 \\ \hline \end{tabular} \caption{Total times in the case of 10 Mbit/s download speed and 5 Mbit/s upload speed.} \label{tab:times10down5up} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|r|r|r|r|r|} \hline \multicolumn{1}{|l|}{\textbf{No. Appointments}} & \multicolumn{1}{l|}{\textbf{No. API calls}} & \multicolumn{1}{l|}{\textbf{Request [Mbit]}} & \multicolumn{1}{l|}{\textbf{Response [Mbit]}} & \multicolumn{1}{l|}{\textbf{Tot [sec]}} \\ \hline 5 (2) & 4 & 0.625 & 5.625 & 0.125 \\ \hline 10 (4) & 28 & 4.375 & 39.375 & 0.875 \\ \hline 15 (7) & 5047 & 788.6 & 7097.3 & 157.719 \\ \hline \end{tabular} \caption{Total times in the case of 100 Mbit/s download speed and 50 Mbit/s upload speed.} \label{tab:times100down50up} \end{table} In the first column, the numbers between brackets represent the number $n$ of appointments with a variable starting time, which can therefore be arranged differently relative to each other, changing their order in the schedule. Then the number of calls to external APIs that should be performed is calculated, assuming a brute-force approach in the scheduling algorithm; the number of calls is therefore proportional to $n!$. Finally, the total amount of time is calculated under the previous assumptions. It is clear that the number of calls to external APIs must be minimized in order to fulfill the performance requirement expressed in \ref{sec:performanceRequirements}.
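As a cross-check of the figures above (a sketch under the stated assumptions, taking 1 KB $= 1024$ bytes and 1 Mbit $= 2^{20}$ bits, and taking in each direction the minimum of the client and server speeds; the reported call counts are consistent with $n! + n$), consider the row for 10 appointments, 4 of which have a variable starting time: the number of calls is $4! + 4 = 28$, the request traffic is $28 \cdot 20$ KB $\approx 4.375$ Mbit, and the response traffic is $28 \cdot 180$ KB $\approx 39.375$ Mbit. The total times for the two cases are then \[ \frac{4.375}{5} + \frac{39.375}{10} \approx 4.812 \mbox{ s} \qquad \mbox{and} \qquad \frac{4.375 + 39.375}{50} = 0.875 \mbox{ s}, \] which match the corresponding rows of the two tables.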
\section{Software System Attributes} \subsection{Reliability} The system should guarantee that, from the retrieved data, the most convenient valid schedule is always constructed, according to user preferences and constraints, if one exists. \subsection{Availability} The system should be accessible 24 hours per day and should be available 99.9\% of the time (up to 8.76 hours per year of downtime). However, the availability of the features involving the use of external services cannot be directly controlled. In particular, the availability of feature $j$ is given by: \begin{equation} A_j = a_0 \prod_{i=1}^n a_i \end{equation} where each $a_i$ represents the availability of the external service $i$ used and $a_0$ is the availability of the application itself (an illustrative computation is given at the end of this section). For instance, in the case of a schedule creation: \begin{figure}[H] \begin{center} \includegraphics[width=\textwidth, keepaspectratio]{"images/availability diagram".jpg} \caption{A failure in any one of the chain of requests to the APIs causes the entire process to break down} \end{center} \end{figure} \subsection{Security} The identity of the user must be verified through a login phase. The user's characteristics must be protected during transmission from client to server throughout the registration. User credentials are encrypted and then saved. \subsection{Maintainability} The system should be open to modifications. In particular, the application should be able to accommodate new travel means, new scheduling optimization criteria and new constraints. Moreover, the GUI should be easily modifiable, so that it can adapt to new operating systems. \subsection{Portability} The system should be able to run on all the devices (\ref{def:device}) considered.
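Referring back to the availability formula above, here is a small illustrative computation with assumed (not measured) availability figures: if the application itself has $a_0 = 0.999$ and a schedule creation involves two external services with $a_1 = a_2 = 0.995$, then \[ A_j = a_0 \cdot a_1 \cdot a_2 = 0.999 \cdot 0.995 \cdot 0.995 \approx 0.989, \] that is, the feature would be available roughly 98.9\% of the time even though every single component exceeds 99\%.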
{ "alphanum_fraction": 0.7740658492, "avg_line_length": 43.9476661952, "ext": "tex", "hexsha": "e9cda1e829b15625ea92816f0c8afd85e3f9ba92", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-10-19T08:25:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-06T15:07:29.000Z", "max_forks_repo_head_hexsha": "85a52acdefa1df6355ee05dd67240297d99356a6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "keyblade95/DamicoGabboliniParroni", "max_forks_repo_path": "RASD/cap3_specificreq.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "85a52acdefa1df6355ee05dd67240297d99356a6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "keyblade95/DamicoGabboliniParroni", "max_issues_repo_path": "RASD/cap3_specificreq.tex", "max_line_length": 440, "max_stars_count": null, "max_stars_repo_head_hexsha": "85a52acdefa1df6355ee05dd67240297d99356a6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "keyblade95/DamicoGabboliniParroni", "max_stars_repo_path": "RASD/cap3_specificreq.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8020, "size": 31071 }
\chapter{QVT-R} Taken from the QVT Specification
{ "alphanum_fraction": 0.78, "avg_line_length": 12.5, "ext": "tex", "hexsha": "902ffaf0a6821c8dd837eb1c0dbf652415c8d2fc", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2017-10-24T14:38:41.000Z", "max_forks_repo_forks_event_min_datetime": "2017-04-06T12:41:50.000Z", "max_forks_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "arcanefoam/mde_listings", "max_forks_repo_path": "demo/QVTr.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab", "max_issues_repo_issues_event_max_datetime": "2017-10-24T14:43:14.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-24T14:43:14.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "arcanefoam/mde_listings", "max_issues_repo_path": "demo/QVTr.tex", "max_line_length": 32, "max_stars_count": 3, "max_stars_repo_head_hexsha": "c39fb085723f4b3828050a7a20e32a278b4d13ab", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "arcanefoam/mde-listings", "max_stars_repo_path": "demo/QVTr.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-27T13:41:49.000Z", "max_stars_repo_stars_event_min_datetime": "2018-04-13T11:34:36.000Z", "num_tokens": 14, "size": 50 }
%\section{Homework!}
\subsection{Homework for basics} \subsubsection{Assignment 1} Change code 12.75 so that it \begin{itemize} \item does not use "\&\&" (AND); \item instead, uses "||" (OR). \end{itemize} Comment: this is also a test of whether you can think logically\ldots Thanks to Prof. Boole. \subsubsection{Assignment 2} Write a macro that draws a grid (lattice) in an image (see the attached example). If you have time, modify the macro so that it plots a diagonal lattice. (One possible skeleton for the basic grid is sketched at the end of this homework section.) The steps should be something like: \begin{enumerate} \item create a new image \item loop in the x direction and draw vertical lines \dots for this, use the command \ilcom{drawLine(x1, y1, x2, y2)} \dots see \url{http://rsb.info.nih.gov/ij/developer/macro/functions.html#drawLine} \item loop in the y direction and draw horizontal lines \end{enumerate} Hints: if you want to draw white lines on a black image \begin{itemize} \item you need to select a black background when you make a new image \item you need to set the drawing color using \ilcom{setColor()} \item see \url{http://rsb.info.nih.gov/ij/developer/macro/functions.html#setColor} \end{itemize}
%composing grids
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.6]{fig/grid.png} \caption{Composing grid image} \label{fig_homeworkGrid} \end{center} \end{figure}
%composing diagonal grids
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.6]{fig/gridDiagonal.png} \caption{Composing diagonal grid image} \label{fig_homeworkGridDiagonal} \end{center} \end{figure} \subsubsection{Assignment 3} Write a macro that deletes every second frame (the even-numbered frames) in a stack. Hint: use \ilcom{run("Delete Slice");} to delete a single slice. Comment: it might be tricky. \subsubsection{Assignment 4} Write a time-stamping macro for t-stacks. You should implement the following functions. \begin{itemize} \item The user inputs the time resolution of the recording (how many seconds per frame). \item The time point of each frame appears at the top-left corner of that frame. \item If possible, the time should be in the following format: \\ mm:ss \\ (two-digit minutes and two-digit seconds) \end{itemize} Hint: use the following: for-statement, \ilcom{nSlices, setSlice, getNumber,\\ setForegroundColor, setBackgroundColor, drawString, IJ.pad}. (Refer to the Built-in Macro Functions page on the ImageJ web site!) \subsubsection{Assignment 5} Modify code 14 so that the macro does not use a "while" loop, for example in the following way. \begin{itemize} \item The macro measures the integrated density of the whole area in the first frame (= ref\_int). \item In the next frame, the full integrated intensity is measured again (temp\_int). \item Decrease the lower threshold for the thresholding by temp\_int/ref\_int. \end{itemize} \subsection{Homework for a bit advanced} \subsubsection{Assignment 6} Write an elementary calculator macro with a single dialog box that does the following: \begin{itemize} \item the user inputs two numbers \item the user selects one of addition, subtraction, multiplication or division \item the answer appears in the Log window. \end{itemize} Hint: use the \ilcom{Dialog.addChoice} and \ilcom{Dialog.getChoice} commands. \subsubsection{Assignment 7} Write a macro that does pseudo high-pass filtering with a Gaussian-blurred image (duplicate the image, do Gaussian blurring with a large kernel to create a background, and subtract it from the original). If you can successfully write the macro, then convert it to a function and use it from a macro. Hint: use the \ilcom{getImageID(), selectImage(id)} commands.
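For Assignment 2, one possible minimal skeleton of the grid macro is sketched below. It is only a sketch, not the required solution: it assumes an 8-bit black image with white grid lines, the image size and grid spacing values are arbitrary choices, and it uses only built-in macro functions (\ilcom{newImage}, \ilcom{setColor}, \ilcom{drawLine}, \ilcom{getWidth}, \ilcom{getHeight}).
\begin{verbatim}
// Minimal grid-drawing sketch (Assignment 2): creates a black
// 8-bit image and draws white vertical and horizontal lines.
spacing = 32;
newImage("grid", "8-bit black", 512, 512, 1);
setColor(255);                              // white drawing color
for (x = 0; x < getWidth(); x += spacing) {
    drawLine(x, 0, x, getHeight() - 1);     // vertical lines
}
for (y = 0; y < getHeight(); y += spacing) {
    drawLine(0, y, getWidth() - 1, y);      // horizontal lines
}
\end{verbatim}
The diagonal variant can reuse the same loop structure, with the \ilcom{drawLine} endpoints placed on opposite borders of the image.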
{ "alphanum_fraction": 0.7679180887, "avg_line_length": 35.5151515152, "ext": "tex", "hexsha": "bcf79e1903a29e8c6c7f7e7b42798c14b9457755", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2e465787f2a06fd795460432297b90cf0fbf721b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mistltoe/mistltoe.github.io", "max_forks_repo_path": "assets/experimentalTools/ImageJ/reference/cmci-ij_textbook2-d852848/sections/homeworks.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2e465787f2a06fd795460432297b90cf0fbf721b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mistltoe/mistltoe.github.io", "max_issues_repo_path": "assets/experimentalTools/ImageJ/reference/cmci-ij_textbook2-d852848/sections/homeworks.tex", "max_line_length": 96, "max_stars_count": null, "max_stars_repo_head_hexsha": "2e465787f2a06fd795460432297b90cf0fbf721b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mistltoe/mistltoe.github.io", "max_stars_repo_path": "assets/experimentalTools/ImageJ/reference/cmci-ij_textbook2-d852848/sections/homeworks.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 925, "size": 3516 }
\documentclass[12pt]{article} \usepackage[usenames]{color} %used for font color
\usepackage{amsmath, amssymb, amsthm} \usepackage{wasysym} \usepackage[utf8]{inputenc} %useful to type directly diacritic characters
\usepackage{graphicx} \usepackage [english]{babel} \usepackage [autostyle, english = american]{csquotes} \MakeOuterQuote{"} \graphicspath{ {./} } \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\prob}{\mathbb{P}} \newcommand{\degrees}{^{\circ}} \author{Tianshuang (Ethan) Qiu} \begin{document} \title{Math 74, Week 4} \maketitle \section{Lec Mon, 1c} \subsection{a} Since each term is the product of $x^a, y^b, z^c$ with $a+b+c = 2020$, we can reduce this problem to dogs and biscuits with 2020 biscuits and 3 dogs: $\binom{2020+3-1}{3-1} = \binom{2022}{2}$. \subsection{b} Before combining like terms, we expand by picking one variable from each of the 2020 factors $(x+y+z)$ multiplied together. So we have $3^{2020}$ terms. \subsection{c} This is a "no dogs go hungry" problem. We can simply pick $x,y,z$ from the first three brackets, feeding each dog a biscuit. Now we have 2017 biscuits for 3 dogs: $\binom{2017+3-1}{3-1} = \binom{2019}{2}$. \subsubsection{Alternate solution} We can reach the same result by subtracting the number of terms that contain only $x$, only $y$, only $z$, or only one of the pairs $xy$, $xz$, $yz$. \newline For the first three, there is only 1 way for that to happen, since that variable has to be raised to the power 2020. \newline For $xz$, we need $a+c = 2020$ with both exponents at least 1 (the terms with only one variable have already been counted), which is feeding 2018 biscuits to 2 dogs. Therefore there are $\binom{2018+2-1}{2-1} = 2019$ such terms. \newline Adding these together we have $1 \times 3 + 2019 \times 3 = 6060$. Now we subtract this from the total in part (a): $\binom{2022}{2}-6060$, which is the same as $\binom{2019}{2}$. \newpage \section{Dis Mon, 1a} The LHS is the number of ways to choose a team of $k$ people together with a captain from a group of $n$ people. It chooses the team first, in $\binom{n}{k}$ ways, and then a captain from that team, in $k$ ways. \newline The RHS counts the number of ways to choose a captain first, in $n$ ways, and then the rest of the team, in $\binom{n-1}{k-1}$ ways. Both sides count the same thing. Therefore LHS = RHS. \newline Q.E.D. \section{Dis Mon, 4} $$x+\frac{1}{x}=7$$ $$(x+\frac{1}{x})^2=49$$ $$x^2+\frac{1}{x^2}+2=49$$ $$x^2+\frac{1}{x^2}=47$$ \newpage \section{Lec Wed, 1a} \begin{figure}[h] \includegraphics{GRAPH1} \end{figure} The sum of these angles is 90$\degrees$. \newline Proof: We attempt to move all three angles into $\angle HBC$; since a square has 4 right angles, we can see that $\angle HBC = 90\degrees$. \newline We construct a row of 3 identical squares with $A'M' = AM$, $M'H' = MH$, etc., and connect $DH'$ and $H'B$. \newline Consider $\triangle DAM$: it is a right isosceles triangle, since it lies in a square ($DA = AM$, $DA \perp AM$). Now consider $\triangle DA'H'$ and $\triangle H'B'B$. Since all the squares have the same side length, we have $DA' = H'B'$ and $A'H'=B'B$. Furthermore, since these are all squares, we have $\angle DA'H' = \angle H'B'B = 90\degrees$. \newline Therefore $\triangle DA'H' \cong \triangle H'B'B$ by the SAS property, so $DH' = H'B$ and $\angle H'B'B = \angle DH'A$. Since the sum of the three angles in a triangle is $180\degrees$ and $\angle H'B'B = \angle DA'H' = 90\degrees$, we can see that $\angle B'H'B + \angle AH'D = 90\degrees$. Finally, since $M'H'B'$ forms a line, $\angle M'H'B' = 180 \degrees$, so we have $\angle DH'B = \angle DAM = 90 \degrees$.
\newline Now we have $\triangle DH'B \sim \triangle DAM$ by the RAR property for similarity, therefore $\angle AMD = \angle H'BD$. \newline By essentially the same logic as for $\triangle DA'H' \cong \triangle H'B'B$, we can prove that $\triangle DAH \cong \triangle H'B'B$, so $\angle DHA = \angle B'BH$. \newline We have proven that $\angle AMD = \angle H'BD$ and $\angle DHA = \angle B'BH$. Since the sum of these two and $\angle DBA$ is a right angle, we have proven our claim. \newpage \section{Lec Wed, 1b} \begin{figure}[h] \includegraphics[width = 100mm]{GRAPH2.png} \end{figure} We reflect $F$ over $AD$ to $F'$ and connect $F'C$. We first show that $B$ is the optimal point. \newline By the definition of reflection, if we were to reflect $F'$ back over $AD$ it would overlap with our original point. This "overlap" causes all three vertices of $\triangle ABF$ to overlap with those of $\triangle ABF'$. By Euclid's definition they are congruent. \newline Consider a point $B'$ on $AD$ that is not $B$. By similar reasoning we can also show that $\triangle AB'F \cong \triangle AB'F'$. \newline From these congruences we see that $B'F = B'F'$ and $BF = BF'$. The farmer's routes can then be rewritten as $F'B'+B'C$ and $F'B + BC$. Furthermore, since $F'B'C$ forms a triangle, we have $F'B'+B'C > F'B + BC$ by the triangle inequality, thus proving that $B$ gives the shortest route to the cow. \newline Since $FF' \perp AD$ and $CD \perp AD$, we have $FF' \parallel CD$. By the property of alternate interior angles, $\angle FF'C = \angle F'CD$. \newline $\angle ABF' + \angle ABC = 180 \degrees$ since $F'C$ is a line. By the same logic, $\angle CBD + \angle ABC = 180 \degrees$. Therefore $\angle ABF' = \angle DBC$. We have now shown $\triangle ABF' \sim \triangle CBD$ by the AAA property. \newline $AF = AF' = 2$. Let $AB = x$; $BD$ would then equal $4-x$. By the property of similarity we have $\frac{CD}{F'A} = \frac{DB}{AB}$ $$\frac{6}{2} = \frac{4-x}{x}$$ Solving the above equation yields $x=1$. \newpage \section{Lec Wed, 2b} \begin{figure}[h] \includegraphics[width = 100mm]{GRAPH3.png} \end{figure} Let $AD = CD$ and $CE = EB$, and let the two medians $BD$ and $AE$ intersect at $O$. Connect $BD$, $AE$, $DE$, and $CO$, extend $CO$ to intersect $AB$ at $F$, and let $G$ be the intersection of $CF$ with $DE$. \newline This assumes that 2a ($DE \parallel AB$, $DE = 0.5AB$) has already been proven. \newline Since $DE \parallel AB$, the alternate interior angles are equal: $\angle EDB = \angle DBA$ and $\angle DEA = \angle EAB$. Since the sum of all the angles of a triangle is $180 \degrees$, the third angle must also be the same. Therefore $\triangle DEO \sim \triangle BAO$ by AAA similarity. \newline Because $DE = 0.5AB$, we get $DO = 0.5BO$ and $EO = 0.5 AO$ by the principles of similarity. \newline Using $DE \parallel AB$ again, we can see that $\angle EGF = \angle AFG$. By the same reasoning as above we can see that $\triangle GOE \sim \triangle FOA$ by AAA similarity. Therefore $$\frac{GE}{AF} = \frac{EO}{OA} = \frac{1}{2}$$ Since $DE \parallel AB$, we can show that $\angle CGE = \angle CFB$ and $\angle CEG = \angle CBF$, due to the corresponding angles being equal. So $\triangle CGE \sim \triangle CFB$ by AAA similarity. Since $E$ is the midpoint of $BC$, $CE = 0.5 CB$, so by the principle of similarity, $GE = 0.5FB$. \newline Since $GE = \frac{1}{2}FB = \frac{1}{2}AF$, we get $AF = FB$, so $F$ is the midpoint of $AB$ and $CF$ is the third median. Therefore all three medians intersect at point $O$. \newpage \section{Dis Wed, 1a} \begin{figure}[h] \includegraphics[width = 100mm]{GRAPH7.png} \end{figure} Since these are all squares we have $DA = AM = BC = CJ$.
Since $DM$ is a diagonal of a square, $\angle DMA = \angle ADM = 45 \degrees$. \newline Let $DA$ have length $x$; then $DM$ has length $\sqrt 2 x$. Since $\angle DMA = 45 \degrees$, its supplement $\angle DMH = 135 \degrees$. $MB$ has length $2x$, and $MH$ has length $x$. \newline $\frac{MH}{DM} = \frac{DM}{MB} = \frac{1}{\sqrt2}$, and since $\triangle DMH$ and $\triangle BMD$ both contain $\angle DMH$, $\triangle DMH \sim \triangle BMD$ (RAR similarity), and $\angle DHM = \angle BDM$. \newline Now we prove a congruence between two right triangles. From the squares we have $AD = CB = x$, $CD = AB = 3x$, and $\angle DAB = \angle BCD = 90 \degrees$, so $\triangle DAB \cong \triangle BCD$ by SAS congruence. Therefore $\angle ABD = \angle BDC$. \newline Thus we have moved all three angles into the right angle $\angle ADC$, proving that the sum of these three angles is $90 \degrees$. Q.E.D. \newpage \section{Dis Wed, 3b} \begin{figure}[h] \includegraphics[width = 100mm]{GRAPH6.png} \end{figure} Let $\triangle ABC$ be an arbitrary triangle, extend $AC$ to $E$, and construct the line $a$ through $C$ with $a \parallel AB$. \newline For this proof I need the axiom that a straight angle is $180 \degrees$, and that the corresponding angles and alternate interior angles between two parallel lines are equal. \newline Since $a \parallel AB$, we have $\angle BAC = \angle DCE$ and $\angle ABC = \angle BCD$, where $D$ is a point on $a$ on the same side as $B$. Since the three angles form $\angle ACE$, which is a straight angle, the 3 angles add up to $180 \degrees$. \newpage \section{Lec Fri, 3} If the 5th postulate were indeed redundant (provable from the others), then the first 4 postulates alone would always define Euclidean space, and there would be no other space that satisfies the first 4 postulates but not the fifth. \newline Hyperbolic geometry satisfies the first 4 with its own definitions of lines, circles, and right angles. The resulting space does not follow Euclid's 5th postulate: two lines for which the sum of the interior angles on one side is less than $180 \degrees$ can still curve away from each other and never intersect. \newline This shows that we need Euclid's 5th postulate to properly define a Euclidean space. \newpage \section{Lec Fri, 4b} Let us label "There is at most 1 parallel line to a given line $l$ through a given point $P$" as "Statement A", and "two lines that are parallel to the same line are also parallel to each other" as "Statement B". \newline For this problem we need to prove that $A \iff B$. We will begin by proving $A \implies B$. \newline \begin{figure}[h] \includegraphics[width = 100mm]{GRAPH5.png} \end{figure} \paragraph{1} Assume that statement B is false, so the two lines are not parallel to each other. Then let $a \parallel c$, $b \parallel c$, and let $a, b$ intersect at point $P$. Then at $P$, there are two different lines that are both parallel to $c$. \lightning \newline Therefore our initial assumption is incorrect, and statement B is true. \paragraph{2} Now we try to prove that $B \implies A$. Assume that statement A is false, so there are at least 2 different lines parallel to $c$ that go through the given point. Then let $a \parallel c$, $b \parallel c$, with $a,b$ going through the same point $P$. Then by statement B, $a \parallel b$, but $a, b$ both contain $P$, so they have to be the same line. \lightning \newline Therefore there exists at most one line through $P$ parallel to $c$. \end{document}
{ "alphanum_fraction": 0.7041849799, "avg_line_length": 59.8057142857, "ext": "tex", "hexsha": "9083340d2966f0b08d2b35591087cf66a70f91a0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TianshuangQiu/Math74-Homework", "max_forks_repo_path": "week4/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TianshuangQiu/Math74-Homework", "max_issues_repo_path": "week4/main.tex", "max_line_length": 389, "max_stars_count": null, "max_stars_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TianshuangQiu/Math74-Homework", "max_stars_repo_path": "week4/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3241, "size": 10466 }
%% ----------------------------------------------------------------------------- \subsection{\Aname{}\ Theorems} \begin{theorem}\label{A-sync} If\/ $\fwellformedO{\sexpr_0}{\stoptional}$ then\/ $\fwellformed{\fforget{\sexpr_0}}{\stoptional}$ and\/ $\sexpr_0 \credAanns \sexpr_1$ iff\/ $\fforget{\sexpr_0} \credA \fforget{\sexpr_1}$ \end{theorem} \begin{lamportproof} By the definition of $\credAanns$. \end{lamportproof} \begin{theorem}[type soundness]\label{A-S-type-soundness} If\/ $\fwellformed{\sexpr_0}{\stype_0}$ then one of the following holds: \begin{itemize} \item $\sexpr_0 \rredA \svalue_0$ and\/ $\snil \sWTA \svalue_0 : \stype_0$ \item $\sexpr_0$ diverges \item $\sexpr_0 \rredA \ctx_0[\edynb{\sbnd_1}{\ctx[\sexpr_1]}]$ and\/ $\sexpr_1 \nredAD \tagerrorD$ \item $\sexpr_0 \rredA \divisionbyzeroerror$ \item $\sexpr_0 \rredA \boundaryerror{\sblist_1}{\svalue_1}$ \end{itemize} \end{theorem} \begin{lamportproof} By progress and preservation lemmas (\lemmaref{A-type-progress} and \lemmaref{A-type-preservation}). \end{lamportproof} \begin{theorem}[dynamic soundness]\label{A-D-type-soundness} If\/ $\fwellformed{\sexpr_0}{\tdyn}$ then one of the following holds: \begin{itemize} \item $\sexpr_0 \rredA \svalue_0$ and\/ $\snil \sWTA \svalue_0 : \tdyn$ \item $\sexpr_0$ diverges \item $\sexpr_0 \rredA \ctx_0[\sexpr_1]$ and\/ $\sexpr_1 \nredAD \tagerrorD$ \item $\sexpr_0 \rredA \divisionbyzeroerror$ \item $\sexpr_0 \rredA \boundaryerror{\sblist_1}{\svalue_1}$ \end{itemize} \end{theorem} \begin{lamportproof} By progress and preservation lemmas (\lemmaref{A-type-progress} \& \lemmaref{A-type-preservation}). \end{lamportproof} \begin{theorem}[incomplete monitoring]\label{A-incomplete-monitoring} There exist\/ $\sexpr_0,\sexpr_1,\sowner_0,\stoptional$ such that\/ $\fwellformedO{\obars{\sexpr_0}{\sowner_0}}{\stoptional}$ and\/ $\sexpr_0 \rredAanns \sexpr_1$ and\/ $\cdot; \sowner \not \sWSOP \sexpr_1$. 
\end{theorem} \begin{lamportproof} {\newcommand{\thetype}{(\tfun{\tint}{\tint})} \newcommand{\thefun}{\efun{\svar_0}{(\esum{\tint}{\svar_0}{1})}} \newcommand{\theargval}{\efun{\svar_1}{0}} \newcommand{\theexprA}{\estab{\obnd{\sowner_0}{\thetype}{\sowner_1}}{\obars{\edynb{\obnd{\sowner_1}{\thetype}{\sowner_2}}{\obars{\thefun}{\sowner_2}}}{\sowner_1}}} \newcommand{\theexprB}{\obars{\eapp{\tdyn}{\sexpr_0}{(\thefun)}}{\sowner_0}} Let\/ $\begin{array}{lll} \sexpr_f & \eeq & \theexprA\\ \sexpr_0 & \eeq & \theexprB \\ \svalue_f & \eeq & \obars{\ehist{\eset{\obnd{\sowner_0}{\thetype}{\sowner_1}, \obnd{\sowner_1}{\thetype}{\sowner_2}}}{\obbars{\thefun}{\fconcat{\sowner_2}{\sowner_1}}}}{\sowner_0}\\ \sexpr_1 & \eeq & \obars{\eapp{\tdyn}{\svalue_0}{(\theargval)}}{\sowner_0} \end{array}$ \smallskip With a straight-forward application of the reduction rules we obtain: \(\begin{array}[t]{l@{~}l} \obars{\sexpr_f}{\sowner_0} & \rredAanns \obars{\estab{\obnd{\sowner_0}{\thetype}{\sowner_1}}{\obars{\emon{\obnd{\sowner_1}{\thetype}{\sowner_2}}{\obars{\thefun}{\sowner_2}}}{\sowner_1}}}{\sowner_0} \\ & \rredAanns \obars{\ehist{\eset{\obnd{\sowner_0}{\thetype}{\sowner_1}, \obnd{\sowner_1}{\thetype}{\sowner_2}}}{\obbars{\thefun}{\fconcat{\sowner_2}{\sowner_1}}}}{\sowner_0} \\ & \eeq \svalue_f \\ \zerowidth{\mbox{therefore}} \\ \sexpr_0 & \rredAanns \sexpr_1 \end{array}\)} \end{lamportproof} \begin{theorem}[sound and complete blame]\label{A-correct-blame} If\/ $\cdot; \sownertop \sWL \sexpr_0$ and\/ \(\sexpr_0 \rredAanns \boundaryerror{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{\svalue_1}\) then \begin{itemize} \item either\/ $\fhasbnd{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{\sexpr_0}$ or\/ $\fhasbnd{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\sexpr_0}$, and \item $\fblistsenders{\sblist_1}= \fvalueowners{\svalue_1}$ \end{itemize} \end{theorem} {\newcommand{\Abcbeo}{$\sexpr_0 \rredAanns \ctx_0[\edynb{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\svalue_1}] \credAanns \boundaryerror{\sblist_1}{\svalue_1}$} \newcommand{\Abcitems}{% \begin{enumerate} \item \(\forall\,\sbnd_1 \in \sblist_1\), either\/ \(\fhasbnd{\sexpr_0}{\sbnd_1}\) or \(\fhasbnd{\sexpr_0}{\fflip{\sbnd_1}}\) \item one of the following holds: \begin{enumerate} \item $\svalue_1 \not\in (\ehist{\sblist}{\obbars{\svalue}{\sownerlist}})$ and\/ $\snil; \sowner_2 \sWLA \svalue_1$ \item $\svalue_1 \eeq (\ehist{\sblist_2}{\obbars{\svalue_2}{\sownerlist_2\sowner_2}})$ and\/ $\fbndeqowners{\sblist_2}{\fconcat{\sowner_2}{\sownerlist_2\sowner_2}}$ and\/ $\snil; \flast{\sowner_3} \sWLA \svalue_2$ \end{enumerate} \end{enumerate}} \begin{theorem}[blame correctness]\label{A-S-blame-correctness} If\/ $\fwellformedO{\obars{\sexpr_0}{\sowner_0}}{\stoptional}$ then one of the following holds: \begin{itemize} \item $\sexpr_0 \rredAanns \svalue_0$ and\/ $\snil; \sowner_0 \sWLA \svalue_0$ \item $\sexpr_0$ diverges \item $\sexpr_0 \rredAanns \tagerrorD$ \item $\sexpr_0 \rredAanns \divisionbyzeroerror$ \item \Abcbeo{} and furthermore: \Abcitems \end{itemize} \end{theorem} \begin{lamportproof}\leavevmode \step{1}{\suffices{\assume{\Abcbeo} \prove{\Abcitems}}} \begin{pfproof} by \lemmaref{A-label-progress} and \lemmaref{A-label-preservation} \end{pfproof} \step{2}{$\obnd{\sowner_1}{\sowner_2} \sWLA \stype_1$ and\/ $\sowner_2 \sWLA \svalue_1$} \begin{pfproof} by \lemmaref{A-label-preservation} \end{pfproof} \step{3}{\(\forall \sbnd_1 \in \sblist_1\) either \(\sbnd_1 \in \sexpr_0\) or \(\fflip{\sbnd_1} \in \sexpr_0\)} \begin{pfproof} by \lemmaref{A-source-boundary} \end{pfproof} 
\step{4}{either\/ $\sblist_1 \eeq \obnd{\sowner_1}{\sowner_2}$ \\ or\/ $\svalue_1 \eeq (\ehist{\sblist_0}{\svalue_0})$ and\/ $\sblist_1 \eeq \fconcat{\obnd{\sowner_1}{\sowner_2}}{\sblist_0}$} \begin{pfproof} by the definition of\/ $\credAanns$ \end{pfproof} \end{lamportproof}} \begin{corollary}[minimal blame info] If\/ $\fwellformed{\sexpr_0}{\stoptional}$ and\/ $\sexpr_0 \rredA \boundaryerror{\sblist_1}{\svalue_1}$ then\/ $\sblist_1 \neq \snil$ \end{corollary} \begin{lamportproof} by \theoremref{A-S-blame-correctness} \end{lamportproof} \begin{corollary}[blame/ownership match] If\/ $\sownerenv_0; \sowner_0 \sWLA \sexpr_0$ then for all subterms\/ $(\ehist{\sblist_1}{\sexpr_1})$ there exists\/ $\sownerlist_1$ such that\/ $\sexpr_1 \eeq \obbars{\sexpr_2}{\sownerlist_1}$ and\/ $\fbndeqowners{\sblist_1}{\sownerlist_1}$ \end{corollary} \begin{lamportproof} by definition of $\sWLA$ \end{lamportproof} \begin{theorem}\label{A-S-mon-limit} If\/ $\fwellformed{\sexpr_0}{\stype_0}$ and\/ $\sexpr_0 \rredA \svalue_1$ then\/ $\fmondepth{\svalue_1} \leq 2$ \end{theorem}{ \begin{lamportproof} \step{0}{\suffices{if $\edynb{\sbnd_0}{\svalue_2} \nredAS \svalue_3$ then $\fmondepth{\svalue_3} \leq 2$}} \begin{pfproof} because the only way to increase the $\smondepth$ of a value is by crossing a boundary \end{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-D-mon-limit} and the definition of $\nredAD$ \end{pfproof} \end{lamportproof}} \begin{theorem}\label{A-D-mon-limit} If\/ $\fwellformed{\sexpr_0}{\tdyn}$ and\/ $\sexpr_0 \rredA \svalue_1$ then\/ $\fmondepth{\svalue_1} \leq 1$ \end{theorem}{ \begin{lamportproof} \step{0}{\suffices{if $\estab{\sbnd_0}{\svalue_2} \nredAD \svalue_3$ then $\fmondepth{\svalue_3} \leq 1$}} \begin{pfproof} because the only way to increase the $\smondepth$ of a value is by crossing a boundary \end{pfproof} \qedstep \begin{pfproof} by definition of $\nredAD$ \end{pfproof} \end{lamportproof}} %% ----------------------------------------------------------------------------- \subsection{\Aname{}\ Lemmas} \begin{lemma}[$\sWTA$ progress]\label{A-type-progress} If\/ $\snil \sWTA \sexpr_0 : \toptional$ then one of the following holds: \begin{itemize} \item $\sexpr_0 \in \svalue$ \item $\sexpr_0 \in \eerr$ \item $\exists\,\sexpr_1$ such that\/ $\sexpr_0 \credA \sexpr_1$ \end{itemize} \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\sexpr_0$.} \begin{lamportproof*} \shortpf \mainproof \shortpf By \lemmaref{A-decomposition} it suffices to consider the following cases. 
\step{0}{\case{$\sexpr_0 \in \svalue$}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{$\sexpr_0 \eeq \ctx_0[\eerr]$}} \begin{pfproof} \qedstep \end{pfproof} \step{2}{\case{$\sexpr_0 \eeq \ctx_0[\eapp{{\stype_1}}{\svalue_0}{\svalue_1}]$}} \begin{pfproof} \step{2.0}{$\svalue_0 \in (\efun{\tann{\svar}{\stype}}{\sexpr}) \cup (\emon{\sbnd}{\svalue})$} \begin{pfproof} by \lemmaref{A-typed-hole} and inversion $\sWTA$ \end{pfproof} \step{2.1}{\scase{$\svalue_0 \eeq \efun{\tann{\svar_2}{\stype_2}}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAS \ctx_0[\esubst{\sexpr_2}{\svar_2}{\svalue_1}]$ \end{pfproof} \end{pfproof} \step{2.2}{\scase{$\svalue_0 \eeq \emon{\obnd{\sowner_0}{(\tfun{\stype_1}{\stype_2})}{\sowner_1}}{\svalue_2}$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAS \ctx_0[\edynb{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{(\eapp{\tdyn}{\svalue_2}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})}]$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_0 \eeq \ctx_0[\eapp{{\stype_1}}{\svalue_0}{\svalue_1}]$}} \begin{pfproof} \step{3.0}{\scase{$\svalue_0 \eeq \ehopt{\sblist_0}{(\efun{\tann{\svar_2}{\stype_2}}{\sexpr_2})}$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAD \ctx_0[\eprehist{\sblist_0}{(\esubst{\sexpr_2}{\svar_2}{\svalue_1})}]$ \end{pfproof} \end{pfproof} \step{3.1}{\scase{$\svalue_0 \eeq \ehopt{\sblist_0}{(\emon{\obnd{\sowner_0}{(\tfun{\stype_1}{\stype_2})}{\sowner_1}}{\svalue_2})}$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAD \ctx_0[\eprehist{\sblist_0}{(\estab{\obnd{\sowner_0}{\stype_2}{\sowner_1}}{(\eapp{\fforget{\stype_2}}{\svalue_2}{(\edynb{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})})}]$ \end{pfproof} \end{pfproof} \step{3.2}{\scase{$\svalue_0 \not\in (\ehopt{\sblist}{(\efun{\tann{\svar}{\stype}}{\sexpr})}) \cup (\ehopt{\sblist}{(\emon{\sbnd}{\svalue})})$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAD \ctx_0[\tagerrorD]$ \end{pfproof} \end{pfproof} \end{pfproof} \step{4}{\case{$\sexpr_0 \eeq \ctx_0[\eunopt{\stoptional}{\svalue_0}]$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-delta-type-progress} \end{pfproof} \end{pfproof} \step{5}{\case{$\sexpr_0 \eeq \ctx_0[\ebinopt{\stoptional}{\svalue_0}{\svalue_1}]$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-delta-type-progress} \end{pfproof} \end{pfproof} \step{6}{\case{$\sexpr_0 \eeq \ctx_0[{\edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\svalue_0}}]$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-dyn-type-progress} \end{pfproof} \end{pfproof} \step{7}{\case{$\sexpr_0 \eeq \ctx_0[{\estab{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\svalue_0}}]$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-sta-type-progress} \end{pfproof} \end{pfproof} \step{8}{\case{$\sexpr_0 \eeq \ctx_0[\eprehist{\sblist_0}{\svalue_0}]$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAD \ctx_0[\faddtrace{\sblist_0}{\svalue_0}]$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sWTA$ type preservation]\label{A-type-preservation} If\/ $\snil \sWTA \sexpr_0 : \stoptional$ and\/ $\sexpr_0 \credA \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \stoptional$. 
\end{lemma}{ \newcommand{\shortpf}{By \lemmaref{A-S-rr-preservation} and \lemmaref{A-D-rr-preservation}.} \begin{lamportproof*} \shortpf \mainproof \shortpf \end{lamportproof*}} \begin{lemma}[$\nredAS$ preservation]\label{A-S-rr-preservation} If\/ $\snil \sWTA \sexpr_0 : \stype_0$ and\/ $\sexpr_0 \nredAS \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \stype_0$. \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\nredAS$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\( \sdeltaA(\sunop, \svalue_0) \mbox{ is defined} \)\\and \( \eunopt{\stoptional}{\svalue_0} \nredAS \sdeltaA(\sunop, \svalue_0) \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-type-preservation} \end{pfproof} \end{pfproof} \step{1}{\case{\( \efst{\stype_0}{(\emon{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{\svalue_0})} \nredAS \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\efst{\tdyn}{\svalue_0})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \svalue_0 : \tdyn } }{ \snil \sWTA \efst{\tdyn}{\svalue_0} : \tdyn } }{ \snil \sWTA \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\efst{\tdyn}{\svalue_0})} : \stype_0 } \end{mathpar} \end{pfproof} \end{pfproof} \step{2}{\case{\( \esnd{\stype_0}{(\emon{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{\svalue_0})} \nredAS \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\esnd{\tdyn}{\svalue_0})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \svalue_0 : \tdyn } }{ \snil \sWTA \esnd{\tdyn}{\svalue_0} : \tdyn } }{ \snil \sWTA \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\esnd{\tdyn}{\svalue_0})} : \stype_0 } \end{mathpar} \end{pfproof} \end{pfproof} \step{3}{\case{$\ebinopt{\stoptional}{\svalue_0}{\svalue_1} \nredAS \sdeltaA(\sbinop, \svalue_0, \svalue_1)$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-type-preservation} \end{pfproof} \end{pfproof} \step{4}{\case{$\eapp{{\stype_0}}{(\efun{\tann{\svar_1}{\stype_1}}{\sexpr_1})}{\svalue_2} \nredAS \esubst{\sexpr_1}{\svar_1}{\svalue_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-type-substitution} \end{pfproof} \end{pfproof} \step{5}{\case{\( \eapp{\stype_0}{(\emon{\obnd{\sowner_0}{(\tfun{\stype_1}{\stype_2})}{\sowner_1}}{\svalue_0})}{\svalue_1} \\ \nredAS \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \tdyn } \\ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \stype_1 } }{ \snil \sWTA \estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1} : \tdyn } }{ \snil \sWTA \eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})} : \tdyn } }{ \snil \sWTA \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})} : \stype_0 } \end{mathpar} \end{pfproof} \end{pfproof} \step{6}{\case{\(\edynb{\sbnd_0}{\svalue_0} \nredAS \svalue_1\)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-dyn-type-preservation} \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\nredAD$ preservation]\label{A-D-rr-preservation} If\/ $\snil \sWTA \sexpr_0 : \tdyn$ and\/ $\sexpr_0 \nredAD \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \tdyn$. 
\end{lemma}{ \newcommand{\shortpf}{By case analysis of $\nredAD$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{$\eunopt{\stoptional}{\svalue_0} \nredAD \tagerrorD$}} \begin{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \tagerrorD : \tdyn$ \end{pfproof} \end{pfproof} \step{1}{\case{$\eunopt{\stoptional}{\svalue_0} \nredAD \sdeltaA(\sunop, \svalue_0)$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-type-preservation} \end{pfproof} \end{pfproof} \step{2}{\case{\( \efst{\tdyn}{(\ehopt{\sblist_0}{(\emon{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\svalue_1})})} \nredAD \eprehist{\sblist_0}{(\estab{\sbnd_7}{(\efst{\ftypefst{\stype_0}}{\svalue_1})})} \)\\where \( \sbnd_7 \sassign \obnd{\sowner_1}{\ftypefst{\stype_0}}{\sowner_2} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \stype_0 } }{ \snil \sWTA \efst{\ftypefst{\stype_0}}{\svalue_1} : \ftypefst{\stype_0} } }{ \snil \sWTA \estab{\sbnd_7}{(\efst{\ftypefst{\stype_0}}{\svalue_1})} : \tdyn } }{ \snil \sWTA \eprehist{\sblist_0}{(\estab{\sbnd_7}{(\efst{\ftypefst{\stype_0}}{\svalue_1})})} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{3}{\case{\( \esnd{\tdyn}{(\ehopt{\sblist_0}{(\emon{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\svalue_1})})} \nredAD \eprehist{\sblist_0}{(\estab{\sbnd_7}{(\esnd{\ftypesnd{\stype_0}}{\svalue_1})})} \)\\where \( \sbnd_7 \sassign \obnd{\sowner_1}{\ftypesnd{\stype_0}}{\sowner_2} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \stype_0 } }{ \snil \sWTA \esnd{\ftypesnd{\stype_0}}{\svalue_1} : \ftypesnd{\stype_0} } }{ \snil \sWTA \estab{\sbnd_7}{(\esnd{\ftypesnd{\stype_0}}{\svalue_1})} : \tdyn } }{ \snil \sWTA \eprehist{\sblist_0}{(\estab{\sbnd_7}{(\esnd{\ftypesnd{\stype_0}}{\svalue_1})})} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{4}{\case{$\ebinopt{\stoptional}{\svalue_0}{\svalue_1} \nredAD \tagerrorD$}} \begin{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \tagerrorD : \tdyn$ \end{pfproof} \end{pfproof} \step{5}{\case{$\ebinopt{\stoptional}{\svalue_0}{\svalue_1} \nredAD \sdeltaA(\sbinop, \svalue_0, \svalue_1)$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-type-preservation} \end{pfproof} \end{pfproof} \step{6}{\case{\( \eapp{\tdyn}{(\ehopt{\sblist_0}{(\efun{\svar_1}{\sexpr_1})})}{\svalue_2} \\\nredAD \eprehist{\sblist_0}{(\esubst{\sexpr_1}{\svar_1}{\svalue_2})} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-type-substitution} \end{pfproof} \end{pfproof} \step{7}{\case{\( \eapp{\tdyn}{(\ehopt{\sblist_0}{(\emon{\obnd{\sowner_0}{(\tfun{\stype_1}{\stype_0})}{\sowner_1}}{\svalue_0})})}{\svalue_1} \\ \nredAD \eprehist{\sblist_0}{(\estab{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\eapp{{\stype_0}}{\svalue_0}{(\edynb{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \tfun{\stype_1}{\stype_0} } \\ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \tdyn } }{ \snil \sWTA \edynb{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1} : \stype_1 } }{ \snil \sWTA \eapp{{\stype_0}}{\svalue_0}{(\edynb{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})} : \stype_0 } }{ \snil \sWTA 
\eprehist{\sblist_0}{(\estab{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\eapp{{\stype_0}}{\svalue_0}{(\edynb{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\svalue_1})})})} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{8}{\case{\(\estab{\sbnd_0}{\svalue_0} \nredAD \svalue_1\)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-sta-type-preservation} \end{pfproof} \end{pfproof} \step{9}{\case{\( \eprehist{\sblist_0}{\svalue_0} \nredAD \faddtrace{\sblist_0}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-addtrace-type-preservation} \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[unique decomposition]\label{A-decomposition} If\/ $\snil \sWTA \sexpr_0 : \toptional$ then either: \begin{itemize} \item $\sexpr_0 \in \svalue$ \item $\sexpr_0 \eeq \ctx_0[\eapp{\toptional}{\svalue_0}{\svalue_1}]$ \item $\sexpr_0 \eeq \ctx_0[\eunopt{\stoptional}{\svalue_0}]$ \item $\sexpr_0 \eeq \ctx_0[\ebinopt{\stoptional}{\svalue_0}{\svalue_1}]$ \item $\sexpr_0 \eeq \ctx_0[\edynb{\sbnd_1}{\svalue_1}]$ \item $\sexpr_0 \eeq \ctx_0[\estab{\sbnd_1}{\svalue_1}]$ \item $\sexpr_0 \eeq \ctx_0[\eprehist{\sblist_1}{\svalue_1}]$ \item $\sexpr_0 \eeq \ctx_0[\eerr]$ \end{itemize} \end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\sexpr_0$.} \begin{lamportproof*} \shortproof \mainproof\leavevmode \shortproof \step{0}{\case{$\sexpr_0 \eeq \svar_0$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \sexpr_0 : \toptional$ \end{pfproof} \end{pfproof} \step{1}{\case{$\sexpr_0 \eeq \svalue_0$}} \begin{pfproof} \qedstep \end{pfproof} \step{2}{\case{$\sexpr_0 \eeq \epair{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \step{2.0}{\scase{$\sexpr_1 \not\in \svalue$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2.1}{\scase{$\sexpr_1 \in \svalue$ and $\sexpr_2 \not\in \svalue$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2.2}{\scase{$\sexpr_1 \in \svalue$ and $\sexpr_2 \in \svalue$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \in \svalue$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_0 \eeq \eapp{\toptional}{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{$\sexpr_0 \eeq \eunopt{\stoptional}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{$\sexpr_0 \eeq \ebinopt{\stoptional}{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{$\sexpr_0 \eeq \edynb{\sbnd_1}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{$\sexpr_0 \eeq \estab{\sbnd_1}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{$\sexpr_0 \eeq \eprehist{\sblist_1}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{\case{$\sexpr_0 \in \eerr$}} \begin{pfproof} \qedstep \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-typed-hole}\leavevmode If\/ $\snil \sWTA \ctx_0[\sexpr_0] : \toptional$ then one of the following holds: \begin{itemize} \item $\snil \sWTA \sexpr_0 : \tdyn$ \item $\exists\,\stype_0~.~\snil \sWTA \sexpr_0 : \stype_0$ \end{itemize} \end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\ctx_0$ and case analysis of $\sWTA$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{$\ctx_0 \eeq \ctxhole$}} \begin{pfproof} 
\qedstep \end{pfproof} \step{1}{\case{$\ctx_0 \eeq \epair{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \step{1.0}{$\snil \sWTA \ctx_1[\sexpr_0] : \toptional$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2}{\case{$\ctx_0 \eeq \epair{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{3}{\case{$\ctx_0 \eeq \eapp{\toptional}{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{$\ctx_0 \eeq \eapp{\toptional}{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{$\ctx_0 \eeq \eunopt{\stoptional}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{$\ctx_0 \eeq \ebinopt{\stoptional}{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{$\ctx_0 \eeq \ebinopt{\stoptional}{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{$\ctx_0 \eeq \edynb{\sbnd_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{\case{$\ctx_0 \eeq \estab{\sbnd_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{10}{\case{$\ctx_0 \eeq \eprehist{\sblist_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sWTA$ replacement]\label{A-type-replacement}\leavevmode \begin{itemize} \item If\/ $\snil \sWTA \ctx_0[\sexpr_0] : \toptional$ and the derivation contains a proof of\/ $\snil \sWTA \sexpr_0 : \stype_0$ and\/ $\snil \sWTA \sexpr_1 : \stype_0$ then\/ $\snil \sWTA \ctx_0[\sexpr_1] : \toptional$. \item If\/ $\snil \sWTA \ctx_0[\sexpr_0] : \toptional$ and the derivation contains a proof of\/ $\snil \sWTA \sexpr_0 : \tdyn$ and\/ $\snil \sWTA \sexpr_1 : \tdyn$ then\/ $\snil \sWTA \ctx_0[\sexpr_1] : \toptional$. 
\end{itemize} \end{lemma}{ \newcommand{\shortproof}{By induction on $\ctx_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{$\ctx_0 \eeq \ctxhole$}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{$\ctx_0 \eeq \epair{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2}{\case{$\ctx_0 \eeq \epair{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{3}{\case{$\ctx_0 \eeq \eapp{\toptional}{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{$\ctx_0 \eeq \eapp{\toptional}{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{$\ctx_0 \eeq \eunopt{\stoptional}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{$\ctx_0 \eeq \ebinopt{\stoptional}{\ctx_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{$\ctx_0 \eeq \ebinopt{\stoptional}{\svalue_1}{\ctx_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{$\ctx_0 \eeq \edynb{\sbnd_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{\case{$\ctx_0 \eeq \estab{\sbnd_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{10}{\case{$\ctx_0 \eeq \eprehist{\sblist_1}{\ctx_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-delta-type-progress}\leavevmode \begin{itemize} \item If\/ $\snil \sWTA \eunopt{\stype_1}{\svalue_0} : \stype_0$ then\/ $\eunopt{\stype_1}{\svalue_0} \nredAS \sexpr_1$. \item if\/ $\snil \sWTA \ebinopt{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$ then\/ $\ebinopt{\stype_1}{\svalue_0}{\svalue_1} \nredAS \sexpr_1$. \item If\/ $\snil \sWTA \eunopt{\tdyn}{\svalue_0} : \tdyn$ then\/ $\eunopt{\tdyn}{\svalue_0} \nredAD \sexpr_1$. \item if\/ $\snil \sWTA \ebinopt{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$ then\/ $\ebinopt{\tdyn}{\svalue_0}{\svalue_1} \nredAD \sexpr_1$. 
\end{itemize} \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\sdeltaA$, $\sWTA$, and $\nredAD$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\( \snil \sWTA \efst{\stype_0}{\svalue_0} \)}} \begin{pfproof} \step{0.0}{$\svalue_0 \in \epair{\svalue}{\svalue} \cup \emon{\obnd{\sowner}{\tpair{\stype}{\stype}}{\sowner}}{\svalue}$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \step{0.1}{\scase{$\svalue_0 \eeq \epair{\svalue_1}{\svalue_2}$}} \begin{pfproof} \qedstep \begin{pfproof} $\efst{\stype_0}{\svalue_0} \nredAS \svalue_1$ \end{pfproof} \end{pfproof} \step{0.2}{\scase{$\svalue_0 \eeq \emon{\obnd{\sowner_0}{\tpair{\stype_1}{\stype_2}}{\sowner_1}}{\svalue_1}$}} \begin{pfproof} \qedstep \begin{pfproof} $\efst{\stype_0}{\svalue_0} \nredAS \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{(\efst{\tdyn}{\svalue_0})}$ \end{pfproof} \end{pfproof} \end{pfproof} \step{1}{\case{\( \snil \sWTA \esnd{\stype_0}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} similar to the $\sfst$ case \end{pfproof} \end{pfproof} \step{2}{\case{\( \snil \sWTA \efst{\tdyn}{\svalue_0} \)}} \begin{pfproof} \step{2.0}{\scase{$\svalue_0 \eeq \epair{\svalue_1}{\svalue_2}$}} \begin{pfproof} \qedstep \begin{pfproof} $\efst{\tdyn}{\svalue_0} \nredAD \svalue_1$ \end{pfproof} \end{pfproof} \step{2.1}{\scase{$\svalue_0 \eeq \ehopt{\sblist_0}{(\emon{\obnd{\sowner_1}{\tpair{\stype_1}{\stype_2}}{\sowner_2}}{\svalue_1})}$}} \begin{pfproof} \qedstep \begin{pfproof} $\efst{\tdyn}{\svalue_0} \nredAD \eprehist{\sblist_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{(\efst{\stype_1}{\svalue_1})})}$ \end{pfproof} \end{pfproof} \step{2.2}{\scase{$\svalue_0 \not\in \epair{\svalue}{\svalue} \cup (\emon{\obnd{\sowner}{\tpair{\stype}{\stype}}{\sowner}}{\svalue})$}} \begin{pfproof} \qedstep \begin{pfproof} $\efst{\tdyn}{\svalue_0} \nredAD \tagerrorD$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{\( \snil \sWTA \esnd{\tdyn}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} similar to the $\sfst$ case \end{pfproof} \end{pfproof} \step{4}{\case{$\snil \sWTA \ebinopt{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$}} \begin{pfproof} \step{4.0}{$\svalue_0 \in \sint$ and $\svalue_1 \in \sint$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \qedstep \begin{pfproof} $\ebinopt{\stype_1}{\svalue_0}{\svalue_1} \nredAS \sdeltaA(\sbinop, \svalue_0, \svalue_1)$ \end{pfproof} \end{pfproof} \step{5}{\case{$\snil \sWTA \ebinopt{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$}} \begin{pfproof} \step{5.0}{\scase{$\svalue_0 \in \sint$ and $\svalue_1 \in \sint$}} \begin{pfproof} \qedstep \begin{pfproof} $\ebinopt{\tdyn}{\svalue_0}{\svalue_1} \nredAD \sdeltaA(\sbinop, \svalue_0, \svalue_1)$ \end{pfproof} \end{pfproof} \step{5.1}{\scase{$\svalue_0 \not\in \sint$ or $\svalue_1 \not\in \sint$}} \begin{pfproof} \qedstep \begin{pfproof} $\ebinopt{\tdyn}{\svalue_0}{\svalue_1} \nredAD \tagerrorD$ \end{pfproof} \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-delta-type-preservation}\leavevmode \begin{itemize} \item If\/ $\snil \sWTA \eunopt{\stype_1}{\svalue_0} : \stype_0$ and\/ $\eunopt{\stype_1}{\svalue_0} \nredAS \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \stype_0$. \item If\/ $\snil \sWTA \ebinopt{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$ and\/ $\ebinopt{\stype_1}{\svalue_0}{\svalue_1} \nredAS \sexpr_2$ then\/ $\snil \sWTA \sexpr_2 : \stype_0$. \item If\/ $\snil \sWTA \eunopt{\tdyn}{\svalue_0} : \tdyn$ and\/ $\eunopt{\tdyn}{\svalue_0} \nredAD \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \tdyn$. 
\item If\/ $\snil \sWTA \ebinopt{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$ and\/ $ \ebinopt{\tdyn}{\svalue_0}{\svalue_1} \nredAD \sexpr_2$ then\/ $\snil \sWTA \sexpr_2 : \tdyn$. \end{itemize} \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\sdeltaA$ and $\sWTA$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\( \snil \sWTA \efst{\stype_0}{\svalue_0} : \stype_0 \)}} \begin{pfproof} \step{0.0}{\scase{$\efst{\stype_0}{\epair{\svalue_1}{\svalue_2}} \nredAS \svalue_1$}} \begin{pfproof} \qedstep \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \end{pfproof} \step{0.1}{\scase{\( \efst{\stype_0}{(\emon{\obnd{\sowner_0}{\tpair{\stype_1}{\stype_2}}{\sowner_1}}{\svalue_1})} \\\nredAS \edynb{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{(\efst{\tdyn}{\svalue_1})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \tdyn } }{ \snil \sWTA \efst{\tdyn}{\svalue_1} : \tdyn } }{ \snil \sWTA \edynb{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{(\efst{\tdyn}{\svalue_1})} : \stype_1 } \\ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \stype_1 \subteq \stype_0 } }{ \snil \sWTA \edynb{\obnd{\sowner_0}{\stype_1}{\sowner_1}}{(\efst{\tdyn}{\svalue_1})} : \stype_0 } \end{mathpar} \end{pfproof} \end{pfproof} \end{pfproof} \step{1}{\case{$\snil \sWTA \esnd{\stype_0}{\svalue_0} : \stype_0$}} \begin{pfproof} \qedstep \begin{pfproof} similar to $\sfst$ \end{pfproof} \end{pfproof} \step{2}{\case{$\snil \sWTA \esum{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$}} \begin{pfproof} \step{2.0}{$\esum{\stype_1}{\svalue_0}{\svalue_1} \nredAS \sdeltaA(\ssum, \svalue_0, \svalue_1)$} \step{2.1}{$\stype_0 \in \tint \cup \tnat$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{2.2}{\scase{$\stype_0 \eeq \tint$}} \begin{pfproof} \step{2.2.0}{$\snil \sWTA \svalue_0 : \tint$ and $\snil \sWTA \svalue_1 : \tint$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{2.2.1}{$ \svalue_0 \in \sint$ and $\svalue_1 \in \sint$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \sdeltaA(\sbinop, \svalue_0, \svalue_1) : \tint$ \end{pfproof} \end{pfproof} \step{2.3}{\scase{$\stype_0 \eeq \tnat$}} \begin{pfproof} \step{2.3.0}{$\snil \sWTA \svalue_0 : \tnat$ and $\snil \sWTA \svalue_1 : \tnat$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{2.3.1}{$ \svalue_0 \in \snat$ and $\svalue_1 \in \snat$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \sdeltaA(\sbinop, \svalue_0, \svalue_1) : \tnat$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\snil \sWTA \equotient{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$}} \begin{pfproof} \step{3.0}{$\equotient{\stype_1}{\svalue_0}{\svalue_1} \nredAS \sdeltaA(\squotient, \svalue_0, \svalue_1)$} \step{3.1}{$\stype_0 \in \tint \cup \tnat$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{3.2}{\scase{$\stype_0 \eeq \tint$}} \begin{pfproof} \step{3.2.0}{$\snil \sWTA \svalue_0 : \tint$ and $\snil \sWTA \svalue_1 : \tint$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{3.2.1}{$ \svalue_0 \in \sint$ and $\svalue_1 \in \sint$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \qedstep \begin{pfproof} $\sdeltaA(\sbinop, \svalue_0, \svalue_1) \in \sint \cup \divisionbyzeroerror$ \end{pfproof} \end{pfproof} \step{3.3}{\scase{$\stype_0 \eeq \tnat$}} \begin{pfproof} \step{3.3.0}{$\snil \sWTA \svalue_0 : \tnat$ and $\snil \sWTA \svalue_1 : \tnat$} \begin{pfproof} 
by inversion $\sWTA$ \end{pfproof} \step{3.3.1}{$ \svalue_0 \in \snat$ and $\svalue_1 \in \snat$} \begin{pfproof} by $\sWTA$ canonical forms \end{pfproof} \qedstep \begin{pfproof} $\sdeltaA(\sbinop, \svalue_0, \svalue_1) \in \snat \cup \divisionbyzeroerror$ \end{pfproof} \end{pfproof} \end{pfproof} \step{4}{\case{$\snil \sWTA \efst{\tdyn}{\svalue_0} : \tdyn$}} \begin{pfproof} \step{4.0}{\scase{\( \svalue_0 \eeq \ehopt{\sblist_0}{\epair{\svalue_1}{\svalue_2}} \)\\and \( \efst{\tdyn}{\svalue_0} \nredAD \faddtrace{\sblist_0}{\svalue_1} \)}} \begin{pfproof} \step{4.0.0}{$\snil \sWTA \svalue_1 : \tdyn$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-addtrace-type-preservation} \end{pfproof} \end{pfproof} \step{4.1}{\scase{\( \svalue_0 \eeq \ehopt{\sblist_0}{(\emon{\obnd{\sowner_1}{\tpair{\stype_1}{\stype_2}}{\sowner_2}}{\svalue_1})} \)\\and \( \efst{\tdyn}{\svalue_0} \nredAD \eprehist{\sblist_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{(\efst{\stype_1}{\svalue_1})})} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_1 : \tdyn } }{ \snil \sWTA \efst{\stype_1}{\svalue_1} : \tdyn } }{ \snil \sWTA \estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{(\efst{\stype_1}{\svalue_1})} : \tdyn } }{ \snil \sWTA \eprehist{\sblist_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{(\efst{\stype_1}{\svalue_1})})} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{4.2}{\scase{\( \svalue_0 \not\in (\ehopt{\sblist}{\epair{\svalue}{\svalue}}) \cup (\ehopt{\sblist}{(\emon{\sbnd}{\svalue})}) \)\\and \( \efst{\tdyn}{\svalue_0} \nredAD \tagerrorD \)}} \begin{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \tagerrorD : \tdyn$ \end{pfproof} \end{pfproof} \end{pfproof} \step{5}{\case{$\snil \sWTA \esnd{\tdyn}{\svalue_0} : \tdyn$}} \begin{pfproof} \qedstep \begin{pfproof} similar to $\sfst$ \end{pfproof} \end{pfproof} \step{6}{\case{$\snil \sWTA \esum{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$}} \begin{pfproof} \step{6.0}{$\esum{\tdyn}{\svalue_0}{\svalue_1} \nredAD \sdeltaA(\sbinop, \svalue_0, \svalue_1)$} \step{6.1}{$\sdeltaA(\sbinop, \svalue_0, \svalue_1) \in \sint$} \begin{pfproof} by definition $\sdeltaA$ \end{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \sdeltaA(\sbinop, \svalue_0, \svalue_1) : \tdyn$ \end{pfproof} \end{pfproof} \step{7}{\case{$\snil \sWTA \equotient{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$}} \begin{pfproof} \step{7.0}{$\equotient{\tdyn}{\svalue_0}{\svalue_1} \nredAD \sdeltaA(\sbinop, \svalue_0, \svalue_1)$} \step{7.1}{$\sdeltaA(\sbinop, \svalue_0, \svalue_1) \in \sint \cup \divisionbyzeroerror$} \begin{pfproof} by definition $\sdeltaA$ \end{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \sdeltaA(\sbinop, \svalue_0, \svalue_1) : \tdyn$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-dyn-type-progress} If\/ $\snil \sWTA \edynb{\sbnd_0}{\svalue_0} : \stype_0$ and\/ $\sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1}$ then\/ $\exists\,\sexpr_1$ such that\/ $\edynb{\sbnd_0}{\svalue_0} \nredAS \sexpr_1$. 
\end{lemma}{ \newcommand{\shortproof}{By case analysis of $\fshallow{\tagof{\stype_0}}{\svalue_0}$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{$\fshallow{\tagof{\stype_0}}{\ehopt{\sblist_2}{(\efun{\svar_1}{\sexpr_1})}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \emon{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{1}{\case{$\fshallow{\tagof{\stype_0}}{\efun{\tann{\svar_1}{\stype_1}}{\sexpr_1}}$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \edynb{\sbnd_0}{\svalue_0} : \stype_0$ \end{pfproof} \end{pfproof} \step{2}{\case{$\fshallow{\tagof{\stype_0}}{\ehopt{\sblist_2}{(\emon{\sbnd_1}{\svalue_1})}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \emon{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{3}{\case{$\fshallow{\tagof{(\tpair{\stype_1}{\stype_2})}}{\ehopt{\sblist_2}{\epair{\svalue_1}{\svalue_2}}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \emon{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{4}{\case{$\fshallow{\tagof{\tint}}{\ehopt{\sblist_1}{\svalue_1}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \svalue_1$ \end{pfproof} \end{pfproof} \step{5}{\case{$\fshallow{\tagof{\tnat}}{\ehopt{\sblist_1}{\svalue_1}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \svalue_1$ \end{pfproof} \end{pfproof} \step{6}{\case{$\neg\fshallow{\tagof{\stype_0}}{\svalue_0}$}} \begin{pfproof} \qedstep \begin{pfproof} $\edynb{\sbnd_0}{\svalue_0} \nredAS \boundaryerror{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-sta-type-progress} If\/ $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ and\/ $\sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1}$ then\/ $\exists\,\sexpr_1$ such that\/ $\estab{\sbnd_0}{\svalue_0} \nredAD \sexpr_1$. 
\end{lemma}{ \newcommand{\shortproof}{By case analysis on $\svalue_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{$\svalue_0 \in \ehopt{\sblist_2}{(\efun{\svar}{\sexpr})}$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ \end{pfproof} \end{pfproof} \step{1}{\case{$\svalue_0 \in \efun{\tann{\svar}{\stype}}{\sexpr}$}} \begin{pfproof} \qedstep \begin{pfproof} $\estab{\sbnd_0}{\svalue_0} \nredAD \emon{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{2}{\case{$\svalue_0 \eeq \emon{\sbnd_1}{\svalue_1}$}} \begin{pfproof} \step{2.0}{\scase{\( \svalue_1 \eeq \ehopt{\sblist_2}{\svalue_2} \)\\and \( \svalue_2 \in (\efun{\svar}{\sexpr}) \cup \epair{\svalue}{\svalue} \)}} \begin{pfproof} \qedstep \begin{pfproof} $\estab{\sbnd_0}{\svalue_0} \nredAD \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{2.1}{\scase{\( \svalue_1 \eeq \ehopt{\sblist_2}{(\emon{\sbnd_3}{\svalue_2})} \)\\and \( \svalue_2 \in (\efun{\tann{\svar}{\stype}}{\svalue}) \cup \epair{\svalue}{\svalue} \)}} \begin{pfproof} $\estab{\sbnd_0}{\svalue_0} \nredAD \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{(\emon{\sbnd_3}{\svalue_2})}$ \end{pfproof} \end{pfproof} \step{3}{\case{$\svalue_0 \eeq \epair{\svalue_1}{\svalue_2}$}} \begin{pfproof} \step{3.0}{$\stype_0 \eeq \tpair{\stype_1}{\stype_2}$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \qedstep \begin{pfproof} $\estab{\sbnd_0}{\svalue_0} \nredAD \emon{\sbnd_0}{\svalue_0}$ \end{pfproof} \end{pfproof} \step{4}{\case{$\svalue_0 \in \sint$}} \begin{pfproof} \qedstep \begin{pfproof} $\estab{\sbnd_0}{\svalue_0} \nredAD \svalue_0$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-dyn-type-preservation}\leavevmode If\/ $\snil \sWTA \edynb{\sbnd_0}{\svalue_0} : \stype_0$ and\/ $\sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1}$ and\/ $\edynb{\sbnd_0}{\svalue_0} \nredAS \sexpr_1$ then\/ $\snil \sWTA \sexpr_1 : \stype_0$. \end{lemma}{ \newcommand{\shortproof}{By case analysis of $\nredAS$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{\( \edynb{\sbnd_0}{\svalue_0} \nredAS \emon{\sbnd_0}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \tdyn } }{ \snil \sWTA \emon{\sbnd_0}{\svalue_0} : \stype_0 } \end{mathpar} \end{pfproof} \end{pfproof} \step{2}{\case{\( \edynb{\sbnd_0}{\ehopt{\sblist_1}{\sint_0}} \nredAS \sint_0 \)}} \begin{pfproof} \qedstep \begin{pfproof} by case analysis of $\fshallow{\tagof{\stype_0}}{\sint_0}$ \end{pfproof} \end{pfproof} \step{3}{\case{\( \edynb{\sbnd_0}{\svalue_0} \nredAS \boundaryerror{\sbnd_0}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \boundaryerror{\sbnd_0}{\svalue_0} : \stype_0$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-sta-type-preservation} If\/ $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ and\/ $\sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1}$ and\/ $\estab{\sbnd_0}{\svalue_0} \nredAD \sexpr_1$ then\/ $\snil \sWTA \sexpr_1$. 
\end{lemma}{ \newcommand{\shortproof}{By case analysis of $\nredAD$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\case{\( \svalue_0 \in (\efun{\tann{\svar}{\stype}}{\sexpr}) \cup (\epair{\svalue}{\svalue}) \) \\ and \( \estab{\sbnd_0}{\svalue_0} \nredAD \emon{\sbnd_0}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \stype_0 } }{ \snil \sWTA \emon{\sbnd_0}{\svalue_0} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{1}{\case{\( \svalue_0 \in (\efun{\svar}{\sexpr}) \cup (\epair{\svalue}{\svalue}) \) \\ and \( \estab{\sbnd_0}{(\emon{\sbnd_1}{(\ehopt{\sblist_2}{\svalue_0})})} \nredAD \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\svalue_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \tdyn } }{ \snil \sWTA \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\svalue_0} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{2}{\case{\( \svalue_0 \in (\efun{\tann{\svar}{\stype}}{\sexpr}) \cup (\epair{\svalue}{\svalue}) \) \\ and \( \estab{\sbnd_0}{(\emon{\sbnd_1}{(\ehopt{\sblist_2}{(\emon{\sbnd_3}{\svalue_0})})})} \nredAD \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{(\emon{\sbnd_3}{\svalue_0})} \)\\ and \( \sbnd_3 \eeq \obnd{\sowner_4}{\stype_3}{\sowner_5} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWTA$} }{ \snil \sWTA \svalue_0 : \stype_3 } }{ \snil \sWTA \emon{\sbnd_3}{\svalue_0} : \tdyn } }{ \snil \sWTA \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{(\emon{\sbnd_3}{\svalue_0})} : \tdyn } \end{mathpar} \end{pfproof} \end{pfproof} \step{3}{\case{\( \estab{\sbnd_0}{\sint_0} \nredAD \sint_0 \)}} \begin{pfproof} \qedstep \begin{pfproof} $\snil \sWTA \sint_0 : \tdyn$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-addtrace-type-preservation} If\/ $\snil \sWTA \eprehist{\sblist_0}{\svalue_0} : \tdyn$ then\/ $\snil \sWTA \faddtrace{\sblist_0}{\svalue_0} : \tdyn$. 
\end{lemma}{ \newcommand{\shortpf}{By case analysis of $\saddtrace$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\(\faddtrace{\snil}{\svalue_0} \feq \svalue_0\)}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{\( \faddtrace{\sblist_0}{\obbars{\ehist{\sblist_1}{\svalue_1}}{\sownerlist_2}} \feq \ehist{\fconcat{\sblist_0}{\sblist_1}}{\obbars{\svalue_1}{\sownerlist_2}} \)}} \begin{pfproof} \qedstep \begin{pfproof} by $\snil \sWTA \svalue_1 : \tdyn$ \end{pfproof} \end{pfproof} \step{2}{\case{\( \faddtrace{\sblist_0}{\svalue_1} \feq \ehist{\sblist_0}{\svalue_1} \)}} \begin{pfproof} \qedstep \begin{pfproof} by $\snil \sWTA \svalue_1 : \tdyn$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-type-substitution}\leavevmode \begin{itemize} \item If\/ $\fcons{\tann{\svar_0}{\stype_0}}{\stypeenv_0} \sWTA \sexpr_1 : \toptional$ and\/ $\snil \sWTA \svalue_0 : \stype_0$ then\/ $\stypeenv_0 \sWTA \esubst{\sexpr_1}{\svar_0}{\svalue_0} : \toptional$ \item If\/ $\fcons{\tann{\svar_0}{\tdyn}}{\stypeenv_0} \sWTA \sexpr_1 : \toptional$ and\/ $\snil \sWTA \svalue_0 : \tdyn$ then\/ $\stypeenv_0 \sWTA \esubst{\sexpr_1}{\svar_0}{\svalue_0} : \toptional$ \end{itemize} \end{lemma}{ \newcommand{\shortpf}{By induction on $\sexpr_1$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{$\sexpr_1 \eeq \svar_2$}} \begin{pfproof} \step{0.0}{\scase{$\svar_0 \eeq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} {$\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \svalue_0$} \end{pfproof} \end{pfproof} \step{0.1}{\scase{$\svar_0 \neq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} {$\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$} \end{pfproof} \end{pfproof} \end{pfproof} \step{1}{\case{$\sexpr_1 \eeq \sint_0$}} \begin{pfproof} \qedstep \begin{pfproof} {$\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$} \end{pfproof} \end{pfproof} \step{2}{\case{$\sexpr_1 \eeq \efun{\svar_2}{\sexpr_2}$}} \begin{pfproof} \step{2.0}{\scase{$ \svar_0 \eeq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2.1}{\scase{$\svar_0 \neq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} {$\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$} \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_1 \eeq \efun{\tann{\svar_2}{\stype_2}}{\sexpr_2}$}} \begin{pfproof} \step{3.0}{\scase{$ \svar_0 \eeq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{3.1}{\scase{$\svar_0 \neq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} {$\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$} \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_1 \eeq \epair{\sexpr_2}{\sexpr_3}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{$\sexpr_1 \eeq \eapp{\toptional}{\sexpr_2}{\sexpr_3}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{$\sexpr_1 \eeq \eunopt{\stoptional}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{$\sexpr_1 \eeq \ebinopt{\stoptional}{\sexpr_2}{\sexpr_3}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{$\sexpr_1 \eeq \edynb{\sbnd_2}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{$\sexpr_1 \eeq \estab{\sbnd_2}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{\case{$\sexpr_1 
\eeq \ehist{\sblist_2}{\svalue_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{10}{\case{$\sexpr_1 \eeq \eprehist{\sblist_2}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} %\begin{lemma}[inversion]\label{$\sWTA$ inversion}\leavevmode %\end{lemma} %\begin{lemma}[canonical forms]\label{$\sWTA$ canonical forms}\leavevmode %\end{lemma} \begin{lemma}[$\sWLA$-progress]\label{A-label-progress} If\/ $\snil \sWTA \sexpr_0 : \toptional$ and\/ $\snil; \ownertop \sWLA \sexpr_0$ then one of the following holds: \begin{itemize} \item $\sexpr_0 \in \obbars{\svalue}{\sownerlist}$ \item $\sexpr_0 \in \ctx\ctxbars{\eerr}{\sowner}$ \item $\exists\,\sexpr_1$ such that\/ $\sexpr_0 \credA \sexpr_1$ \end{itemize} \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\sexpr_0$.} \begin{lamportproof*} \shortpf \mainproof \shortpf By \lemmaref{A-label-decomposition}, it suffices to consider the following cases. \step{0}{\case{$\sexpr_0 \in \obbars{\svalue}{\sownerlist}$}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{$\sexpr_0 \in \ctx\ctxbars{\eerr}{\sowner}$}} \begin{pfproof} \qedstep \end{pfproof} \step{2}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\eapp{{\stype_0}}{\obbars{\svalue_0}{\sownerlist_0}}{\svalue_1}}{\sowner_1}$}} \begin{pfproof} \step{2.0}{$\svalue_0 \in (\efun{\tann{\svar}{\stype}}{\sexpr}) \cup (\emon{\sbnd}{\svalue})$} \begin{pfproof} by $\sWTA$ inversion and canonical forms \end{pfproof} \step{2.1}{\scase{\( \svalue_0 \eeq \efun{\tann{\svar_2}{\stype_2}}{\sexpr_2} \)}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAS \ctx\ctxbars{\obbars{\esubst{\sexpr_2}{\svar_2}{\obars{\svalue_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}}}{\sownerlist_0}}{\sowner_1}$ \end{pfproof} \end{pfproof} \step{2.2}{\scase{\( \svalue_0 \eeq \emon{\obnd{\sowner_2}{(\tfun{\stype_2}{\stype_3})}{\sowner_3}}{\obars{\svalue_2}{\sowner_4}} \)}} \begin{pfproof} \step{2.2.0}{\pflet{\( \sbnd_3 \sassign \obnd{\sowner_2}{\stype_3}{\sowner_3} \) \\ and \( \sbnd_4 \sassign \obnd{\sowner_3}{\stype_2}{\sowner_2} \)}} \qedstep \begin{pfproof} \(\sexpr_0 \nredAS \ctx\ctxbars{\obbars{\edynb{\sbnd_3}{\obars{\eapp{\tdyn}{\svalue_2}{(\estab{\sbnd_4}{\obbars{\svalue_1}{\fconcat{\sowner_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}}})}}{\sowner_4}}}{\sownerlist_0}}{\sowner_1} \) \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\eapp{\tdyn}{\obbars{\svalue_0}{\sownerlist_0}}{\svalue_1}}{\sowner_1}$}} \begin{pfproof} \step{3.0}{\scase{$\svalue_0 \eeq \ehopt{\sblist_2}{\obbars{\efun{\svar_2}{\sexpr_2}}{\sownerlist_3}}$}} \begin{pfproof} \step{3.0.0}{\pflet{\( \svalue_2 \sassign \faddtrace{\frev{\sblist_2}}{\obbars{\svalue_1}{\fconcat{\sowner_1}{\fconcat{\frev{\sownerlist_0}}{\frev{\sownerlist_3}}}}} \)}} \qedstep \begin{pfproof} \(\sexpr_0 \nredAD \ctx\ctxbars{\obars{\eprehist{\sblist_2}{\obbars{\esubst{\sexpr_2}{\svar_2}{\svalue_1}}{\sownerlist_3}}}{\sownerlist_0}}{\sowner_1} \) \end{pfproof} \end{pfproof} \step{3.1}{\scase{\( \svalue_0 \eeq \ehopt{\sblist_2}{\obbars{\emon{\obnd{\sowner_3}{\obars{\tfun{\stype_2}{\stype_3}}{\sowner_4}}{\sowner_4}}{\obars{\svalue_2}{\sowner_5}}}{\sownerlist_6}} \)}} \begin{pfproof} \step{3.1.0}{\pflet{\( \sbnd_7 \sassign \obnd{\sowner_3}{\stype_3}{\sowner_4} \) \\and \( \sbnd_8 \sassign \obnd{\sowner_4}{\stype_2}{\sowner_3} \) \\and \( {\stype_4} \sassign \fforget{\stype_3} \)}} \qedstep \begin{pfproof} \(\sexpr_0 \nredAD 
\ctx\ctxbars{\obbars{\eprehist{\sblist_2}{\obbars{\estab{\sbnd_7}{\obars{\eapp{{\stype_4}}{\svalue_2}{(\edynb{\sbnd_8}{\obbars{\svalue_2}{\flast{\sownerlist_6}}})}}{\sowner_3}}}{\sownerlist_6}}}{\sownerlist_0}}{\sowner_1} \) \end{pfproof} \end{pfproof} \step{3.2}{\scase{$\svalue_0 \not\in (\efun{\svar}{\sexpr}) \cup (\emon{\sbnd}{\svalue})$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \nredAD \ctx\ctxbars{\tagerrorD}{\sowner_0}$ \end{pfproof} \end{pfproof} \end{pfproof} \step{4}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\eunopt{\stoptional}{\obbars{\svalue_0}{\sownerlist_0}}}{\sowner_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-delta-label-progress} \end{pfproof} \end{pfproof} \step{5}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\ebinopt{\stoptional}{\obbars{\svalue_0}{\sownerlist_0}}{\obbars{\svalue_1}{\sownerlist_1}}}{\sowner_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-delta-label-progress} \end{pfproof} \end{pfproof} \step{6}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\edynb{\sbnd_0}{\obbars{\svalue_1}{\sownerlist_1}}}{\sowner_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-dyn-label-progress} \end{pfproof} \end{pfproof} \step{7}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\estab{\sbnd_0}{\obbars{\svalue_1}{\sowner_1}}}{\sowner_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-typed-hole} and \lemmaref{A-sta-label-progress} \end{pfproof} \end{pfproof} \step{8}{\case{$\sexpr_0 \eeq \ctx\ctxbars{\eprehist{\sblist_0}{\svalue_0}}{\sowner_2}$}} \begin{pfproof} \qedstep \begin{pfproof} \(\sexpr_0 \nredAD \ctx\ctxbars{\faddtrace{\sblist_0}{\svalue_0}}{\sowner_2}\) \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sWLA$-preservation]\label{A-label-preservation} If\/ $\snil \sWTA \sexpr_0 : \toptional$ and\/ $\snil; \ownertop \sWLA \sexpr_0$ and\/ $\sexpr_0 \credA \sexpr_1$ then\/ $\snil; \ownertop \sWLA \sexpr_1$ \end{lemma}{ \newcommand{\shortpf}{By \lemmaref{A-S-label-preservation} and \lemmaref{A-D-label-preservation}.} \begin{lamportproof*} \shortpf \mainproof \shortpf \end{lamportproof*}} \begin{lemma}\label{A-S-label-preservation} If\/ $\snil \sWTA \sexpr_0 : \stype_0$ and\/ $\snil; \sowner_0 \sWLA \sexpr_0$ and\/ $\sexpr_0 \nredAS \sexpr_1$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$ \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\nredAS$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\( \sdeltaA(\sunop, \svalue_0) \mbox{ is defined} \)\\and \( \obars{\eunopt{\stype_1}{\svalue_0}}{\sowner_0} \nredAS \obars{\sdeltaA(\sunop, \svalue_0)}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-label-preservation} \end{pfproof} \end{pfproof} \step{1}{\case{\( \sdeltaA(\sbinop, {\svalue_0}, {\svalue_1}) \mbox{ is defined} \)\\and \( \obars{\ebinopt{\stype_1}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAS \obars{\sdeltaA(\sbinop, {\svalue_0}, {\svalue_1})}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-label-preservation} \end{pfproof} \end{pfproof} \step{2}{\case{\( \obars{\eapp{{\stype_0}}{\obbars{\efun{\tann{\svar_0}{{\stype_1}}}{\sexpr_0}}{\sownerlist_0}}{\svalue_1}}{\sowner_1} \nredAS \obars{\esubst{\sexpr_0}{\svar_0}{\obbars{\svalue_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}}}{\fconcat{\sownerlist_0}{\sowner_1}} \)}} \begin{pfproof} \step{2.0}{$\sownerlist_0 \eeq \sowner_1 \cdots \sowner_1$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{2.1}{$\snil; \sowner_0 \sWLA 
\obbars{\svalue_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}$} \begin{pfproof} by \stepref{2.0} and inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-label-substitution} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by \lemmaref{A-label-substitution}} }{ \snil; \sowner_1 \sWLA \esubst{\sexpr_0}{\svar_0}{\obbars{\svalue_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}} } }{ \snil; \sowner_1 \sWLA \obars{\esubst{\sexpr_0}{\svar_0}{\obbars{\svalue_1}{\fconcat{\sowner_1}{\frev{\sownerlist_0}}}}}{\fconcat{\sownerlist_0}{\sowner_1}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{3}{\case{\( \obars{\eapp{{\stype_0}}{\obbars{\emon{\obnd{\sowner_0}{\obars{\tfun{\stype_1}{\stype_2}}{\sowner_1}}{\sowner_1}}{\obars{\svalue_0}{\sowner_2}}}{\sownerlist_3}}{\svalue_1}}{\sowner_4} \\\nredAS \obars{\edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\obars{\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}})}}{\sowner_2}}}{\fconcat{\sownerlist_3}{\sowner_4}} \)}} \begin{pfproof} \step{3.0}{\( \sowner_1 \eeq \sowner_2 \)\\ and \( \sownerlist_3 \eeq \sowner_4 \cdots \sowner_4 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_1 \sWLA \svalue_0 } \\ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_0 \sWLA \svalue_1 } }{ \snil; \sowner_0 \sWLA \obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}} } }{ \snil; \sowner_1 \sWLA \estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}} } }{ \snil; \sowner_1 \sWLA \eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}})} } }{ \snil; \sowner_1 \sWLA \obars{\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}})}}{\sowner_2} } }{ \snil; \sowner_0 \sWLA \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\obars{\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}})}}{\sowner_2}} } }{ \snil; \sowner_0 \sWLA \obars{\edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\obars{\eapp{\tdyn}{\svalue_0}{(\estab{\obnd{\sowner_1}{\stype_1}{\sowner_0}}{\obbars{\svalue_1}{\fconcat{\sowner_4}{\frev{\sownerlist_3}}}})}}{\sowner_2}}}{\fconcat{\sownerlist_3}{\sowner_4}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{4}{\case{\( \obars{\edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\obbars{\svalue_0}{\sownerlist_2}}}{\sowner_3} \nredAS \sexpr_2 \)}} \begin{pfproof} by \lemmaref{A-dyn-label-preservation} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-D-label-preservation} If\/ $\snil \sWTA \sexpr_0 : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \sexpr_0$ and\/ $\sexpr_0 \nredAD \sexpr_1$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$ \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\nredAD$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\( \svalue_0 \not\in \emon{\obnd{\sowner}{\tpair{\stype}{\stype}}{\sowner}}{\svalue} \)\\and \( \sdeltaA(\sunop, {\svalue_0}) \mbox{ is not defined} \)\\and \( \obars{\eunopt{\tdyn}{\svalue_0}}{\sowner_0} \nredAD \obars{\tagerrorD}{\sowner_0} \)}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{\( \sdeltaA(\sunop, {\svalue_0}) \mbox{ is defined} \)\\and \( 
\obars{\eunopt{\tdyn}{\svalue_0}}{\sowner_0} \nredAD \obars{\sdeltaA(\sunop, {\svalue_0})}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-label-preservation} \end{pfproof} \end{pfproof} \step{2}{\case{\( \obars{\efst{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\emon{\obnd{\sowner_1}{\obars{\tpair{\stype_0}{\stype_1}}{\sowner_2}}{\sowner_2}}{\obars{\svalue_1}{\sowner_3}}}{\sownerlist_4}}}{\sownerlist_5}}}{\sowner_6} \nredAD \\\obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{{\stype_0}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_6}} \)}} \begin{pfproof} \step{2.0}{\( \sownerlist_5 \eeq \sowner_6 \cdots \sowner_6 \)\\ and \( \fbndeqowners{\sblist_0}{\sownerlist_4} \)\\and \( \flast{\sownerlist_4} \eeq \sowner_1 \)\\and \( \sowner_2 \eeq \sowner_3 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_2 \sWLA \svalue_1 } }{ \snil; \sowner_2 \sWLA \efst{{\stype_0}}{\svalue_1} } }{ \snil; \sowner_2 \sWLA \obars{\efst{{\stype_0}}{\svalue_1}}{\sowner_3} } }{ \snil; \flast{\sownerlist_4} \sWLA \estab{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{{\stype_0}}{\svalue_1}}{\sowner_3}} } }{ \snil; \sowner_6 \sWLA \eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{{\stype_0}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}} } }{ \snil; \sowner_6 \sWLA \obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{{\stype_0}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_6}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{3}{\case{\( \obars{\esnd{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\emon{\obnd{\sowner_1}{\obars{\tpair{\stype_0}{\stype_1}}{\sowner_2}}{\sowner_2}}{\obars{\svalue_1}{\sowner_3}}}{\sownerlist_4}}}{\sownerlist_5}}}{\sowner_6} \nredAD \\\obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\esnd{{\stype_1}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_6}} \)}} \begin{pfproof} \qedstep \begin{pfproof} similar to $\sfst$ \end{pfproof} \end{pfproof} \step{4}{\case{\( \sdeltaA(\sbinop, {\svalue_0}, {\svalue_1}) \mbox{ is not defined} \)\\and \( \obars{\ebinopt{\tdyn}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAD \obars{\tagerrorD}{\sowner_0} \)}} \begin{pfproof} \qedstep \end{pfproof} \step{5}{\case{\( \sdeltaA(\sbinop, {\svalue_0}, {\svalue_1}) \mbox{ is defined} \)\\and \( \obars{\ebinopt{\tdyn}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAD \obars{\sdeltaA(\sbinop, {\svalue_0}, {\svalue_1})}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-delta-label-preservation} \end{pfproof} \end{pfproof} \step{6}{\case{\( \obars{\eapp{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\efun{\svar_0}{\sexpr_0}}{\sownerlist_1}}}{\sownerlist_2}}{\svalue_1}}{\sowner_3} \nredAD \\\obars{\eprehist{\sblist_0}{\obbars{\esubst{\sexpr_0}{\svar_0}{\faddtrace{\frev{\sblist_0}}{\obbars{\svalue_1}{\fconcat{\sowner_3}{\fconcat{\frev{\sownerlist_2}}{\frev{\sownerlist_1}}}}}}}{\sownerlist_1}}}{\fconcat{\sownerlist_2}{\sowner_3}} \)}} \begin{pfproof} \step{6.0}{\( \sownerlist_2 \eeq \sowner_3 \cdots \sowner_3 \)\\and \( \fbndeqowners{\sblist_0}{\sownerlist_1} \)\\and \( \snil; \flast{\sownerlist_1} \sWLA \efun{\svar_0}{\sexpr_0} \)} \step{6.1}{\( \snil; \flast{\sownerlist_1} \sWLA 
\faddtrace{\frev{\sblist_0}}{\obbars{\svalue_1}{\fconcat{\sowner_3}{\fconcat{\frev{\sownerlist_2}}{\frev{\sownerlist_1}}}}} \)} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by \lemmaref{A-label-substitution}} }{ \snil; \flast{\sownerlist_1} \sWLA \esubst{\sexpr_0}{\svar_0}{\faddtrace{\frev{\sblist_0}}{\obbars{\svalue_1}{\fconcat{\sowner_3}{\fconcat{\frev{\sownerlist_2}}{\frev{\sownerlist_1}}}}}} } }{ \snil; \sowner_3 \sWLA \eprehist{\sblist_0}{\obbars{\esubst{\sexpr_0}{\svar_0}{\faddtrace{\frev{\sblist_0}}{\obbars{\svalue_1}{\fconcat{\sowner_3}{\fconcat{\frev{\sownerlist_2}}{\frev{\sownerlist_1}}}}}}}{\sownerlist_1}} } }{ \snil; \sowner_3 \sWLA \obars{\eprehist{\sblist_0}{\obbars{\esubst{\sexpr_0}{\svar_0}{\faddtrace{\frev{\sblist_0}}{\obbars{\svalue_1}{\fconcat{\sowner_3}{\fconcat{\frev{\sownerlist_2}}{\frev{\sownerlist_1}}}}}}}{\sownerlist_1}}}{\fconcat{\sownerlist_2}{\sowner_3}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{7}{\case{\( \obars{\eapp{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\emon{\obnd{\sowner_1}{\obars{\tfun{\stype_0}{\stype_1}}{\sowner_2}}{\sowner_2}}{\obars{\svalue_0}{\sowner_3}}}{\sownerlist_4}}}{\sownerlist_5}}{\svalue_1}}{\sowner_6} \nredAD \\\obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\eapp{{\stype_1}}{\svalue_0}{(\edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}})}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_6}} \)}} \begin{pfproof} \step{7.0}{\( \sownerlist_5 \eeq \sowner_6 \cdots \sowner_6 \)\\and \( \fbndeqowners{\sblist_0}{\sownerlist_4} \)\\and \( \sowner_1 \eeq \flast{\sownerlist_4} \)\\and \( \sowner_2 \eeq \sowner_3 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_2 \sWLA \svalue_0 } \\ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_1 \sWLA \svalue_2 } }{ \snil; \sowner_2 \sWLA \edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}} } }{ \snil; \sowner_2 \sWLA \eapp{{\stype_1}}{\svalue_0}{(\edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}})} } }{ \snil; \flast{\sownerlist_4} \sWLA \estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\eapp{{\stype_1}}{\svalue_0}{(\edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}})}}{\sowner_3}} } }{ \snil; \sowner_6 \sWLA \eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\eapp{{\stype_1}}{\svalue_0}{(\edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}})}}{\sowner_3}}}{\sownerlist_4}} } }{ \snil; \sowner_6 \sWLA \obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\eapp{{\stype_1}}{\svalue_0}{(\edynb{\obnd{\sowner_2}{\stype_0}{\sowner_1}}{\obars{\svalue_2}{\flast{\sownerlist_4}}})}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_6}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{8}{\case{\( \obars{\estab{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obbars{\svalue_0}{\sowner_3}}}{\sowner_0} \nredAD \sexpr_2 \)}} \begin{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-sta-label-preservation} \end{pfproof} \end{pfproof} \step{9}{\case{\( \obars{\eprehist{\sblist_0}{\svalue_0}}{\sowner_1} \\\nredAD \obars{\faddtrace{\sblist_0}{\svalue_0}}{\sowner_1} \)}} \begin{pfproof} \qedstep 
\begin{pfproof} by \lemmaref{A-addtrace-label-preservation} \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-label-decomposition} If\/ $\snil \sWTA \sexpr_0 : \toptional$ and\/ $\snil; \sowner_0 \sWLA \sexpr_0$ then either: \begin{itemize} \item $\sexpr_0 \in \obbars{\svalue}{\sownerlist}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\eapp{\toptional}{\obars{\svalue_0}{\sownerlist_0}}{\obars{\svalue_1}{\sowner_1}}}{\sowner_2}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\eunopt{\stoptional}{\obbars{\svalue_0}{\sownerlist_0}}}{\sowner_1}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\ebinopt{\stoptional}{\obbars{\svalue_0}{\sownerlist_0}}{\obbars{\svalue_1}{\sownerlist_1}}}{\sowner_2}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\edynb{\sbnd_1}{\obbars{\svalue_1}{\sownerlist_0}}}{\sowner_1}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\estab{\sbnd_1}{\obbars{\svalue_1}{\sownerlist_0}}}{\sowner_1}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\estab{\sbnd_1}{\obbars{\svalue_1}{\sownerlist_0}}}{\sowner_1}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\eprehist{\sblist_1}{\obbars{\svalue_1}{\sownerlist_0}}}{\sowner_1}$ \item $\sexpr_0 \eeq \ctx_0\ctxbars{\eerr}{\sowner_0}$ \end{itemize} \end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\sexpr_0$.} \begin{lamportproof*} \shortproof \mainproof\leavevmode \shortproof \step{0}{\case{$\sexpr_0 \eeq \svar_0$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil; \sowner_0 \sWLA \sexpr_0$ \end{pfproof} \end{pfproof} \step{1}{\case{$\sexpr_0 \in \obbars{\svalue}{\sownerlist}$}} \begin{pfproof} \qedstep \end{pfproof} \step{2}{\case{$\sexpr_0 \eeq \epair{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \step{2.0}{$\snil \sWTA \sexpr_1 : \toptional$ and $\snil \sWTA \sexpr_2 : \toptional$} \begin{pfproof} by inversion $\sWTA$ \end{pfproof} \step{2.1}{$\snil; \sowner_0 \sWLA \sexpr_1$ and $\snil; \sowner_0 \sWLA \sexpr_2$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{2.2}{\scase{$\sexpr_1 \not\in \obbars{\svalue}{\sownerlist}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2.3}{\scase{$\sexpr_1 \in \obbars{\svalue}{\sownerlist}$ and $\sexpr_2 \not\in \obbars{\svalue}{\sownerlist}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2.4}{\scase{$\sexpr_1 \in \obbars{\svalue}{\sownerlist}$ and $\sexpr_2 \in \obbars{\svalue}{\sownerlist}$}} \begin{pfproof} \qedstep \begin{pfproof} $\sexpr_0 \in \svalue$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\sexpr_0 \eeq \eapp{\toptional}{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{$\sexpr_0 \eeq \eunopt{\stoptional}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{$\sexpr_0 \eeq \ebinopt{\stoptional}{\sexpr_1}{\sexpr_2}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{$\sexpr_0 \eeq \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_1}}{\obars{\sexpr_1}{\sowner_1}}$}} \begin{pfproof} \step{6.0}{$\snil; \sowner_1 \sWLA \sexpr_1$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{$\sexpr_0 \eeq \estab{\sbnd_1}{\sexpr_1}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{$\sexpr_0 \eeq \eprehist{\sblist_1}{\obbars{\sexpr_1}{\sownerlist_2}}$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} 
\begin{lemma}\label{A-labeled-hole} If\/ $\snil; \sowner_0 \sWLA \ctx_0[\sexpr_0]$ then\/ $\exists\,\sowner_1$ such that\/ $\snil; \sowner_1 \sWLA \sexpr_0$ \end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\ctx_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{$\ctx_0 \eeq \ctxhole$} \begin{pfproof} \qedstep \end{pfproof} \step{1}{$\ctx_0 \eeq \epair{\ctx_1}{\sexpr_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2}{$\ctx_0 \eeq \epair{\sexpr_1}{\ctx_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{3}{$\ctx_0 \eeq \eunopt{\stoptional}{\ctx_1}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{$\ctx_0 \eeq \ebinopt{\stoptional}{\ctx_1}{\sexpr_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{$\ctx_0 \eeq \ebinopt{\stoptional}{\sexpr_1}{\ctx_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{$\ctx_0 \eeq \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_2}}{\obars{\ctx_1}{\sowner_3}}$} \begin{pfproof} \step{6.0}{$\snil; \sowner_2 \sWLA \ctx_1[\sexpr_0]$} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{$\ctx_0 \eeq \estab{\obnd{\sowner_0}{\stype_0}{\sowner_2}}{\obars{\ctx_1}{\sowner_3}}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{$\ctx_0 \eeq \obars{\ctx_1}{\sowner_0}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{$\ctx_0 \eeq \eprehist{\sblist_1}{\obbars{\ctx_1}{\sownerlist_1}}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sWLA$ replacement]\label{A-label-replacement} If\/ $\snil; \sowner_0 \sWLA \ctx_0[\sexpr_0]$ and the derivation contains a proof of\/ $\snil; \sowner_1 \sWLA \sexpr_0$ and\/ $\snil; \sowner_1 \sWLA \sexpr_1$ then\/ $\sownerenv_0; \sowner_0 \sWLA \ctx_0[\sexpr_1]$ \end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\ctx_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{$\ctx_0 \eeq \ctxhole$} \begin{pfproof} \qedstep \end{pfproof} \step{1}{$\ctx_0 \eeq \epair{\ctx_1}{\sexpr_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{2}{$\ctx_0 \eeq \epair{\sexpr_1}{\ctx_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{3}{$\ctx_0 \eeq \eunopt{\stoptional}{\ctx_1}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{$\ctx_0 \eeq \ebinopt{\stoptional}{\ctx_1}{\sexpr_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{$\ctx_0 \eeq \ebinopt{\stoptional}{\sexpr_1}{\ctx_2}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{$\ctx_0 \eeq \edynb{\obnd{\sowner_0}{\stype_0}{\sowner_2}}{\obars{\ctx_1}{\sowner_3}}$} \begin{pfproof} \step{6.0}{$\snil; \sowner_2 \sWLA \ctx_1[\sexpr_0]$} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{$\ctx_0 \eeq \estab{\obnd{\sowner_0}{\stype_0}{\sowner_2}}{\obars{\ctx_1}{\sowner_3}}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{$\ctx_0 \eeq \obars{\ctx_1}{\sowner_0}$} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{$\ctx_0 \eeq \eprehist{\sblist_1}{\obbars{\ctx_1}{\sownerlist_1}}$} \begin{pfproof} \qedstep \begin{pfproof} 
by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sdeltaA$ label progress]\label{A-delta-label-progress}\leavevmode \begin{itemize} \item If\/ $\snil \sWTA \eunopt{\stype_1}{\svalue_0} : \stype_0$ and\/ $\snil; \sowner_0 \sWLA \eunopt{\stype_1}{\svalue_0}$ and\/ $\obars{\eunopt{\stype_1}{\svalue_0}}{\sowner_0} \nredAS \obars{\sexpr_1}{\sowner_0}$. \item if\/ $\snil \sWTA \ebinopt{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$ and\/ $\snil; \sowner_0 \sWLA \ebinopt{\stype_1}{\svalue_0}{\svalue_1}$ and\/ $\obars{\ebinopt{\stype_1}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAS \obars{\sexpr_2}{\sowner_0}$. \item If\/ $\snil \sWTA \eunopt{\tdyn}{\svalue_0} : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \eunopt{\tdyn}{\svalue_0}$ then\/ $\obars{\eunopt{\tdyn}{\svalue_0}}{\sowner_0} \nredAD \obars{\sexpr_1}{\sowner_0}$. \item if\/ $\snil \sWTA \ebinopt{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \ebinopt{\tdyn}{\svalue_0}{\svalue_1}$ then\/ $\obars{\ebinopt{\tdyn}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAD \obars{\sexpr_2}{\sowner_0}$. \end{itemize} \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\sdeltaA$, $\sWTA$, $\sWLA$, and $\nredAD$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{$\snil \sWTA \eunopt{\stype_1}{\svalue_0} : \stype_0$}} \begin{pfproof} \step{0.0}{$\svalue_0 \in (\obbars{\epair{\svalue}{\svalue}}{\sownerlist}) \cup (\obbars{\emon{\obnd{\sowner}{\obars{\tpair{\stype}{\stype}}{\sowner}}{\sowner}}{\svalue}}{\sownerlist})$} \begin{pfproof} by $\sWTA$ inversion and canonical forms \end{pfproof} \step{0.1}{\scase{$\svalue_0 \eeq \obbars{\epair{\svalue_1}{\svalue_2}}{\sownerlist_0}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\eunopt{\stype_1}{\svalue_0}}{\sowner_0} \nredAS \obars{\sdeltaA(\sunop, \svalue_0)}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{0.2}{\scase{$\svalue_0 \eeq \obbars{\emon{\obnd{\sowner_1}{\obars{\tpair{\stype_1}{\stype_2}}{\sowner_2}}{\sowner_2}}{\svalue_1}}{\sownerlist_3}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\efst{{\stype_0}}{\svalue_0}}{\sowner_0} \nredAS \obbars{\edynb{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{(\efst{\tdyn}{\svalue_1})}}{\fconcat{\sownerlist_3}{\sowner_1}}$ \\(and similarly for $\ssnd$) \end{pfproof} \end{pfproof} \end{pfproof} \step{1}{\case{$\snil \sWTA \ebinopt{\stype_1}{\svalue_0}{\svalue_1} : \stype_0$}} \begin{pfproof} \step{2.0}{$\svalue_0 \in \obbars{\sint}{\sownerlist}$ and $\svalue_1 \in \obbars{\sint}{\sownerlist}$} \begin{pfproof} by $\sWTA$ inversion and canonical forms \end{pfproof} \qedstep \begin{pfproof} $\obars{\ebinopt{\stype_1}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAS \obars{\sdeltaA(\sbinop, \svalue_0, \svalue_1)}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{2}{\case{$\snil \sWTA \eunopt{\tdyn}{\svalue_0} : \tdyn$}} \begin{pfproof} \step{2.0}{\scase{$\svalue_0 \in \ehopt{\sblist}{\obbars{\emon{\sbnd}{\obars{\svalue}{\sowner}}}{\sownerlist}}$}} \begin{pfproof} \qedstep \begin{pfproof} by definition $\nredAD$ \end{pfproof} \end{pfproof} \step{2.1}{\scase{$\svalue_0 \in \ehopt{\sblist}{\obbars{\epair{\svalue}{\svalue}}{\sownerlist}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\eunopt{\tdyn}{\svalue_0}}{\sowner_0} \nredAD \obars{\sdeltaA(\sunop, \svalue_0)}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{2.2}{\scase{$\fremtrace{\svalue_0} \not\in \epair{\svalue}{\svalue} \cup (\emon{\sbnd}{\obars{\svalue}{\sowner}})$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\eunopt{\tdyn}{\svalue_0}}{\sowner_0} \nredAD 
\obars{\tagerrorD}{\sowner_0}$ \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{$\snil \sWTA \ebinopt{\tdyn}{\svalue_0}{\svalue_1} : \tdyn$}} \begin{pfproof} \qedstep \begin{pfproof} by definition $\nredAD$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[$\sdeltaA$ label preservation]\label{A-delta-label-preservation}\leavevmode \begin{itemize} \item If\/ $\snil; \sowner_0 \sWLA \eunopt{\stoptional}{\svalue_0}$ and\/ $\obars{\eunopt{\stoptional}{\svalue_0}}{\sowner_0} \nredAX \obars{\sexpr_1}{\sowner_0}$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$. \item If\/ $\snil; \sowner_0 \sWLA \ebinopt{\stoptional}{\svalue_0}{\svalue_1}$ and\/ $\obars{\ebinopt{\stoptional}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAX \obars{\sexpr_1}{\sowner_0}$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$. \end{itemize} \end{lemma}{ \newcommand{\shortproof}{By case analysis of $\nredAX$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\( \obars{\efst{{\stype_0}}{\obbars{\epair{\svalue_1}{\svalue_2}}{\sownerlist_1}}}{\sowner_0} \nredAS \obars{\obbars{\svalue_1}{\sownerlist_1}}{\sowner_0} \)} \begin{pfproof} \step{0.0}{\( \snil; \sowner_0 \sWLA {\svalue_0} \)\\and \( \sownerlist_1 \eeq \sowner_0 \cdots \sowner_0 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \end{pfproof} \step{1}{\( \obars{\efst{{\stype_0}}{\obbars{\emon{\obnd{\sowner_1}{\obars{\tpair{\stype_1}{\stype_2}}{\sowner_2}}{\sowner_2}}{\obars{\svalue_1}{\sowner_2}}}{\sownerlist_3}}}{\sowner_0} \nredAS \obars{\edynb{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{\tdyn}{\svalue_0}}{\sowner_2}}}{\fconcat{\sownerlist_3}{\sowner_0}} \)} \begin{pfproof} \step{1.0}{\( \sownerlist_3 \eeq \sowner_0 \cdots \sowner_0 \)\\and \( \sowner_0 \eeq \sowner_1 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_0 \sWLA {\svalue_0} } }{ \snil; \sowner_0 \sWLA \efst{\tdyn}{\svalue_0} } }{ \snil; \sowner_0 \sWLA \edynb{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{\tdyn}{\svalue_0}}{\sowner_2}} } }{ \snil; \sowner_0 \sWLA \obars{\edynb{\obnd{\sowner_1}{\stype_0}{\sowner_2}}{\obars{\efst{\tdyn}{\svalue_0}}{\sowner_2}}}{\fconcat{\sownerlist_3}{\sowner_0}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{2}{\( \obars{\efst{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\epair{\svalue_1}{\svalue_2}}{\sownerlist_1}}}{\sownerlist_2}}}{\sowner_0} \nredAD \obars{\faddtrace{\sblist_0}{\obbars{\svalue_1}{\sownerlist_1}}}{\fconcat{\sownerlist_2}{\sowner_0}} \)} \begin{pfproof} \step{2.0}{\( \sownerlist_2 \eeq \sowner_0 \cdots \sowner_0 \)\\and \( \fbndeqowners{\sblist_0}{\sownerlist_1} \)\\and \( \snil; \flast{\sownerlist_1} \sWLA \svalue_1 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} by \lemmaref{A-addtrace-label-preservation} \end{pfproof} \end{pfproof} \step{3}{\( \obars{\efst{\tdyn}{\obbars{\ehopt{\sblist_0}{\obbars{\emon{\obnd{\sowner_1}{\obars{\tpair{\stype_1}{\stype_2}}{\sowner_2}}{\sowner_2}}{\obars{\svalue_1}{\sowner_3}}}{\sownerlist_4}}}{\sownerlist_5}}}{\sowner_0} \nredAD \\\obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\efst{\fforget{\stype_1}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_0}} \)} \begin{pfproof} \step{1.0}{\( \sownerlist_5 \eeq \sowner_0 \cdots \sowner_0 \)\\and \( \fbndeqowners{\sblist_0}{\sownerlist_4} \)\\and \( \flast{\sownerlist_4} \eeq \sowner_1 \)\\and \( 
\snil; \sowner_2 \sWLA \svalue_1 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_0 \sWLA \svalue_1 } }{ \snil; \sowner_0 \sWLA \efst{\fforget{\stype_1}}{\svalue_1} } }{ \snil; \sowner_0 \sWLA \estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\efst{\fforget{\stype_1}}{\svalue_1}}{\sowner_3}} } }{ \snil; \sowner_0 \sWLA \eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\efst{\fforget{\stype_1}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}} } }{ \snil; \sowner_0 \sWLA \obars{\eprehist{\sblist_0}{\obbars{\estab{\obnd{\sowner_1}{\stype_1}{\sowner_2}}{\obars{\efst{\fforget{\stype_1}}{\svalue_1}}{\sowner_3}}}{\sownerlist_4}}}{\fconcat{\sownerlist_5}{\sowner_0}} } \end{mathpar} \end{pfproof} \end{pfproof} \step{4}{$\obars{\esnd{\toptional}{\svalue_0}}{\sowner_0} \nredAX \obbars{\sexpr_2}{\sowner_0}$} \begin{pfproof} \qedstep \begin{pfproof} similar to $\sfst$ cases \end{pfproof} \end{pfproof} \step{5}{$\obars{\esum{\stoptional}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAX \obars{\sint_2}{\sowner_0}$} \begin{pfproof} \qedstep \end{pfproof} \step{3}{$\obars{\equotient{\stoptional}{\svalue_0}{\svalue_1}}{\sowner_0} \nredAX \obars{\divisionbyzeroerror}{\sowner_0}$} \begin{pfproof} \qedstep \end{pfproof} \step{4}{$\obars{\equotient{\stoptional}{\obbars{\sint_1}{\sowner_1}}{\obbars{\sint_2}{\sowner_2}}}{\sowner_0} \nredAX \obars{\floorof{\sint_1 / \sint_2}}{\sowner_0}$} \begin{pfproof} \qedstep \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-dyn-label-progress} If\/ $\snil \sWTA \edynb{\sbnd_0}{\svalue_0} : \stype_0$ and\/ $\snil; \sowner_0 \sWLA \edynb{\sbnd_0}{\svalue_0}$ then\/ $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\sexpr_1}{\sowner_0}$. \end{lemma}{ \newcommand{\shortproof}{By inversion of $\sWTA$ and case analysis of $\fshallow{\tagof{\stype_0}}{\svalue_0}$.} \begin{lamportproof*} \shortproof \mainproof \shortproof % maybe should go by possible values, show which are contradictory ... 
% that way there's no question we missed any
\step{0}{\( \sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1} \)\\and \( \sowner_0; \sowner_1 \sWL \stype_0 \)\\and \( \snil; \sowner_1 \sWLA \svalue_0 \)\\and \( \svalue_0 \eeq \obbars{\svalue_1}{\sowner_1} \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{2}{\case{\( \fshallow{\tagof{\stype_0}}{\svalue_1} \)\\and \( \fremtrace{\svalue_1} \in \obbars{\efun{\svar}{\sexpr}}{\sownerlist} \cup \obbars{\epair{\svalue}{\svalue}}{\sownerlist} \cup \obbars{\emon{\sbnd}{\svalue}}{\sownerlist} \)}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\emon{\sbnd_0}{\svalue_0}}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{4}{\case{$\svalue_1 \in \sint$ and $\fshallow{\tagof{\tint}}{\svalue_1}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\svalue_1}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{5}{\case{$\svalue_1 \in \snat$ and $\fshallow{\tagof{\tnat}}{\svalue_1}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\svalue_1}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{6}{\case{$\neg\fshallow{\tagof{\stype_0}}{\svalue_1}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\boundaryerror{\sbnd_0}{\svalue_0}}{\sowner_0}$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-sta-label-progress} If\/ $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \estab{\sbnd_0}{\svalue_0}$ then\/ $\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\sexpr_1}{\sowner_0}$. \end{lemma}{ \newcommand{\shortproof}{By case analysis on $\svalue_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\( \sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1} \)\\and \( \sowner_0; \sowner_1 \sWL \stype_0 \)\\and \( \snil; \sowner_1 \sWLA \svalue_0 \)\\and \( \svalue_0 \eeq \obbars{\svalue_1}{\sownerlist_2} \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{2}{\case{$\svalue_1 \in \efun{\svar}{\sexpr}$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ \end{pfproof} \end{pfproof} \step{3}{\case{$\svalue_1 \in \efun{\tann{\svar}{\stype}}{\sexpr}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\emon{\sbnd_0}{\svalue_0}}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{4}{\case{$\svalue_1 \in \epair{\svalue}{\svalue}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\emon{\sbnd_0}{\svalue_0}}{\sowner_0}$ \end{pfproof} \end{pfproof} \step{5}{\case{$\svalue_1 \eeq \emon{\sbnd_1}{\obbars{\ehopt{\sblist_2}{\obbars{\svalue_2}{\sownerlist_3}}}{\sownerlist_4}}$}} \begin{pfproof} \step{5.0}{\scase{$\svalue_2 \in (\efun{\svar}{\sexpr}) \cup (\epair{\svalue}{\svalue})$}} \begin{pfproof} \qedstep \begin{pfproof} \(\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\obbars{\svalue_2}{\fconcat{\sownerlist_3}{\fconcat{\sownerlist_4}{\sownerlist_2}}}}}{\sowner_0} \) \end{pfproof} \end{pfproof} \step{5.1}{\scase{$\svalue_2 \in (\efun{\tann{\svar}{\stype}}{\sexpr})$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \svalue_0 : \stype_0$ \end{pfproof} \end{pfproof} \step{5.2}{\scase{$\svalue_2 \eeq (\emon{\sbnd_5}{\obbars{\svalue_3}{\sownerlist_6}})$}} \begin{pfproof}
\step{5.2.0}{\sscase{$\svalue_3 \in (\efun{\tann{\svar}{\sexpr}}) \cup \epair{\svalue}{\svalue}$}} \begin{pfproof} \(\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\obbars{\svalue_2}{\fconcat{\sownerlist_3}{\fconcat{\sownerlist_4}{\sownerlist_2}}}}}{\sowner_0} \) \end{pfproof} \step{5.2.1}{\sscase{$\svalue_3 \not\in (\efun{\tann{\svar}{\sexpr}}) \cup \epair{\svalue}{\svalue}$}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \svalue_1 : \stype_0$ \end{pfproof} \end{pfproof} \end{pfproof} \step{5.3}{\scase{otherwise}} \begin{pfproof} \absurdstep \begin{pfproof} $\snil \sWTA \svalue_1 : \stype_0$ \end{pfproof} \end{pfproof} \end{pfproof} \step{6}{\case{$\svalue_1 \eeq \ehopt{\sblist_0}{\obbars{\sint_1}{\sownerlist_1}}$}} \begin{pfproof} \qedstep \begin{pfproof} $\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\sint_1}{\sowner_0}$ \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-dyn-label-preservation}\leavevmode If\/ $\snil \sWTA \edynb{\sbnd_0}{\svalue_0} : \stype_0$ and\/ $\snil; \sowner_0 \sWLA \edynb{\sbnd_0}{\svalue_0}$ and\/ $\obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\sexpr_1}{\sowner_0}$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$. \end{lemma}{ \newcommand{\shortproof}{By case analysis of $\nredAS$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\( \sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1} \)\\and \( \sowner_0; \sowner_1 \sWL \stype_0 \)\\and \( \snil; \sowner_1 \sWLA \svalue_0 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{2}{\case{\( \obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\emon{\sbnd_0}{\svalue_0}}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_1 \sWLA \svalue_0 } }{ \snil; \sowner_0 \sWLA \emon{\sbnd_0}{\svalue_0} } \end{mathpar} \end{pfproof} \end{pfproof} \step{4}{\case{\( \obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\sint_1}{\sowner_0} \)}} \begin{pfproof} \qedstep \end{pfproof} \step{5}{\case{\( \obars{\edynb{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAS \obars{\boundaryerror{\sbnd_0}{\svalue_0}}{\sowner_0} \)}} \begin{pfproof} \qedstep \end{pfproof} \end{lamportproof*}} \begin{lemma}[\asym-$\ssta$ preservation]\label{A-sta-label-preservation} If\/ $\snil \sWTA \estab{\sbnd_0}{\svalue_0} : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \estab{\sbnd_0}{\svalue_0}$ and\/ $\obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\sexpr_1}{\sowner_0}$ then\/ $\snil; \sowner_0 \sWLA \sexpr_1$. 
\end{lemma}{ \newcommand{\shortproof}{By case analysis of $\nredAD$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{\( \sbnd_0 \eeq \obnd{\sowner_0}{\stype_0}{\sowner_1} \)\\and \( \snil; \sowner_1 \sWLA \svalue_0 \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \step{1}{\case{\( \obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\emon{\sbnd_0}{\svalue_0}}{\sowner_0} \)}} \begin{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \sowner_1 \sWLA \svalue_0 } }{ \snil; \sowner_0 \sWLA \emon{\sbnd_0}{\svalue_0} } \end{mathpar} \end{pfproof} \end{pfproof} \step{2}{\case{\( \obars{\estab{\sbnd_0}{\obbars{\emon{\sbnd_1}{\obbars{\ehopt{\sblist_2}{\svalue_2}}{\sownerlist_4}}}{\sownerlist_5}}}{\sowner_0} \nredAD \obars{\eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\obbars{\svalue_2}{\fconcat{\sownerlist_4}{\fconcat{\sownerlist_5}{\sowner_0}}}}}{\sowner_0} \)}} \begin{pfproof} \step{2.0}{\( \fbndeqowners{\sblist_2}{\sownerlist_4} \)} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} \begin{mathpar} \inferrule*{ \inferrule*{ \inferrule*{ \mbox{by inversion $\sWLA$} }{ \snil; \flast{\sownerlist_4} \sWLA \svalue_2 } }{ \snil; \sowner_0 \sWLA \eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\obbars{\svalue_2}{\fconcat{\sownerlist_4}{\fconcat{\sownerlist_5}{\sowner_0}}}} } }{ \snil; \sowner_0 \sWLA \obars{\eprehist{\fconcat{\sbnd_0}{\fconcat{\sbnd_1}{\sblist_2}}}{\obbars{\svalue_2}{\fconcat{\sownerlist_4}{\fconcat{\sownerlist_5}{\sowner_0}}}}}{\sowner_0} } \end{mathpar} \end{pfproof} \end{pfproof} \step{4}{\case{\( \obars{\estab{\sbnd_0}{\svalue_0}}{\sowner_0} \nredAD \obars{\sint_1}{\sowner_0} \)}} \begin{pfproof} qedstep \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-addtrace-label-preservation} If\/ $\snil \sWTA \eprehist{\sblist_0}{\svalue_0} : \tdyn$ and\/ $\snil; \sowner_0 \sWLA \eprehist{\sblist_0}{\svalue_0}$ then\/ $\snil; \sowner_0 \sWLA \faddtrace{\sblist_0}{\svalue_0}$. \end{lemma}{ \newcommand{\shortpf}{By case analysis of $\saddtrace$.} \begin{lamportproof*} \shortpf \mainproof \shortpf \step{0}{\case{\(\faddtrace{\snil}{\svalue_0} \feq \svalue_0\)}} \begin{pfproof} \qedstep \end{pfproof} \step{1}{\case{\( \faddtrace{\sblist_0}{\obbars{\ehist{\sblist_1}{\svalue_1}}{\sownerlist_2}} \feq \ehist{\fconcat{\sblist_0}{\sblist_1}}{\obbars{\svalue_1}{\sownerlist_2}} \)}} \begin{pfproof} \qedstep \begin{pfproof} \step{1.0}{$\fbndeqowners{\sblist_0}{\sownerlist_2}$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \end{pfproof} \end{pfproof} \step{2}{\case{\( \faddtrace{\sblist_0}{\svalue_1} \feq \ehist{\sblist_0}{\svalue_1} \)\\and \( \svalue_0 \not\in \ehist{\sblist}{\svalue} \)}} \begin{pfproof} \step{2.0}{\( \svalue_1 \eeq \obbars{\svalue_2}{\sownerlist_2} \)\\and \( \fbndeqowners{\sblist_0}{\sownerlist_2} \)} \begin{pfproof} by inversion $\snil; \sowner_0 \sWLA \esuffix{\sblist_0}{\svalue_1}$ \end{pfproof} \qedstep \end{pfproof} \end{lamportproof*}} \begin{lemma}\label{A-label-substitution}\leavevmode If\/ $\fcons{\tann{\svar_0}{\stype_0}}{\stypeenv_0} \sWTA \sexpr_1 : \toptional$ and\/ $\fcons{\tann{\svar_0}{\sowner_0}}\sownerenv_0; \sowner_1 \sWLA \sexpr_1$ and\/ $\snil \sWTA \svalue_0 : \toptional'$ and\/ $\snil; \sowner_0 \sWLA \svalue_0$ then\/ $\stypeenv_0 \sWTA \esubst{\sexpr_1}{\svar_0}{\svalue_0} : \toptional$ and\/ $\sownerenv_0 \sWLA \esubst{\sexpr_1}{\svar_0}{\svalue_0}$. 
\end{lemma}{ \newcommand{\shortproof}{By induction on the structure of $\sexpr_0$.} \begin{lamportproof*} \shortproof \mainproof \shortproof \step{0}{$\sexpr_0 \eeq \svar_2$} \begin{pfproof} \step{0.0}{\scase{$\svar_0 \eeq \svar_2$}} \begin{pfproof} \qedstep \end{pfproof} \step{0.1}{\scase{$\svar_0 \neq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} $\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$ \end{pfproof} \end{pfproof} \end{pfproof} \step{1}{\case{$\sexpr_0 \in \sint$}} \begin{pfproof} \qedstep \begin{pfproof} $\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$ \end{pfproof} \end{pfproof} \step{2}{\case{\( \sexpr_0 \eeq \efun{\svar_2}{\sexpr_2} \)\\or\( \sexpr_0 \eeq \efun{\tann{\svar_2}{\stype_2}}{\sexpr_2} \)}} \begin{pfproof} \step{2.0}{\scase{$\svar_0 \eeq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} $\esubst{\sexpr_1}{\svar_0}{\svalue_0} \eeq \sexpr_1$ \end{pfproof} \end{pfproof} \step{2.1}{\scase{$\svar_0 \neq \svar_2$}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{pfproof} \step{3}{\case{\(\sexpr_0 \eeq \epair{\sexpr_1}{\sexpr_2}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{4}{\case{\(\sexpr_0 \eeq \eapp{\toptional}{\sexpr_1}{\sexpr_2}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{5}{\case{\(\sexpr_0 \eeq \eunopt{\stoptional}{\sexpr_1}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{6}{\case{\(\sexpr_0 \eeq \ebinopt{\stoptional}{\sexpr_1}{\sexpr_2}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{7}{\case{\(\sexpr_0 \eeq \edynb{\sbnd_1}{\sexpr_1}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{8}{\case{\(\sexpr_0 \eeq \estab{\sbnd_1}{\sexpr_1}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{9}{\case{\(\sexpr_0 \eeq \obars{\sexpr_1}{\sowner_1}\)}} \begin{pfproof} \step{9.0}{$\sowner_0 \eeq \sowner_1$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{10}{\case{\(\sexpr_0 \eeq \eprehist{\sblist_1}{\obbars{\sexpr_1}{\sownerlist_1}}\)}} \begin{pfproof} \step{10.0}{$\snil; \flast{\sownerlist_1} \sWLA \sexpr_1$} \begin{pfproof} by inversion $\sWLA$ \end{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \step{11}{\case{\(\sexpr_0 \eeq \ehist{\sblist_1}{\obbars{\sexpr_1}{\sownerlist_1}}\)}} \begin{pfproof} \qedstep \begin{pfproof} by \pfih \end{pfproof} \end{pfproof} \end{lamportproof*}} \begin{lemma}[boundary preservation]\label{A-source-boundary}\leavevmode If\/ $\fwellformedO{\sexpr_0}{\stoptional}$ and\/ $\sexpr_0 \rredA \ctx_0[{\edynb{\sbnd_1}{\svalue_1}}]$ then either\/ $\fhasbnd{\sexpr_0}{\sbnd_1}$ or $\fhasbnd{\sexpr_0}{\fflip{\sbnd_1}}$. 
\end{lemma}{ \newcommand{\shortpf}{By case analysis of $\nredAS$ and $\nredAD$, evaluation does not create new labels and only creates a new boundary by flipping an existing boundary.} \begin{lamportproof*} \shortpf \mainproof \shortpf \end{lamportproof*}} \begin{lemma}\label{A-mon-compat}\leavevmode If\/ $\fwellformedO{\sexpr_0}{\stoptional}$ and\/ $\sexpr_0 \rredA \ctx[\emon{\obnd{\sowner_0}{\stype_0}{\sowner_2}}{\svalue_0}]$ and\/ $\fshallow{\tagof{\tpair{\stype_1}{\stype_2}}}{\svalue_0}$ then\/ $\stype_0 \in \obars{\tpair{\stype}{\stype}}{\sowner}$ \end{lemma}{ \begin{lamportproof} Surface expressions do not contain monitors, and $\nredAD$ and $\nredAS$ only create monitors with compatible types and values. \end{lamportproof}}
{ "alphanum_fraction": 0.5539949501, "avg_line_length": 33.4196224256, "ext": "tex", "hexsha": "62e70600cdce14f77dfd0c5f70d103764570e9fb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9b77f6e1b1660bdbd78aa1d76ce9c019261054df", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nuprl/gfd-oopsla-2019", "max_forks_repo_path": "tr-A-proof.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9b77f6e1b1660bdbd78aa1d76ce9c019261054df", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nuprl/gfd-oopsla-2019", "max_issues_repo_path": "tr-A-proof.tex", "max_line_length": 312, "max_stars_count": 1, "max_stars_repo_head_hexsha": "9b77f6e1b1660bdbd78aa1d76ce9c019261054df", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nuprl/gfd-oopsla-2019", "max_stars_repo_path": "tr-A-proof.tex", "max_stars_repo_stars_event_max_datetime": "2019-10-24T12:19:27.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-24T12:19:27.000Z", "num_tokens": 46349, "size": 116835 }
\documentclass[conference]{IEEEtran} \usepackage{multirow} \usepackage{subfigure} \usepackage{graphicx} \usepackage{graphics} \usepackage{rotating} \usepackage{verbatim} \usepackage{float} \usepackage{url} \usepackage{listings} \usepackage{color} \definecolor{javared}{rgb}{0.6,0,0} % for strings \definecolor{javagreen}{rgb}{0.25,0.5,0.35} % comments \definecolor{javapurple}{rgb}{0.5,0,0.35} % keywords \definecolor{javadocblue}{rgb}{0.25,0.35,0.75} % javadoc \lstset{language=Java, basicstyle=\ttfamily, keywordstyle=\color{javapurple}\bfseries, stringstyle=\color{javared}, commentstyle=\color{javagreen}, morecomment=[s][\color{javadocblue}]{/**}{*/}, %numbers=left, numberstyle=\tiny\color{black}, stepnumber=2, numbersep=10pt, tabsize=4, showspaces=false, showstringspaces=false} \restylefloat{table} \ifCLASSINFOpdf \else \fi \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} \title{Dirt Spot Sweeping Random Strategy} \author{\IEEEauthorblockN{Mian Asbat Ahmad} \IEEEauthorblockA{Department of Computer Science\\ University of York\\ York, United Kingdom\\ Email: [email protected]} \and \IEEEauthorblockN{Manuel Oriol} \IEEEauthorblockA{\begin{tabular}{c c} ABB Corporate Research & Department of Computer Science\\ Industrial Software Systems & University of York\\ Baden-Dattwil, Switzerland & York, United Kingdom\\ Email: [email protected] & [email protected] \end{tabular}}} \maketitle %%%%%%%%%%%%%%%%% ABSTRACT %%%%%%%%%%%%%%%%%%%% \begin{abstract} While random testing has recently gained momentum, very little has been known on failure domains and their shape. This paper presents an enhanced and improved form of automated random testing, called the Dirt Spot Sweeping Random (DSSR) strategy. DSSR is a new strategy that takes the assumption that a number of failure domains are contiguous. DSSR starts as a regular random+ testing session --- a random testing session with some preference for boundary values. When a failure is found, it increases the chances of using neighbouring values in subsequent tests, thus slowly sweeping around the failure found in hope of finding failures from a different kind in its vicinity. DSSR was implemented within the YETI random testing tool. We evaluate DSSR against random+ and pure random strategies by testing 80 classes with $10^5$ calls for each session 30 times for each strategy. We found that for 68\% of the classes all three strategies find the same unique failures, for 9\% of the classes random+ performs better, for 14\% pure random performs better, and for 9\% DSSR performs better. Overall, DSSR also found 2.3\% more unique failures than random and .3\% more unique failures than random+. \end{abstract} \IEEEpeerreviewmaketitle %Several (enhanced/new/efficient) random strategies are based on the presence of point, block and strip patterns across the input domain. Emphasis of each one is to select test input farthest away from each other to increase the chances of targeting these faulty patterns to produce better results than pure random strategy. However no strategy has tried to expose these contiguous fault pattern once they are discovered during testing. % %In this paper, we propose DSSR, a new random strategy that discovers contiguous faults and evaluate it against Random and pure random. Results show that DSSR is better in some cases and random+ in most other cases. Because all strategies have the same potential, they exhibit similar numbers, but none of them is fundamentally better than the others. 
%%%%%%%%%%%%%%%%% INTRODUCTION %%%%%%%%%%%%%%%%%%%% \section{Introduction}\label{sec:intro} Success of a software testing technique is mainly based on the number of faults it discovers in the Software Under Test (SUT). An efficient testing process discovers the maximum number of faults in a minimum possible amount of time. Exhaustive testing, where software is tested against all possible inputs, is in most cases not feasible because of the size of the input domain, limited resources and strict time constraints. Therefore, strategies in automated software testing tools are developed with the aim to select more fault-finding test input from the input domain for a given SUT. Producing such targeted test input is difficult because each system has its own requirements and functionality. Chan et al.~\cite{Chan1996} discovered that there are patterns of failure-causing inputs across the input domain. They divided the patterns into point, block and strip patterns on the basis of their occurrence across the input domain. Chen et al.~\cite{Chen2008} also found that the performance of random testing can be increased by slightly altering the technique of test case selection. In adaptive random testing, they found that the performance of random testing increases by up to 50\% when test input is selected evenly across the whole input domain. This was mainly attributed to the better distribution of input which increased the chance of selecting inputs from failure patterns. Similarly Restricted Random Testing \cite{Chan2002}, Feedback directed Random Test Generation \cite{Pacheco2007a}, Mirror Adaptive Random Testing \cite{Chen2003} and Quasi Random Testing \cite{Chen2005} also stress the need for test case selection covering the whole input domain to improve results. In this paper we take the assumption that for a significant number of classes failure domains are contiguous or are very close by. From this assumption, we devised the Dirt Spot Sweeping\footnote{The name refers to the cleaning robots strategy which insists on places where dirt has been found in large amount.} Random strategy (DSSR) which starts as a random+ strategy --- a random strategy focusing more on boundary values. When it finds a new failure, it then increases the chances of finding more faults using neighbouring values. Note that, similarly to previous studies~\cite{Oriol2012} we approximate faults with unique failures. Of course, since this strategy is also a random testing strategy, it has the potential to find all unique failures in the program, but we expect it to be faster at finding unique failures for classes in which failure domains are contiguous than the pure random (R) and random+ (R+) strategies only. We implemented DSSR as a strategy for the random testing tool YETI\footnote{\url{http://www.yetitest.org}}. To evaluate our approach, we tested thirty times each one of 80 classes from the Qualitas Corpus\footnote{\url{http://www.qualitascorpus.com}} with each of the three strategies DSSR, R, and R+. We found that for 68\% of the classes all three strategies find the same unique failures, for 9\% of the classes random+ performs better, for 14\% pure random performs better, and for 9\% DSSR performs better. Overall, DSSR also found 2.3\% more unique failures than random and .3\% more unique failures than random+. %//MANUEL: WHAT IS THIS?? 
%Motivated by research work underlying Proportional Sampling Strategy of the patterns of failure causing inputs across the input domain, we thought an enhanced random testing technique called Dirt Spot Sweeping Random (DSSR) strategy. The main emphasis in DSSR strategy is focused on the patterns of failures for better performance. Experiments were conducted to address the following research issues: %\begin{enumerate} % %\item To get highly efficient algorithm for coping with the combination of strategies including pure random, random plus and spot sweeping. % %\item To get high number of unique faults in the SUT. % %\item To get lower number of unique faults and higher number of similar faults in the SUT. % %\item To examine no/negative improvement in test results of the SUT. % %\item To determine the processing time involved in DSSR strategy. % %\end{enumerate} The rest of this paper is organized as follows: Section~\ref{sec:dssr} describes the DSSR strategy. Section~\ref{sec:imp} presents our implementation of the strategy. Section~\ref{sec:eval} explains our experimental setup. Section~\ref{sec:res} shows the results of our experiments. Section~\ref{sec:discussion} discusses the results. Section~\ref{sec:rw} presents related work and we conclude in Section~\ref{sec:conc}. %%%%%%%%%%%%%%%%% DIRT SPOT SWEEPING STRATEGY %%%%%%%%%%%%%%% \section{Dirt Spot Sweeping Random Strategy}\label{sec:dssr} The Dirt Spot Sweeping Random strategy (DSSR) is a new strategy which combines the random+ strategy with a dirt spot sweeping functionality. The strategy is based on two intuitions. First, boundaries have interesting values and using these values in isolation can provide high impact on test results. Second, faults and unique failures reside in contiguous blocks and stripes. If this is the case, DSSR increases the performance of the test strategy. Each strategy is briefly explained as follows. \subsection{Random (R)} The pure random strategy is a black-box testing technique in which the SUT is executed using randomly selected test data. Test results obtained are compared to the defined oracle, using SUT specifications in the form of contracts or assertions. In the absence of contracts and assertions the exceptions defined by the programming language are used as test oracles. %According to Beizer \cite{Beizer1990}, software performance is directly dependent on the combination of two main factors, correctness and robustness. Correctness is the expected behaviour of the software based on its specifications while robustness is the behaviour of the software that is not defined in its specifications. %Since random testing generates test data randomly, without any specific pattern, it effectively tests the performance of software by evaluating it for both correctness and robustness. Because of its black-box testing nature, this strategy is particularly effective in testing softwares where the developers want to keep the source code secret~\cite{Chen2010}. The generation of random test data is comparatively cheap and does not require too much intellectual and computation efforts~\cite{Ciupa2009, Ciupa2008}. It is mainly for this reason that various researchers have recommended this strategy for automatic testing tools \cite{Ciupa2008a}. 
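To make the mechanics concrete, the following sketch shows the essence of a pure random testing session in plain Java, with undeclared runtime exceptions acting as the oracle. It is an illustration only: the method under test is a placeholder of our own and the loop is a simplification, not the implementation of any particular tool.

\begin{lstlisting}
import java.util.Random;

// Sketch of pure random testing: feed random
// inputs to a method under test and treat any
// undeclared exception as a failure.
public class PureRandomSketch {
  // Placeholder method under test.
  static int methodUnderTest(int x) {
    return 100 / (x % 10); // fails when x % 10 == 0
  }

  public static void main(String[] args) {
    Random random = new Random();
    int failingCalls = 0;
    for (int test = 0; test < 100000; test++) {
      int input = random.nextInt(); // random input
      try {
        methodUnderTest(input);
      } catch (RuntimeException failure) { // oracle
        failingCalls++;
      }
    }
    System.out.println(failingCalls + " failing calls");
  }
}
\end{lstlisting}

A real harness would, of course, select classes and methods by reflection and keep track of which failures it has already seen; the point here is only the random generation of inputs and the use of exceptions as the oracle.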
YETI \cite{Oriol2010a, Oriol2010}, AutoTest \cite{Leitner2007, Ciupa2007}, QuickCheck \cite{Claessen2000}, Randoop \cite{Pacheco2007}, Jartage \cite{Oriat2004} are some of the most common automated testing tools based on random strategy.\\ \indent Efficiency of random testing was made suspicious with the intuitive statement of Myers \cite{Myers2004} who termed random testing as one of the poorest methods for software testing. However, experiments performed by various researchers, \cite{Ciupa2007, Duran1981, Duran1984, Hamlet1994, Ntafos2001} have experimentally proven that random testing is simple to implement, cost effective, highly efficient and free from human bias as compared to its rival techniques. Because programs tested at random typically fail a large number of times (there are a large number of calls), it is necessary to cluster failures that likely represent the same fault. The traditional way of doing it is to compare the full stack traces and error types and use this as an equivalence class~\cite{Ciupa2007,Oriol2012} called a unique failure. This way of grouping failures is also used for random+ and DSSR. \subsection{Random Plus Strategy (R+)} The random+ strategy~\cite{Leitner2007} is an extension of the pure random strategy. It uses some special pre-defined values which can be simple boundary values or values that have high tendency of finding faults in the SUT. Boundary values~\cite{Beizer1990} are the values on the start and end of a particular type. For instance, such values for \verb+int+ could be \verb+MAX_INT+, \verb+MAX_INT-1+, \verb+MIN_INT+, \verb-MIN_INT+1-, \verb+-1+, \verb+0+, \verb+1+. %For instance, if input for a SUT is days of an year which is expressed in numbers from 1 to 365 then -3, -2, -1, 0, 1, 2, 3, 362, 363, 364, 365, 366, 367, 368 can be considered as border values as shown in Figure \ref{fig:boundaryValues}. % %\begin{figure}[ht] %\centering %\includegraphics[width= 9cm,height=2cm]{boundary.png} %\caption{Boundary values for input domain from 1 to 365} %\label{fig:boundaryValues} %\end{figure} Similarly, the tester might also add some other special values that he considers effective in finding faults for the current SUT. For example, if a program under test has a loop from -50 to 50 then the tester can add -55 to -45, -5 to 5, 45 to 55 etc., to the pre-defined list of special values in order to be selected for a test. This static list of interesting values is manually updated before the start of the test and has slightly high priority than selection of random values because of more relevance and high chances of finding faults for the given SUT. These special values have high impact on the results particularly detecting problems in specifications~\cite{Ciupa2008}. \subsection{Dirt Spot Sweeping} Chan et al.~\cite{Chan1996} found that there are patterns of failure-causing inputs across the input domain. Figure \ref{fig:patterns} shows these patterns for two dimensional input domain. They divided these patterns into three types called points, block and strip patterns. The black area (Points, block and strip) inside the box show the input which causes the system to fail while white area inside the box represent the genuine input. Boundary of the box (black solid line) surrounds the complete input domain and also represents the boundary values. They also argue that a strategy has more chances of hitting these fault patterns if test cases far away from each other are selected. 
Other researchers~\cite{Chan2002, Chen2003, Chen2005} also tried to generate test cases farther away from one another, targeting these patterns, and achieved higher performance.

\begin{figure}[ht]
\centering
\includegraphics[width= 8cm,height=2.5cm]{ART_Patterns.png}
\caption{Failure patterns across the input domain~\cite{Chen2008}}
\label{fig:patterns}
\end{figure}

Dirt spot sweeping is the part of the DSSR strategy that comes into action when a failure is found in the system. On finding a failure, it immediately adds the value causing the failure and its neighbouring values to the already existing list of interesting values. For example, if the \verb+int+ value 50 causes a failure in the system, then spot sweeping adds the values from 47 to 53 to the list of interesting values. If the failure lies in a block or strip pattern, then adding its neighbours will expose other failures present in that block or strip. Unlike random plus, where the list of interesting values remains static, in the DSSR strategy the list is dynamic and changes during the test execution of each program.

\begin{figure}[ht]
\centering
\includegraphics[width=8cm,height=2.2cm]{block2.png}
\caption{DSSR covering block and strip patterns}
\label{fig:block2}
\end{figure}

Figure \ref{fig:block2} shows how dirt spot sweeping explores the failures residing in the block and strip patterns of a program. The coverage of the pattern is drawn as a spiral because the first failure leads to the second, the second to the third, and so on until the end. If the failure is positioned on a point pattern, then the added values are not very effective, because a point pattern is only an isolated failure point in the whole input domain.

\subsection{Structure of the Dirt Spot Sweeping Random Strategy}

The DSSR strategy is explained with the help of a flow-chart in Figure~\ref{fig:Working_DSSS}. During the execution of the test session, the strategy continuously tracks the failures found. To keep the system fast, this tracking is done with zero or minimal overhead~\cite{Leitner2009}. Test execution proceeds normally until a failure is found in the SUT. The strategy then copies not only the value that leads to the failure but also its surrounding values to the list of interesting values. As presented in the flowchart, if the failure-finding value is of a primitive type, then DSSR determines its type and adds values only of that particular type to the interesting values. Adding these values increases the size of the list of interesting values, which provides relevant test data for the remainder of the test session, and the newly generated test cases are more targeted towards finding new failures in the given SUT around pre-existing failures.

\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{flowchart1.pdf}
\caption{Working mechanism of the DSSR strategy}
\label{fig:Working_DSSS}
\end{figure}

Boundary values and other special values with a high tendency of finding faults in the SUT are added to the list by the random plus strategy prior to the start of the test session, whereas, to sweep the failure pattern, the fault-finding value and its surrounding values are added at runtime after a failure is found. Table \ref{table:addvalues} presents the values that are added to the list of interesting values when a failure is found.
In the table the test value is represented by X, where X can be int, double, float, long, byte, short, char or String. All values are converted to their respective types before being added to the list of interesting values, and vice versa.

\begin{table}[ht]
%\scriptsize
\caption{Neighbouring values for primitive types and String} % title of Table
\centering % used for centering table
\begin{tabular}{| l | l |}
\hline\hline %inserts double horizontal lines
Type & Values to be added\\ [0.5ex] % table heading
\hline % inserts single horizontal line
\multirow{1}{*}{X is int, double, float, } & ~ X, X+1, X+2, X-1, X-2 \\
\multirow{1}{*}{long, byte, short \& char} & \\
\hline
\multirow{8}{*}{X is String} & ~ X\\
 & ~ X + `` "\\
 & ~ `` " + X \\
 & ~ X.toUpperCase() \\
 & ~ X.toLowerCase() \\
 & ~ X.trim() \\
 & ~ X.substring(2) \\
 & ~ X.substring(1, X.length()-1) \\[1ex]
\hline \hline
\end{tabular}
\label{table:addvalues} % is used to refer this table in the text
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%% EXPLANATION OF DSSR STRATEGY %%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Explanation of DSSR on a concrete example}

The DSSR strategy is explained through a simple program seeded with at least three faults. The first fault is a division-by-zero exception, denoted 1, while the second and third are failing assertion statements, denoted 2 and 3, in the following program. Below we describe how the DSSR strategy performs when the following class is exposed to testing.

\begin{lstlisting}
/**
 * Calculate square of given number
 * and verify results.
 * The code contains 3 faults.
 * @author (Mian and Manuel)
 */
public class Math1 {
  public void calc (int num1) {
    // Square num1 and store result.
    int result1 = num1 * num1;
    int result2 = result1 / num1;      // 1
    assert Math.sqrt(result1) == num1; // 2
    assert result1 >= num1;            // 3
  }
}
\end{lstlisting}

%\begingroup
% \fontsize{7pt}{8pt}\selectfont
%\noindent
%/\textasteriskcentered \textasteriskcentered \\*
%\textasteriskcentered ~ Calculate square of given number and verify results. \\*
%\textasteriskcentered ~ Code contain 3 faults.\\*
%\textasteriskcentered ~ @author (Mian and Manuel) \\*
%\textasteriskcentered ~ @version (1.1, 11/07/12)\\*
%\textasteriskcentered / \\*
%
%\noindent public class Math1 \{\\
%\indent public void calc (int num1) \{\\
%
%\indent // Square num1 and store result.\\*
%\indent int result1 = num1 * num1;\\*
%
%%\indent \textbackslash\textbackslash Divide result1 by num1 and store result.\\*
%\indent int result2 = result1 / num1;............................................................. Fault 1\\
%
%%\indent \textbackslash\textbackslash To check that the revert of result is the received value.\\*
%\indent assert Math.sqrt(result1) == num1;.................................................. Fault 2\\
%
%%\indent \textbackslash\textbackslash To check that the value of result is positive.\\*
%\indent assert result1 $>$= num1;................................................................... Fault 3\\
%\indent \} \\*
%\noindent\}\\
%
%\endgroup

In the above code, one primitive variable of type \verb+int+ is used; therefore, the input domain for the DSSR strategy ranges from \verb+-2,147,483,648+ to \verb+2,147,483,647+. The strategy further selects some values (\verb+0+, \verb+Integer.MIN_VALUE+ and \verb+Integer.MAX_VALUE+) as interesting values, which are prioritised for selection as inputs.
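For concreteness, the sketch below shows in plain Java how the spot-sweeping step of Table \ref{table:addvalues} can be realised for \verb+int+ and \verb+String+ values; the class and method names are ours and purely illustrative, they do not correspond to YETI's internal API.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

// Illustrative spot-sweeping step: when a value
// triggers a failure, the value and its
// neighbours (cf. the table of neighbouring
// values) are appended to the interesting values.
public class SpotSweepingSketch {
  static final List<Object> INTERESTING =
      new ArrayList<Object>();

  // Neighbours of a failing int: X-2 .. X+2.
  static void addNeighbours(int x) {
    for (int d = -2; d <= 2; d++) {
      INTERESTING.add(x + d);
    }
  }

  // Neighbours of a failing String.
  static void addNeighbours(String x) {
    INTERESTING.add(x);
    INTERESTING.add(x + " ");
    INTERESTING.add(" " + x);
    INTERESTING.add(x.toUpperCase());
    INTERESTING.add(x.toLowerCase());
    INTERESTING.add(x.trim());
    if (x.length() > 2) { // guard, our addition
      INTERESTING.add(x.substring(2));
      INTERESTING.add(x.substring(1, x.length() - 1));
    }
  }
}
\end{lstlisting}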
As the test starts, three faults are quickly discovered by the DSSR strategy in the following order.

\indent \textbf{Fault 1:} The DSSR strategy might select the value \verb+0+ for variable \verb+num1+ in the first test case, because \verb+0+ is available in the list of interesting values and therefore has a higher priority than other values. This causes Java to throw a division-by-zero exception.

\indent \textbf{Fault 2:} After catching the first fault, the strategy adds the failing value and its surrounding values, \verb+0, 1, 2, 3 and -1, -2, -3+ in this case, to the list of interesting values. In the second test case the DSSR strategy may pick \verb+-3+ as a test value, which leads to the second fault: assertion (2) fails because the square root of \verb+9+ is \verb+3+ rather than the input value -3.

\indent \textbf{Fault 3:} After a few tests the DSSR strategy may select \verb+Integer.MAX_VALUE+ for variable \verb+num1+ from the list of interesting values, which leads to the third fault because \verb+result1+ cannot store the square of \verb+Integer.MAX_VALUE+. Instead of the actual square, the multiplication overflows and wraps around (a Java language rule), so \verb+result1+ ends up far smaller than \verb+num1+, which leads to the violation of assertion (3).

The process above shows that the pre-defined values, including border values, fault-finding values and their surrounding values, lead to the available faults quickly and in a small number of tests compared to the random and random+ strategies. Random and random+ take longer to discover the second and third faults because they go back to searching for new unique failures at random, even though the remaining faults are very close to the first one.

%%%%%%%%%%%%%%%%% IMPLEMENTATION OF DSSR STRATEGY %%%%%%%%%%%%

\section{Implementation of the DSSR strategy}\label{sec:imp}

As mentioned previously, the DSSR strategy is implemented in the YETI open-source automated random testing tool. YETI is developed in Java and capable of testing systems developed in procedural, functional and object-oriented languages. Its language-agnostic meta model enables it to test programs written in multiple languages including Java, C\#, JML and .Net. The core features of YETI include easy extensibility for future growth, a speed of up to one million calls per minute on Java code, real-time logging, real-time GUI support, the ability to test programs using multiple strategies, and automatic generation of a test report at the end of the testing session. For large-scale testing there is a cloud-enabled version of YETI that is capable of executing parallel test sessions in the cloud~\cite{Oriol2010}. A number of hitherto unknown faults have been found by YETI in various production software systems~\cite{Oriol2012, Oriol2011}.

YETI can be divided into three decoupled main parts: the core infrastructure, the language-specific bindings and the strategies. The core infrastructure contains representations for routines, a group of types and a pool of objects of specific types. The language-specific bindings contain the code to make the calls and process the results. The strategies define how to select modules (classes) from the project, how to select routines (methods) from these modules, and how to generate values for the instances involved in these routines. The most common strategies are random and random+. By default, YETI uses the random+ strategy if no particular strategy is defined during test initialization.
It also enables the user to control the probability of using null values and the percentage of newly created objects for each test session. YETI provides an interactive Graphical User Interface (GUI) in which users can see the progress of the current test in real time. In addition to the GUI, YETI also provides extensive logs of the test session for more in-depth analysis. The DSSR strategy has then been added as an extension of YetiRandomStrategy, which is itself an extension of the abstract class YetiStrategy. The class hierarchy is shown in Figure \ref{fig:hierarchyofDSSR}.

\begin{figure}[h]
\centering
\includegraphics[width=4cm,height=4.5cm]{hierarchy.pdf}
\caption{Class hierarchy of DSSR in YETI}
\label{fig:hierarchyofDSSR}
\end{figure}

%%%%%%%%%%%%%%%%% EVALUATION %%%%%%%%%%%%%%%%%%%%

\section{Evaluation}\label{sec:eval}

To evaluate the DSSR strategy, we compare its performance to that of both pure random testing (R) and the random+ (R+)~\cite{Leitner2007} strategy. General factors such as system software and hardware, as well as YETI-specific factors like the percentage of null values, the percentage of newly created objects and the interesting-value injection probability, have the same values for all experiments.

\subsection{Research questions}

To evaluate the DSSR strategy and its usefulness, we set out to answer the following research questions:

\begin{enumerate}

\item Does any of the three strategies R, R+ and DSSR provide better results than the other two?

\item Is there a subset of the classes for which R, R+, or DSSR provides better results than the other two?

\item If such categories exist, what are their sizes, how do they compare, and can we pick a default strategy according to this criterion?

\end{enumerate}

\subsection{Experiments}

To evaluate the performance of DSSR, we performed extensive testing of programs from the Qualitas Corpus~\cite{Tempero2010a}. The Qualitas Corpus is a curated collection of open-source Java projects built with the aim of helping empirical research on software engineering. The projects are collected in an organized form containing both their source and binary forms. The present evaluation uses version 20101126, which contains 106 open-source Java projects. We picked 32 projects at random and then picked 80 classes at random that produced at least 1 failure and did not time out with a testing session of at most 10 minutes. We tested each of the 80 classes thirty times with each strategy. Names and versions of the projects to which these classes belong are given in Table~\ref{table:projects}.

%It is available in two distributions. The release version ``r'' and the evolution version ``e''. The release version is compact size that contain only the recent version of the projects while the evolution version is more detailed which consists of more than 10 different versions of each project.\\
%Extensive experiments were carried out to evaluate the performance of DSSR strategy. Every class was tested 30 times by random, random plus and DSSR strategy.
%The total number of testing sessions performed is 80 x 30 x 3 = 7200.

Each class is evaluated through $10^5$ calls in each testing session.\footnote{The total number of tests is thus $80\times 30\times 3 \times 10^5 = 720\times 10^6~tests$.} Because of the absence of contracts and assertions in the code under test, similarly to previous approaches~\cite{Oriol2012}, we use undeclared exceptions to compute the unique failures found.
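As an illustration of this criterion, the sketch below builds the equivalence-class key described earlier (exception type plus stack trace); counting the distinct keys observed during a session then gives the number of unique failures. This is only a sketch of the criterion, with names of our own choosing, not the code used in the experiments.

\begin{lstlisting}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the unique-failure criterion: two
// failing calls count as the same unique failure
// when they raise the same exception type with
// the same stack trace.
public class UniqueFailures {
  private final Set<String> keys =
      new HashSet<String>();

  // Records a failure; returns true when it is a
  // unique failure that was not seen before.
  public boolean record(Throwable failure) {
    String key = failure.getClass().getName()
        + Arrays.toString(failure.getStackTrace());
    return keys.add(key);
  }

  public int count() {
    return keys.size();
  }
}
\end{lstlisting}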
\begin{table}[h] \caption{Name and versions of 32 Projects randomly selected from the Qualitas Corpus for the experiments} \centering \begin{tabular}{l} ant-1.8.1\\ antlr-3.2\\ aoi-2.8.1\\ argouml-0.30.2\\ artofillusion281\\ aspectj-1.6.9\\ axion-1.0-M2,\\ azureus\\ castor-1.3.1\\ cayenne-3.0.1\\ cobertura-1.9.4.1\\ colt-1.2.0\\ emma-2.0.5312\\ freecs-1.3.20100406\\ hibernate-3.6.0\\ hsqldb-2.0.0\\ itext-5.0.3\\ jasml-0.10\\ jmoney-0.4.4\\ jruby-1.5.2\\ jsXe-04\_beta\\ quartz1.8.3\\ sandmark3.4\\ squirrel-sql-3.1.2\\ tapestry-5.1.0.5\\ tomcat-7.0.2\\ trove-2.1.0\\ velocity-1.6.4\\ weka-3.7.2\\ xalan-2.7.1\\ xerces-2.10.0\\ xmojo-5.0.0\\ \end{tabular} \label{table:projects} \end{table} %\indent Commands for executing the experiments using pure random, random plus and DSSR strategies were as follows. Prog1 is the name of the class and nTests is the number of tests set to be executed during this experiment.\\ % %\begingroup % \fontsize{7pt}{10pt}\selectfont %\begin{itemize} %\item java yeti.Yeti -java -testModules=Prog1 -nTests=10000 -nologs -gui -random. %\item java yeti.Yeti -java -testModules=Prog1 -nTests=10000 -nologs -gui -randomPlus. %\item java yeti.Yeti -java -testModules=Prog1 -nTests=10000 -nologs -gui -DSSR.\\ %\end{itemize} %\endgroup % % All tests are performed using a 64-bit Mac OS X Lion Version 10.7.4 running on 2 x 2.66 GHz 6-Core Intel Xeon with 6.00 GB (1333 MHz DDR3) of RAM. YETI runs on top of the Java\texttrademark SE Runtime Environment [version 1.6.0\_35]. The machine took approximately 100 hours to process the experimental data. %\subsection{Stability of experiments} %Random strategies are characterized by using random input. In random strategy all the faults found in one test run may not necessarily be found in the second test run. Thus, the performance of random strategy cannot be evaluated with a few test sessions. To minimize the random behaviour of random testing every class was tested 30 times by each pure random, random plus and DSSR strategy. This was achieved by creating a batch executable script with the handy feature of YETI called Compact Report which logs each test report to a file for later evaluation. \subsection{Performance measurement criteria} Various measures including the E-measure, P-measure and F-measure have been used by researchers to find the effectiveness of the random test strategy. The E-measure (expected number of failures detected) and P-measure (probability of detecting at least one failure) were heavily criticized~\cite{Chen2008} and are not considered effective techniques for measuring efficiency of test strategy. The F-measure (number of test cases used to find the first fault) has been often used by researchers~\cite{Chen1996,Chen2004}. In our initial experiments the F-measure was used to evaluate the efficiency. Soon after a few experiments, it was realised that this was not the right choice because in some experiments the first strategy found the first fault quickly than the second strategy but on the completion of test session the first strategy found lower number of total faults than the second strategy. The preference to a strategy only because it found the first fault better without giving due consideration to the total number of faults was not fair~\cite{Liu2012}. %%%%%%%%%% REMOVED as it is also present in future work. %%%%%%%%%%%%%%%% %Moreover, for random testing the F-measure is quite unpredictable because its value can be easily increased by adding more narrow conditional statements in the SUT. 
For example in the following program it is difficult for random testing to generate the exact number (3.3338) quickly and therefore the F-measure will be high.\\* %\begingroup % \fontsize{7pt}{8pt}\selectfont %\noindent %\{ \\* %\indent if ( (value $>$ 3.3337) \&\& (value $<$ 3.3339) )\\* %\indent \{ 10/0 \} \\* %\} \\* %\endgroup %%%%%%%%%%%%%%%%%%%%%%%%%%%%% The literature review revealed that the F-measure is used where testing stops after identification of the first fault and the system is given back to the developers to remove the fault found. In such cases it make sense but now a days automated random testing tools test the whole system and print all of the faults found in one go therefore F-measure is not the favorable choice. Therefore in our experiments, performance of the strategy was measured by the maximum number of faults in a particular number of test calls \cite{Pacheco2007a}, \cite{Ciupa2007}, \cite{Ciupa2008b}. This measurement was found effective because it clearly measured the performance of the strategy when all the other factors were kept constant. %%%%%%%%%%%%%%%%% RESULTS %%%%%%%%%%%%%%%%%%%% \begin{figure*}[ht] \centering \includegraphics[width=18cm]{StackedBar100PercentMean.png} \caption{Normalized stacked bar diagram of all tested classes.} \label{fig:stackedbar} \end{figure*} \begin{table*} [htp!] \scriptsize \caption{Complete results for R, R+ and DSSR. Results present mean, max, min and relative standard deviation.} \begin{minipage}[h]{\textwidth}\centering \begin{tabular}{cl c c c c c c c c c c c c} %\hline \multirow{2}{*}{} & \multirow{2}{*}{Class Name} & \multicolumn{4}{c}{R} & \multicolumn{4}{c}{R+} & \multicolumn{4}{c}{DSSR} \\ %\cline{3-14} & & mean & max & min & rel std dev & mean & max & min & rel std dev & mean & max & min & rel std dev \\ % \hline 1 & Routine & 7 & 7 & 7 & 0 & 7 & 7 & 7 & 0 & 7 & 7 & 7 & 0 \\ 2 & Response & 6 & 6 & 6 & 0 & 6 & 6 & 6 & 0 & 6 & 6 & 6 & 0 \\ 3 & Repository & 31 & 31 & 31 & 0 & 40 & 40 & 40 & 0 & 40 & 40 & 40 & 0 \\ 4 & Rectangle & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 \\ 5 & Project & 64.7& 71 & 60 & 0.03 & 66.36 & 78 & 62 & 0.04 & 68.53 & 78 & 64 & 0.04 \\ 6 & ProjectFactory & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ 7 & PersistentSet & 36 & 36 & 36 & 0 & 36 & 36 & 36 & 0 & 36 & 36 & 36 & 0 \\ 8 & PersistentMap & 47 & 47 & 47 & 0 & 47 & 47 & 47 & 0 & 47 & 47 & 47 & 0 \\ 9 & PersistentList & 65 & 65 & 65 & 0 & 65 & 65 & 65 & 0 & 65 & 65 & 65 & 0 \\ 10 & PersistentBag & 68 & 68 & 68 & 0 & 68 & 68 & 68 & 0 & 68 & 68 & 68 & 0 \\ 11 & Coverage & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 \\ 12 & NodeSet & 28.06& 29 & 26 & 0.02 & 27.86 & 29 & 26 & 0.03 & 27.65 & 29 & 26 & 0.03 \\ 13 & NodeSequence & 38 & 46 & 30 & 0.09 & 36.65 & 45 & 30 & 0.09 & 36.62 & 44 & 30 & 0.11 \\ 14 & NameEntry & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0\\ 15 & Response & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0\\ 16 & Mat4 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0\\ 17 & List & 5.27& 6 & 4 & 0.16 & 5.65 & 6 & 4 & 0.09 & 5.34 & 6 & 2 & 0.09\\ 18 & JmxUtilities & 7.68& 8 & 6 & 0.06 & 7.89 & 8 & 7 & 0.03 & 7.86 & 8 & 7 & 0.04\\ 19 & JavaWrapper & 2 & 2 & 2 & 0 & 3.93 & 4 & 3 & 0.25 & 3.96 & 4 & 3 & 0.18\\ 20 & ItemSet & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0\\ 21 & IntStack & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0\\ 22 & IntHolder & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0\\ 23 & InstrumentTask & 1.93& 2 & 1 & 0.13 & 1.96 & 2 & 1 & 0.09 & 2 & 2 & 2 & 0.09\\ 24 & Image & 13.89& 18 & 7 & 0.15 & 12.37 & 
14 & 4 & 0.20 & 12.89 & 15 & 5 & 0.13\\ 25 & HttpAuth & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0\\ 26 & Group & 11 & 11 & 11 & 0 & 10.03 & 4 & 11 & 0.24 & 11 & 11 & 11 & 0\\ 27 & Generator & 17 & 17 & 17 & 0 & 17 & 17 & 17 & 0 & 17 & 17 & 17 & 0\\ 28 & FPGrowth & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0\\ 29 & Font & 11.86& 12 & 11 & 0.02 & 11.86 & 12 & 11 & 0.02 & 11.96 & 12 & 11 & 0.01\\ 30 & FileUtil & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0\\ 31 & Files & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 31 & FileHandler & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0\\ 33 & Facade & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 34 & Entry & 6 & 6 & 6 & 0 & 6 & 6 & 6 & 0 & 6 & 6 & 6 & 0\\ 35 & EntryComparator & 13 & 13 & 13 & 0 & 13 & 13 & 13 & 0 & 13 & 13 & 13 & 0\\ 36 & EntryDecoder & 7.93& 9 & 7 & 0.08 & 8.10 & 9 & 7 & 0.09 & 8.13 & 9 & 7 & 0.08\\ 37 & Entities & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 38 & DOMParser & 6.75& 7 & 0 & 0 & 7 & 7 & 7 & 0 & 7 & 7 & 7 & 0.18\\ 39 & DiskIO & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0\\ 40 & DirectoryScanner & 32.68& 39 & 0 & 0.27 & 35.13 & 38 & 31 & 0.04 & 35.41 & 39 & 32 & 0.04\\ 41 & Debug & 4.62& 6 & 4 & 0.13 & 4.58 & 6 & 4 & 0.12 & 4.86 & 8 & 4 & 0.18\\ 42 & ColumbaClient & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 43 & ClassLoaderLogMan & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 44 & CheckAssociator & 7.06& 8 & 2 & 0.16 & 6.44 & 9 & 2 & 0.33 & 6.96 & 9 & 2 & 0.18\\ 45 & CatalogManager & 7 & 7 & 7 & 0 & 7 & 7 & 7 & 0 & 7 & 7 & 7 & 0\\ 46 & Capabilities & 1.27& 2 & 1 & 0.35 & 1.51 & 2 & 1 & 0.33 & 1.37 & 2 & 1 & 0.36\\ 47 & BitSet & 9 & 9 & 9 & 0 & 9 & 9 & 9 & 0 & 9 & 9 & 9 & 0\\ 48 & BaseColor & 14 & 14 & 14 & 0 & 14 & 14 & 14 & 0 & 14 & 14 & 14 & 0\\ 49 & ArchiveUtil & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0\\ 50 & Apriori & 3.10& 4 & 3 & 0.09 & 3.24 & 4 & 3 & 0.13 & 3.17 & 4 & 3 & 0.11\\ 51 & AntTypeDefinition & 2.89& 4 & 2 & 0.27 & 2.75 & 4 & 2 & 0.29 & 2.79 & 4 & 2 & 0.23\\ 52 & AjTypeImpl & 79.89& 83 & 79 & 0.01 & 80.06 & 83 & 79 & 0.01 & 79.62 & 83 & 79 & 0.01\\ 53 & AdminCore & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0\\ 54 & ActionTranslator & 95.86& 96 & 96 & 0 & 96 & 96 & 96 & 0 & 96 & 96 & 96 & 0\\ 55 & RubyBigDecimal & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0 & 4 & 4 & 4 & 0\\ 56 & Scanner & 3.27& 5 & 2 & 0.19 & 2.79 & 5 & 2 & 0.27 & 3.06 & 5 & 2 & 0.28\\ 57 & Scene & 26.10& 27 & 1 & 0.18 & 25.93 & 27 & 1 & 0.18 & 26 & 27 & 1 & 0.18\\ 58 & SelectionManager & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 59 & Server & 15.51& 21 & 11 & 0.20 & 16.93 & 12 & 21 & 0.16 & 16.93 & 12 & 21 & 0.17\\ 60 & Sorter & 1.96& 2 & 1 & 0.09 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 61 & Sorting & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 62 & SSL &13 & 13 & 13 & 0 & 13 & 13 & 13 & 0 & 13 & 13 & 13 & 0\\ 63 & Statistics & 14.75& 17 & 12 & 0.04 & 23.37 & 25 & 22 & 0.03 & 23.44 & 25 & 22 & 0.04\\ 64 & Status & 53 & 53 & 53 & 0 & 53 & 53 & 53 & 0 & 53 & 53 & 53 & 0\\ 65 & Storpwords & 7.03& 8 & 7 & 0.02 & 7.68 & 8 & 7 & 0.06 & 7.65 & 8 & 7 & 0.06\\ 66 & StringHelper & 43.41& 45 & 41 & 0.01 & 44 & 46 & 42 & 0.02 & 43.55 & 45 & 42 & 0.02\\ 67 & StringUtils &19 & 19 & 19 & 0 & 19 & 19 & 19 & 0 & 19 & 19 & 19 & 0\\ 68 & TextImpl & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 2 & 2 & 2 & 0\\ 69 & TouchCollector & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0 & 3 & 3 & 3 & 0\\ 70 & Trie & 21.17& 22 & 21 & 0.01 & 21.10 & 22 & 21 & 0.01 & 21.03 & 22 & 21 & 0\\ 71 & URI & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0\\ 72 & Itextpdf & 8 & 8 & 8 
& 0 & 8 & 8 & 8 & 0 & 8 & 8 & 8 & 0\\
73 & WebMacro & 5 & 5 & 5 & 0 & 5.06 & 6 & 5 & 0.05 & 5.06 & 7 & 5 & 0.07\\
74 & XMLAttributesImpl & 8 & 8 & 8 & 0 & 8 & 8 & 8 & 0 & 8 & 8 & 8 & 0\\
75 & XMLChar & 13 & 13 & 13 & 0 & 13 & 13 & 13 & 0 & 13 & 13 & 13 & 0\\
76 & XMLEntityManger & 17.03& 18 & 17 & 0.01 & 16.95 & 17 & 16 & 0.01 & 16.96 & 17 & 16 & 0.01\\
77 & XMLEntityScanner & 12 & 12 & 12 & 0 & 12 & 12 & 12 & 0 & 12 & 12 & 12 & 0\\
78 & XMLErrorReporter & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0 & 5 & 5 & 5 & 0\\
79 & XObject & 19 & 19 & 19 & 0 & 19 & 19 & 19 & 0 & 19 & 19 & 19 & 0\\
80 & XString & 23.86& 24 & 23 & 0.01 & 23.55 & 24 & 23 & 0.02 & 23.75 & 24 & 23 & 0.01\\
%\hline
\multicolumn{2}{l}{\textbf{Total}} &1165.53 & 1181 & 1055 & 0.1069 & 1188.73 & 1224 & 1127 & 0.1153 & 1192.55 & 1234 & 1126 & 0.1085\\
%\hline
\end{tabular}
\end{minipage}
\label{table:Results}
\end{table*}

\section{Results}\label{sec:res}

\subsection{Is there an absolute best for DSSR, R+ and R?}

Figure~\ref{fig:stackedbar} presents the results of the 80 randomly selected classes evaluated by the R, R+ and DSSR strategies in an intuitive normalized stacked bar representation, where projects are ranked according to the relative number of unique failures found by DSSR. As a first visual interpretation, it seems that, except in rare cases, all strategies find essentially the same number of unique failures.

Table~\ref{table:Results} contains more detailed information: the name of each class, and the mean, maximum and minimum number of unique failures as well as the relative standard deviation for each of the 80 classes tested using the R, R+ and DSSR strategies. The total value (last row of Table \ref{table:Results}) shows that DSSR detects slightly more unique failures (1192.55), on average, than R (1165.53) and R+ (1188.73). This represents 2.3\% more on average than R and .3\% more than R+. It also shows that DSSR found a higher number of maximum unique failures (1234) and minimum unique failures (1126) than R (1181), (1055) and R+ (1224), (1127) respectively. This represents a 4.5\% improvement over R and .8\% over R+ for the maximum, and a 6.7\% improvement over R and a .1\% decrease over R+ for the minimum. Finally, the relative standard deviations are all of the order of .1 for all strategies.

The answer to this research question is thus that whereas DSSR produces a slightly higher number of unique failures, this is not significantly higher than R+. We can thus say that R+ and DSSR are better choices than R as an absolute strategy, but that neither significantly outperforms the other.

\begin{figure*}[ht]
\centering
\includegraphics[width=10cm]{pie2.png}
\caption{Result categories.}
\label{fig:pie}
\end{figure*}

\subsection{Are there classes for which one of the strategies provides better results?}

Results can be split into six different categories, as shown in Figure~\ref{fig:pie}. The first category is the largest: each strategy performed equally well and found the same number of unique failures after $10^5$ tests. It contains 50 classes (62\% of the experiments).
% in table \ref{table:equal}
The second category contains 11 classes (14\%) where R performed better than DSSR and R+.
%It contain 11 classes (14\% experiments).
%given in table \ref{fig:Randombetter}, figure \ref{fig:Randombetter}.
The third category contains 7 classes (9\%) where R+ performed better than DSSR and R.
%It contain 7 classes (9\% experiments).
% given in table \ref{table:RandomPlusbetter}, figure \ref{fig:RandomPlusbetter}.
The fourth category contains 7 classes (9\%) where DSSR performed better than R and R+.
%It contain 7 classes (9\% experiments) given in table \ref{table:Randombetter}, figure \ref{fig:Randombetter}.
The fifth category contains only one class (1\%) where both DSSR and R found an equal number of unique failures and performed better than R+.
%shown in table \ref{table:DSSRequaltoRandom}, figure \ref{fig:DSSRequaltoRandom} .
The sixth category contains 4 classes (5\%) where DSSR and R+ found an equal number of unique failures and performed better than R.
%There are 4 classes (5\%) as shown in table \ref{table:DSSRequaltoRandomPlus}, figure \ref{fig:DSSRequaltoRandomPlus}.
No class was found for which R and R+ performed equally well and better than DSSR.

The answer to this research question is that most classes (62\%) did not exhibit significantly different behaviors across strategies. In 38\% of the cases, though, one or two strategies work better than the other(s). In particular, 14\% of the classes performed better with R, 9\% of the classes performed better with R+ and 9\% with DSSR. This shows that the assumptions made when developing R+ --- some border values are more bug-prone --- and DSSR --- failure domains are connected --- only hold in a minority of cases and are code-dependent.

\subsection{Can we pick the best default strategy between R, R+ and DSSR?}

With the data presented in this section, it is not possible to pick a best strategy for all classes. In most cases, results are not different from one strategy to another, but in the other cases, the best strategy is very much dependent on the tested code, and none of R, R+ and DSSR is significantly better than the other two on larger classes. In the next section we also discuss other factors that influence the outcome of such a question, such as execution time and the number of faults present in the tested code.

% // HERE IS THE NEW STRUCTURE
% Present the data with the graphs that Mian generated using Excel.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Present each of the 6 categories (best with R, best with R+, best with DSSR, R=R+>DSSR, DSSR=R>R+, and DSSR=R+>R)
% Add data about the number of classes that are best served with each strategy and which are equivalently in the caption of the figure and in the text.
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 1: 50 Experiments where each strategy performed equally well and found same number of faults} %\centering %\begin{tabular}{|l|c|} %\hline\hline %No of Experiments & 50 \\ %Mean & 12.32 \\ %Median & 5 \\ %Standard Deviation & 17.73 \\ %Min No of Faults & 0 \\ %Max No of Faults & 96 \\ %\hline %\end{tabular} %\label{table:equal} %\end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 2: 11 out of 80 Experiments where Random strategy performed better than DSSR and Random Plus.} %\centering %\begin{tabular}{|l|c|c|c|} %\hline\hline % & R & R+ & DSSR \\ %\hline %Mean & 16.78 & 16.6 & 16.65 \\ %Median & 17 & 17 & 17 \\ %Standard Deviation & 8.91 & 9.03 & 8.95 \\ %Min No of Faults & 1 & 1 & 1\\ %Max No of Faults & 17 & 17 & 17\\ %\hline %\end{tabular} %\label{table:Randombetter} %\end{table} % %\begin{figure}[ht] %\centering %\includegraphics[width=5cm,height=4cm]{Randombetter5.png} %\caption{Random better.} %\label{fig:Randombetter} %\end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 3: 7 out of 80 Experiments where Random Plus strategy performed better than Random and DSSR} %\centering %\begin{tabular}{|l|c|c|c|} %\hline\hline % & R & R+ & DSSR \\ %\hline %Mean & 26.32 & 26.95 & 26.68\\ %Median & 7 & 8 & 7.5 \\ %Standard Deviation & 31.39 & 31.15 & 31.02\\ %Min No of Faults & 1 & 1 & 2\\ %Max No of Faults & 83 & 83 & 83\\ %\hline %\end{tabular} %\label{table:RandomPlusbetter} %\end{table} % % %\begin{figure}[H] %\centering %\includegraphics[width=5cm,height=4cm]{RandomPlusbetter5.png} %\caption{Random Plus better.} %\label{fig:RandomPlusbetter} %\end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 4: 7 out of 80 Experiments where DSSR strategy performed better than Random and Random Plus} %\centering %\begin{tabular}{|l|c|c|c|} %\hline\hline % & R & R+ & DSSR \\ %\hline %Mean & 23.44 & 26.32 & 26.36\\ %Median & 12 & 12 & 12 \\ %Standard Deviation & 15.81 & 14.85 & 14.70\\ %Min No of Faults & 0 & 4 & 4\\ %Max No of Faults & 45 & 46 & 45\\ %\hline %\end{tabular} %\label{table:DSSRbetter} %\end{table} % % %\begin{figure}[H] %\centering %\includegraphics[width=5cm,height=4cm]{DSSRbetter5.png} %\caption{DSSR Better.} %\label{fig:DSSRbetter} %\end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 5: 4 out of 80 Experiments where Random Plus and DSSR performed equally better} %\centering %\begin{tabular}{|l|c|c|} %\hline\hline % & R+ = DSSR & R \\[1ex] %\hline %Mean & 13.76 & 11.18\\ %Median & 7 & 5\\ %Standard Deviation & 15.22 & 11.60\\ %Min No of Faults & 3 & 0\\ %Max No of Faults & 40 & 31\\ %\hline %\end{tabular} %\label{table:DSSRequaltoRandomPlus} %\end{table} % %\begin{figure}[ht] %\centering %\includegraphics[width=5cm,height=4cm]{DSSRequaltoRandomPlus5.png} %\caption{DSSR equal to RandomPlus.} %\label{fig:DSSRequaltoRandomPlus} %\end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[H] %\caption{Category 6: 1 out of 80 Experiments where Random and DSSR performed equally better} %\centering %\begin{tabular}{|l|c|c|} %\hline\hline % & P = DSSR & R+ \\ %\hline %Mean & 11 & 10.06\\ %Median & 11 & 11\\ %Standard Deviation & 0 & 2.42\\ %Min No of Faults & 11 & 4\\ %Max No of Faults & 11 & 11\\ %\hline %\end{tabular} 
%\label{table:DSSRequaltoRandom}
%\end{table}
%
%\begin{figure}[H]
%\centering
%\includegraphics[width=5cm,height=4cm]{DSSRequaltoPureRandom5.png}
%\caption{DSSR equal to Pure Random.}
%\label{fig:DSSRequaltoRandom}
%\end{figure}
%
%Conclusions:
%- we show that for roughly 10\% of classes R+ works best, for 10\% DSSR works best, 10\% RT works best
%- overall DSSR finds more faults in the same time or number of tests (but this is marginally more than other methods)
%Please check if there are more conclusions something with the standard deviation (is it higher, lower etc...) and the minimum as well.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{figure}[H]
%\centering
%\includegraphics[width=5cm,height=4cm]{combineMean.png}
%\caption{Combined Mean.}
%\label{fig:Mean}
%\end{figure}
%\begin{figure}[H]
%\centering
%\includegraphics[width=5cm,height=4cm]{combineMin.png}
%\caption{Combined Min.}
%\label{fig:Min}
%\end{figure}
%\begin{figure}[H]
%\centering
%\includegraphics[width=5cm,height=4cm]{combineMax.png}
%\caption{Combined Max.}
%\label{fig:Max}
%\end{figure}
%\begin{figure}[H]
%\centering
%\includegraphics[width=5cm,height=4cm]{combineStdDev.png}
%\caption{Combined StdDev.}
%\label{fig:StdDev}
%\end{figure}
%// END NEW STRUCTURE
%DSSR has the highest Mean value of finding faults which means that DSSR performs better then random and Random plus. The reason for small improvement instead of 10 and 20\% is described in detail in Discussion section. Similarly the other noticeable improvement is the minimum number of faults DSSR can find is 376 while for random and random plus it is 340 and 344 respectively which means that DSSR strategy always find some of the faults which random and random plus might not. On the other hand DSSR finds maximum 574 faults versus 579 faults of random and random plus but this difference is very small and can be ignored. During the experiments we also found that in some classes like AntClassLoader (Ant project), Server (Freecs project), BaseFont (itext project) and Util (JsXe project) DSSR strategy found higher number of minimum and maximum faults where as in the same classes random and random plus found 0 or very few faults. \\
%\begin{figure}[ht]
%\centering
%\includegraphics[width=9cm,height=7cm]{newResults.png}
%\caption{Test Results of 34 classes from 16 Java projects.}
%\label{fig:Result1}
%\end{figure}
%Figure \ref{fig:Result1} show the results of each experiments using bar chart. From the figure we can see that in few of the cases all the three strategies found equal number of faults while in most cases if not all DSSR performs better than random and random plus strategy.

%%%%%%%%%%%%%%%%% DISCUSSION %%%%%%%%%%%%%%%%%%%%

\section{Discussion}\label{sec:discussion}

\textbf{Time taken by the DSSR, Random and Random plus strategies to execute tests:} To execute an equal number of test cases, DSSR takes slightly more time (between 5 and 10\% overhead) than both pure random and random plus. This is due to maintaining the sets of interesting values. The overhead is dependent on our implementation and could also be reduced if needed.

\textbf{Effect of test duration and number of tests on the results:} All three techniques have the same potential for finding bugs. If testing ran indefinitely, all techniques should find the same number of unique failures. So the results will converge as testing sessions become longer (resp.\ contain more tests). We suspect, however, that some of the unique failures found would take extremely long to find using random or random+ only.
Further experiments should confirm this point.
%We found that test duration increases either because of increase in time or number of test cases which results in improving the performance of DSSR strategy than random and random plus. It is because when test duration or number of tests increases, the list of interesting values also increases and in turn DSSR strategy get enough relevant values in the list of interesting values and can easily pick one from the list instead of selecting it randomly or from static list of random plus.\\

\textbf{Effect of the number of faults on the results:} We found that the DSSR strategy performs better when the number of faults in the code is higher. The reason seems to be that when there are more faults, their domains are more connected, and DSSR then works better. Further studies might use historical data to pick the best strategy.

%\indent \textbf{Can Pure Random and Random Plus Testing perform better than DSSR strategy:}
%The experimental results indicated that pure random and random plus testing can perform better than DSSR strategy if the SUT contain point pattern of failures rather than block and strip pattern. It is due to the fact that in such cases faults don't lay in the neighbourhood of found fault and adding neighbouring values of the founded fault dont make any impact on performance therefore the extra computational time becomes a liability.\\

\textbf{DSSR strategy dependence on finding the first unique failure early enough:} During the experiments we found that if no unique failure is found quickly enough, no value is added to the list of interesting values, and the test session is then equivalent to one using the random+ strategy. This means that better ways of populating failure-inducing values are needed to leverage DSSR better. As an example, the following piece of code would be unlikely to fail under the current setting:

\begin{lstlisting}
public void test(float value){
  // float literal, so the comparison
  // can actually succeed
  if(value == 34.4445f)
  { int crash = 10 / 0; } // fails here
}
\end{lstlisting}

In this case, we could add constant literals from the SUT to the list of interesting values in a dynamic fashion. These literals can be obtained from the constant pool in the class files of the SUT. In the example above, the value 34.4445 and its surrounding values would be added to the list of interesting values before the test starts, and the DSSR strategy would find the unique failure right away.

\textbf{DSSR strategy and coverage:} Random strategies typically achieve a high level of coverage~\cite{Oriol2010}. It might also be interesting to compare R, R+ and DSSR with respect to the achieved coverage, or even to use a DSSR variant that adds a new interesting value and its neighbours when a new branch is reached.

\textbf{Threats to validity:} As usual with such empirical studies, the present work might suffer from a non-representative selection of classes. The present selection was, however, made at random and through objective criteria, and it seems unlikely that the selected classes are not representative. The parameters of the study might also have prompted incorrect results. This is however unlikely given previous results on random testing~\cite{Oriol2012}.

%%%%%%%%%%%%%%%%% RW %%%%%%%%%%%%%%%%%%%%

\section{Related Work}\label{sec:rw}

Random testing is a popular technique with a simple algorithm that has been proven to find subtle faults in complex programs and Java libraries~\cite{Pacheco2005, Csallner2004, Claessen2000a}. Its simplicity, ease of implementation and efficiency in generating test cases make it an excellent choice for test automation~\cite{Hamlet1994}.
A few of the well-known automated tools based on the random strategy are Jartege~\cite{Oriat2004}, Eclat~\cite{Pacheco2005}, JCrasher~\cite{Csallner2004}, AutoTest~\cite{Ciupa2007},~\cite{Ciupa2008a} and YETI~\cite{Oriol2010, Oriol2012}, which was used to conduct this research study. In pursuit of better results and lower overhead, many variations of the random strategy have been proposed~\cite{Chen2010, Chen2005, Chan2002, Chen2004a, Chen2003}. Adaptive random testing (ART), quasi-random testing (QRT) and restricted random testing (RRT) achieved better results by selecting test inputs randomly but evenly spread across the input domain. Similarly, mirror ART and ART through dynamic partitioning increased performance by reducing the overhead of ART. One of the main reasons behind the better performance of these strategies is that an even spread of test inputs increases the chance of exploring the fault patterns present in the input domain. The random+ (R+) strategy~\cite{Leitner2007} is a variation of the random strategy in which interesting values, besides pure random values, are added to the list of test inputs. These interesting values include border values~\cite{Beizer1990}, which have a high tendency of finding faults in the given SUT. Results obtained with the R+ strategy show a significant improvement over the pure random strategy. The DSSR strategy also relies on the R+ strategy at the start, until a fault is found, at which point it switches to the spot sweeping strategy. Interestingly, while numerous efforts have been made to discover fault patterns~\cite{Chen2010, Chen2005, Chan2002, Chen2004a, Chen2003}, to our knowledge nothing has been published on covering/sweeping all the faults lying in a specific pattern once it has been discovered. A common practice to evaluate the performance of newly created or existing strategies is to compare the results obtained (theoretically and empirically) after applying them to similar programs~\cite{Gutjahr1999, Duran1984, Hamlet1990}. Arcuri et al. stress the use of random testing as a comparison baseline to assess other test strategies~\cite{Arcuri2012}. We followed a similar procedure and evaluated DSSR against R and R+ under constant conditions. The Qualitas Corpus~\cite{Tempero2010} is a collection of open source Java programs maintained for independent empirical research~\cite{Oriol2012, Tempero2010a, Tempero2008}. These projects are carefully selected so that they span the whole range of Java applications. %%%%%%%%%%%%%%%%% CONCLUSIONS %%%%%%%%%%%%%%%%%%%% \section{Conclusions}\label{sec:conc} The main goal of the present study was to develop a new random strategy that finds more faults in a lower number of test cases by leveraging the assumption that, in a significant number of classes, failure domains are contiguous or very close to one another. The result is the dirt spot sweeping strategy, a strategy which adds neighbouring values of failure-inducing values to a set of preferred values. We implemented DSSR as a strategy for the random testing tool YETI and tested each of 80 classes from the Qualitas Corpus thirty times with each of the three strategies DSSR, R, and R+. We found that for 68\% of the classes all three strategies find the same unique failures, for 9\% of the classes random+ performs better, for 14\% pure random performs better, and for 9\% DSSR performs better. Overall, DSSR also found 2.3\% more unique failures than random and 0.3\% more unique failures than random+.
Overall, DSSR is a strategy that uncovers more unique failures than both the random and random+ strategies. It achieves this, however, with a 5-10\% overhead, which makes it an unlikely candidate as the default strategy of a random testing tool. It nevertheless yields encouraging results and argues for developing the technique further for settings in which it is significantly better than both R+ and R. %\indent Improvement in performance of DSSR strategy over random strategy was achieved by taking advantage of Random Plus and fault neighbouring values. Random plus incorporated not only border values but it also added values having higher chances of finding faults in the SUT to the list of interesting values.\\ %\indent The DSSR strategy is highly effective in case of systems containing block and strip pattern of failure across the input domain.\\ %\indent Due to the additional steps of scanning the list of interesting values for better test values and addition of fault finding test value and its neighbour values, the DSSR strategy takes up to 5\% more time to execute equal number of test cases than pure random and random plus. \\ %\indent In the current version of DSSR strategy, it might depend on random or random plus strategy for finding the first fault if the fault test value was not in the list of interesting values. Once the first fault is found only then DSSR strategy could make an impact on the performance of test strategy. %\indent The limitation of random plus strategy is that it maintains a static list of interesting values which remains the same for each program under test, and can be effective in many cases but not always. The better approach will be to have a dynamic list of interesting values that is automatically updated for every program which can be achieved by adding the program literals and its surrounding values to the list of interesting values prior to starting every new test session. %%%%%%%%%%%%%%%%% ACKNOWDLEGEMENT %%%%%%%%%%%%%%%%%%%% \section{Acknowledgments} The authors thank the Department of Computer Science, University of York for its financial support through the Departmental Overseas Research Scholarship (DORS) award. The authors also thank Richard Page for his valuable help and generous support. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Present basic statistical data (max min, mean, std deviation) about each strategy aggregated over all classes. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \bibliographystyle{IEEEtran} \bibliography{bare_conf} \end{document}
{ "alphanum_fraction": 0.6869763545, "avg_line_length": 72.0134078212, "ext": "tex", "hexsha": "b32a36b893c752babe432b6160fc6cc058b1fc16", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_forks_repo_licenses": [ "BSD-4-Clause" ], "max_forks_repo_name": "maochy/yeti-test", "max_forks_repo_path": "papers/ICST2012/bare_conf.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-4-Clause" ], "max_issues_repo_name": "maochy/yeti-test", "max_issues_repo_path": "papers/ICST2012/bare_conf.tex", "max_line_length": 1064, "max_stars_count": null, "max_stars_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_stars_repo_licenses": [ "BSD-4-Clause" ], "max_stars_repo_name": "maochy/yeti-test", "max_stars_repo_path": "papers/ICST2012/bare_conf.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 19671, "size": 64452 }
\documentclass[11pt]{article} \usepackage[margin=1in]{geometry} \usepackage{setspace} \onehalfspacing \usepackage{graphicx} \usepackage{listings} % DOCUMENT INFORMATION ================================================= \title {ECEN 429: Introduction to Digital Systems Design Laboratory \\ North Carolina Agricultural and Technical State University \\ Department of Electrical and Computer Engineering} % Declare Title \author{Chris Cannon} % Declare authors \date{February 1, 2018} % ====================================================================== \begin{document} \maketitle \begin{center} Lab 2 Prelab \end{center} \pagebreak \section{Introduction} The objective of this lab is to further master the use of the Vivado Design Suite in conjunction with the Basys3 Development Board. We will also practice implementing circuits of a given design on the Basys3 board. This lab consists of three parts, utilizing the seven segment displays for the first time. I anticipate that the seven segment display will offer a new technical challenge to complete. \section{Background, Design Solution, and Results} \subsection{Problem 1 Seven Segment Display} Using out1-out7 as the outputs from my VHDL program, I can map these to pins a-g on a seven segment display. See Table 1. \begin{table}[h] \begin{center} \begin{tabular}{| l | l | l |} \hline Output & Segment & Pin \\ \hline out1 & a & W7 \\ \hline out2 & b & W6 \\ \hline out3 & c & U8 \\ \hline out4 & d & V8 \\ \hline out5 & e & U5 \\ \hline out6 & f & V5 \\ \hline out7 & g & U7 \\ \hline \end{tabular} \caption{\label{tab:table-name}FPGA pin assignments for seven segment display.} \end{center} \end{table} \subsection{Problem 2 1:2 Decoder} This decoder will convert a single bit into a 2-bit one hot format. The truth table for this simple circuit is Table 2. In order to perform this in VHDL, I would use the following simple program: \begin{lstlisting}[language=VHDL] library IEEE; use IEEE.STD_LOGIC_1164.ALL; -- This declares an entity that will have the 2 needed inputs and 2 outputs entity decoder is port (a, sel : in bit; out1, out2 : out bit); end entity decoder; architecture decoder_arch of decoder is begin -- out1 is the result of 'A' AND the opposite of 'SEL' out1 <= a and (not sel); -- out2 is the result of 'A' AND 'SEL' out2 <= a and sel; end architecture decoder_arch; \end{lstlisting} \begin{table}[h] \begin{center} \begin{tabular}{| l | l | l | l |} \hline a & sel & out1 & out2 \\ \hline 0 & 0 & 0 & 0 \\ \hline 0 & 1 & 0 & 0 \\ \hline 1 & 0 & 1 & 0 \\ \hline 1 & 1 & 0 & 1 \\ \hline \end{tabular} \caption{\label{tab:table-name}Truth table for 1:2 decoder.} \end{center} \end{table} \subsection{Problem 3 SUM of a Full Adder} I will derive the expression for SUM by first reference the truth table of the full-adder in Table 3. From table 3 I can derive that SUM will be equal to: \begin{lstlisting}[language=VHDL] ((not a) and b and ci) or (a and (not b) and ci) or (a and b and (not ci)) or (a and b and ci) \end{lstlisting} \begin{table}[h] \begin{center} \begin{tabular}{| l | l | l | l | l |} \hline A & B & CI & SUM & CO \\ \hline 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 1 & 1 & 0 \\ \hline 0 & 1 & 0 & 1 & 0 \\ \hline 0 & 1 & 1 & 0 & 1 \\ \hline 1 & 0 & 0 & 1 & 0 \\ \hline 1 & 0 & 1 & 0 & 1 \\ \hline 1 & 1 & 0 & 0 & 1 \\ \hline 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \caption{\label{tab:table-name}Truth table for full adder.} \end{center} \end{table} \section{Conclusion} After completion of the exercises, I feel well prepared for the lab. 
While the seven-segment display mapping was new, it was an easy challenge to overcome by consulting the documentation. I also think that writing out the truth table for the full adder, even though it was not strictly required, gives me an additional resource that will be useful as I work through the lab. \end{document}
{ "alphanum_fraction": 0.6703882263, "avg_line_length": 33.1176470588, "ext": "tex", "hexsha": "4025a95563c839e91b539061de29fcb5b58f8182", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ccannon94/ncat-ecen429-repository", "max_forks_repo_path": "ChrisPrelabs/Lab2Prelab.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ccannon94/ncat-ecen429-repository", "max_issues_repo_path": "ChrisPrelabs/Lab2Prelab.tex", "max_line_length": 399, "max_stars_count": null, "max_stars_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ccannon94/ncat-ecen429-repository", "max_stars_repo_path": "ChrisPrelabs/Lab2Prelab.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1241, "size": 3941 }
\lesson{2}{Sep 30 2021 Thur (09:00:12)}{The Constitution}{Unit 2} \subsubsection*{What Was the Articles of Confederation?} When the colonies declared their independence from Britain, they no longer had a government. They had to create a new one to organize the nation and fight the Revolutionary War for independence. In $1777$, The Articles of Confederation was written and became the first plan for governing the United States. To prevent tyranny from rising over them, early American leaders made a weak government by restricting or limiting the powers of the new government. This first government was a \bf{confederation}, or group of loosely allied states. The states agreed to form a partnership that was called a \bf{"firm league of friendship"} under the \bf{Articles of Confederation}. The lack of a strong centralized government meant that each state had the final authority over its own area. During the time the Articles of Confederation were in force, Americans won the Revolutionary War and signed a peace treaty in $1783$. The Articles of Confederation were in place for about $10$ years. Though it was a brief government, its leaders achieved some important goals. \subsubsection*{Why Did the Articles of Confederation Fail?} The content within the document placed a high value on federalism, which upheld the supremacy of state governments and equality between the states. The Articles of Confederation created a limited national government without separate branches, as its role was to be small overall. State legislatures chose delegates to send to Congress. Though it saw Americans through the fight for independence and reflected many of the ideals and principles they valued, the Articles of Confederation did not last. The problems of supplying troops sufficiently for national defense and internal rebellion after the war revealed the need for a stronger central government. Here are some of the reasons weaknesses in the Article of Confederation: \begin{center} \begin{table}[htbp] \begin{tabular}{ p{0.35\linewidth} | p{0.6\linewidth} } \hline Weakness & Why It Caused a Problem \\ \hline Each state had one vote in the legislative branch. & This gave each state equal power in the central government. However, states with larger populations, like Virginia, did not think it was fair to have the same amount of power as a state with fewer people. \\ \hline The government did not have an executive branch. & When Congress passed a law, the state governments were supposed to enforce it. Congress had no power to make sure each state enforced the law. Congress could not force a state to send a criminal to another state to face charges. The Articles of Confederation expected each state government to use its own executive power in good faith for the good of all states. \\ \hline The government did not have a separate judicial branch. & If a law's meaning was in question, there was no one who could fairly settle the issue and make sure all states followed the ruling. In addition, while people could move freely between states, no court existed to settle disputes between states or people in different states. The Articles of Confederation did create a process by which Congress could raise a temporary court, but it was a long and inefficient process. \\ \hline Congress could not create taxes. & Taxes pay for a government's functions. Congress could only request money from the state legislatures. If the states did not send any, Congress was powerless as it had no executive branch to enforce payment. 
\\ \hline Congress could not raise a national military separate from the states. & During the Revolutionary War, General George Washington requested money for supplies. Congress could not force the states to pay. The troops often went without, starving at times. States were expected to train and equip their own militias and send troops when Congress requested, yet there was no one to coordinate or enforce this requirement. \\ \hline \end{tabular} \caption{Why the Article of Confederation was weak} \label{table:why-was-the-article-of-confederation-was-weak} \end{table} \end{center} \newpage \subsubsection*{Why Did the United States Need a New Constitution?} Representatives from most of the $13$ states met in May $1787$ to fix the Articles of Confederation. The meeting was called the \bf{Constitutional Convention}, or the \bf{Philadelphia Convention} for the city where it took place. Most representatives agreed that a stronger central government was necessary. James Madison proposed his idea for a new constitution to replace the Articles of Confederation, which became the basis for the convention discussion and debate. By this time, the Articles' weaknesses were common topics of conversation among state leaders. Yet not everyone agreed they should be replaced rather than simply revised. Shays' Rebellion, which ended only a few months earlier, likely influenced some of the delegates. Daniel Shays and several thousand other citizens in Massachusetts protested against state tax policies. Some protests turned violent, but the federal government couldn't fund an army to restore order. It took an army funded by private citizens to end the rebellion. The need to strengthen the national government was clear, but the delegates spent four days debating whether to revise the Articles of Confederation or start over. They ultimately chose to create a new constitution. \subsubsection*{What Are the Parts of the Constitution?} The Constitution begins with the Preamble, or introduction of purpose, and then outlines government structure and function in seven main articles. You can remember the main sections of the Constitution by using the acronym \bf{\it{LEJ RASR}}. The document reflects many important political principles. In the Preamble, the beginning phrase \bf{"We the People"} establishes popular sovereignty as a basic essential principle in U.S. government. The Constitution reflects separation of powers by distributing power among three branches of government, outlined in the first three articles. The fourth article talks about the reserved powers of the states, reflecting federalism. Examine the main idea and quotes from each section of the Constitution. \begin{center} \begin{table}[htbp] \begin{tabular}{ p{0.35\linewidth} | p{0.6\linewidth} } \hline Description & Quote \\ \hline \bf{\it{(L–Legislative Branch)}} The legislative branch makes the laws. The Constitution names the national legislature \bf{“Congress”} and separates it into two houses, the House of Representatives and the Senate. & Article. I. Section. 1. All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives. \\ \hline \bf{\it{(E–Executive Branch)}} The executive branch executes or enforces the laws. The Constitution states a president will be the head of the executive branch. & Article. II. Section. 1.–The executive Power shall be vested in a President of the United States of America. 
\\ \hline \bf{\it{(J–Judicial Branch)}} The judicial branch interprets laws and settles disputes. The Constitution names the highest court in the nation the “Supreme Court” and gives Congress power to create lower federal courts. & Article III. Section. 1.–The judicial Power of the United States shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. \\ \hline \bf{\it{(R–Reserved Powers of the States)}} Powers that are not expressly given to the federal government are reserved to the states. & Article. IV. Section. 1.–Full Faith and Credit shall be given in each State to the public Acts, Records, and judicial Proceedings of every other State. \\ \hline \bf{\it{(A–Amendment Process)}} The amendment process requires support from state legislatures, as well as the federal government, to make a change to the Constitution. & Article. V.–The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution, or, on the Application of the Legislatures of two thirds of the several States, shall call a Convention for proposing Amendments, which, in either Case, shall be valid to all Intents and Purposes, as Part of this Constitution, when ratified by the Legislatures of three fourths of the several States, or by Conventions in three fourths thereof. \\ \hline \bf{\it{(S–Supremacy Clause)}} The supremacy clause states that the Constitution and federal laws are the highest laws in the nation. & Article. VI.–This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land. \\ \hline \bf{\it{(R–Ratification)}} Ratification of the Constitution requires approval from nine of the 13 state legislatures. & Article. VII.–The Ratification of the Conventions of nine States, shall be sufficient for the Establishment of this Constitution between the States so ratifying the Same. \\ \hline \end{tabular} \caption{Articles and Descriptions} \label{table:articles-and-descriptions} \end{table} \end{center} \newpage \subsubsection*{How Does the Constitution Reflect Valued Principles and Create a Stronger Government?} Many of the same principles that led the colonists to declare independence framed how they thought government should work. However, they did not all agree on how to best express each ideal they valued in the powers of government. For example, they agreed that the people should be the ultimate source of political power, meaning they believed in democracy and popular sovereignty. However, a nation so large had to have some system of representation. It needed a form of indirect democracy based on republicanism, the idea that the people will elect officials to make decisions for them. How to set up that representation was a point of debate for the framers. \subsubsection*{How Do Checks and Balances Limit Government?} Most of the framers of the Constitution believed they had created a government limited to the powers described in the document, but they knew it left room for expansion. For example, the \bf{"elastic"} or \bf{"necessary and proper"} clause in Article I allows Congress to make other laws considered necessary for the welfare of the nation. This broad statement has allowed actions not directly named in the Constitution, such as creating an air force branch of the military. 
In addition, while \bf{"rule by the people"} is a valued principle, it would be possible for a majority of voters to pass laws that deny rights to those who do not agree with them. \begin{marginfigure} \centering \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{images/how-do-checks-and-balances-limit-government} \sidecaption{The three branches of government} \label{fig:the-three-branches-of-government} \end{marginfigure} To add an amendment to the Constitution, the idea must go through a formal two-step process of proposal and ratification. It involves both the states and the federal government, reflecting the principle of federalism. Elected representatives in the state and federal legislatures propose and ratify amendments. The amendment process has two main steps: \begin{itemize} \item Proposal—the amendment idea is officially presented for debate by a two-thirds vote of Congress, or a national convention called by two-thirds of the state legislatures \item Ratification—the amendment idea is passed and becomes part of the Constitution by a three-fourths vote of the state legislatures or special state conventions \end{itemize} The Bill of Rights is the name for the first $10$ amendments to the Constitution. It is not part of the original document. The states ratified these amendments within a few years after the ratification of the Constitution. The $10$th Amendment itself reflects federalism because it protects powers of the states not expressly given to the federal government. For example, since the Constitution does not mention public education, the state governments have the right to create and maintain public schools. \begin{center} \begin{table}[htbp] \begin{tabular}{ p{0.35\linewidth} | p{0.6\linewidth} } \hline Amendment & Description \\ \hline The First Amendment & The First Amendment guarantees freedom of speech, freedom of religion, and freedom of the press. The press includes newspapers, television and web-based communication such as blogs and websites. \\ \hline The Second Amendment & The Second Amendment protects the right of the people to \bf{"keep and bear arms"}. In this case, arms means weapons, or \bf{"firearms"}. \\ \hline The Third Amendment & Third Amendment says that the government cannot force people to keep soldiers in their homes. It was written because people remembered what it was like when the British used to force colonists to house British soldiers. \\ \hline The Fourth Amendment & The Fourth Amendment says that people's homes are private. Government officials cannot enter them without a search warrant. \\ \hline The Fifth Amendment & The Fifth Amendment contains important rights of a person accused of a crime. These guarantees were added to the Constitution because dishonest governments in the past had accused people falsely so they could put them in jail. \\ \hline The Sixth Amendment & The most important part of this amendment is its promise of a jury trial. A jury is a group of ordinary citizens who decide together whether a person accused of a crime is guilty or not guilty. \\ \hline The Seventh Amendment & The Seventh Amendment applies the right to a jury trial to civil cases. Civil cases do not deal with crimes. They are disagreements between individuals. If a case involves more than $20$, the parties have a right to a jury trial. \\ \hline The Eighth Amendment & Bail is money that the court requires before a person accused of a crime can be released from jail. 
The Eighth Amendment states that the amount of bail can't be too high. It also says that punishment for crime cannot be \bf{"cruel"} or \bf{"unusual"}. \\ \hline The Ninth Amendment & Some men in Congress worried about including a list of rights in the Constitution. Could the government later use this list to say that people had only those rights listed? To solve this problem, James Madison wrote the Ninth Amendment. \\ \hline The Tenth Amendment & The Tenth Amendment says that the federal government can only use those powers given to it in the Constitution. All other powers belong to the states. This amendment satisfied the Anti-Federalists, who thought the federal government could get so powerful that it would destroy the states. \\ \hline \end{tabular} \caption{Articles and Descriptions} \label{table:articles-and-descriptions} \end{table} \end{center} \newpage
{ "alphanum_fraction": 0.7782787689, "avg_line_length": 62.2073170732, "ext": "tex", "hexsha": "3e264faed34c7d8a540c571430225b61feb95105", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_forks_repo_licenses": [ "Info-ZIP" ], "max_forks_repo_name": "SingularisArt/notes", "max_forks_repo_path": "Grade-10/semester-1/hs-government/unit-2/lesson-2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Info-ZIP" ], "max_issues_repo_name": "SingularisArt/notes", "max_issues_repo_path": "Grade-10/semester-1/hs-government/unit-2/lesson-2.tex", "max_line_length": 139, "max_stars_count": 6, "max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_stars_repo_licenses": [ "Info-ZIP" ], "max_stars_repo_name": "SingularisArt/notes", "max_stars_repo_path": "Grade-10/semester-1/hs-government/unit-2/lesson-2.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z", "num_tokens": 3506, "size": 15303 }
%------------------------- % Resume in Latex % Author : Sourabh Bajaj % License : MIT %------------------------ \documentclass[letterpaper,11pt]{article} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage[pdftex]{hyperref} \usepackage{fancyhdr} \usepackage{ragged2e} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \addtolength{\oddsidemargin}{-0.375in} \addtolength{\evensidemargin}{-0.375in} \addtolength{\textwidth}{1in} \addtolength{\topmargin}{-.5in} \addtolength{\textheight}{1.0in} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] %------------------------- % Custom commands \newcommand{\resumeItem}[2]{ \item\small{ \textbf{#1}{: #2 \vspace{-2pt}} } } \newcommand{\resumeItemm}[2]{ \item\small{ {#1}{#2 \vspace{-2pt}} } } \newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}} \newcommand{\jsk}[2]{\resumeItemm{#1}\\{#2}\vspace{-4pt}} %For Reference section \newcommand{\aff}[2]{ \item\small{ {#1}{#2 \vspace{-2pt}} } } \renewcommand{\labelitemii}{$\circ$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} \usepackage{makecell} \newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small #3} & \textit{\small #4} \end{tabular*}\vspace{-5pt} } \newcommand{\jk}[2]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} {#1} & #2 \end{tabular*}\vspace{-5pt} } \newcommand{\msj}[2]{ \vspace{-1pt}\item \begin{tabular*}{0.96666\textwidth}[t]{l@{\extracolsep{\fill}}r} {#1} & #2 \end{tabular*}\vspace{-5pt}} \newcommand{\zk}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.96666\textwidth}[t]{l@{\extracolsep{\fill}}r} {#1} & #2\\ {#3} & {#4} \end{tabular*}\vspace{-5pt}} \newcommand{\heading}[2][\relax]{{#2}\hfill#1\par\nobreak} %\pagestyle{fancy} %\fancyhf{} % clear all header and footer fields %\fancyfoot{} %\renewcommand{\headrulewidth}{0pt} %\renewcommand{\footrulewidth}{0pt} %%\pagestyle{fancy} %\fancyhf{} %\fancyhead[L]{Curriculum Vitae} %\fancyhead[R]{Junaid S. 
Khan} %\fancyfoot[CE,CO]{\leftmark} %\fancyfoot[LE,RO]{\thepage} % %\renewcommand{\headrulewidth}{2pt} %\renewcommand{\footrulewidth}{1pt} %------------------------------------------- %%%%%% CV STARTS HERE %%%%%% \begin{document} %-----------------------------------------------% %------------------- HEADING -------------------% \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textbf{\LARGE Syed Hassan Abbas Bukhari} & Email : \href{mailto:[email protected]}{[email protected]}\\ & \href{mailto:[email protected]}{[email protected]\,\,\,\,\,}\\ {Taj Garh \& P.O Iqbalabad (64310), District Rahim Yar Khan, Punjab, Pakistan.} & Mobile : +92--306--375--7284 \\ \end{tabular*} %-----------------------------------------------% %------------------ EDUCATION ------------------% \begin{comment} \section{\textbf{Objective}} To utilize my teaching skills towards a challenging career in growth oriented and leading edge that will provide mutual benefits and where from I can utilize my capabilities to the fullest benefits of the organization and society. \end{comment} \section{\textbf{Education}} \resumeSubHeadingListStart \resumeSubheading {Lahore University of Management Sciences}{Lahore, Pakistan} {\makecell[tl]{MS Physics(18-years); CGPA: $3.75/4.00$ \\ Dissertation Title: Measurement-Induced Qubit Cooling \\ Dissertation Supervisor: Dr. Adam Zaman Chaudhry (Assistant Professor)}}{Jul. 2020} \resumeSubheading {Quaid-i-Azam University}{Islamabad , Pakistan} {M.Sc. Physics(16-years); Percentage: $69 \%$}{May. 2018} \resumeSubheading {Islamia University Bahawalpur}{Bahawal Pur, Pakistan} {B.Sc. Physics, Mathematics (14-years); Percentage: $83\%$}{Aug. 2015} \resumeSubHeadingListEnd %-------------Standardized Tests---------------% % %\section{\textbf{Standardized Tests}} % %\resumeSubHeadingListStart % \resumeSubheading % {GRE Physics Test}{} % {\makecell[tl]{\textbf{Scaled Score: $990$} \\ %\textbf{Percentile below: $95$}}}{} %\resumeSubHeadingListEnd %-----------------------------------------------% %----------- RESEARCH EXPERIENCE ---------------% \section{\textbf{Research Interests}} \resumeSubHeadingListStart \resumeSubheading {\normalfont Open Quantum Systems, Condensed Matter Theory, Quantum Information, Quantum Optics}{}{\normalfont }{} \resumeSubHeadingListEnd %-----------------------------------------------% %\section{\textbf{Publication}} %\resumeSubHeadingListStart % \resumeSubheading % {\normalfont Zia, M., Mirza, A.R., and Chaudhry, A.Z., (2020). Master Equations with initial system-environment }{\\ \normalfont correlations} % {Manuscript in preparation}{} % \resumeSubHeadingListEnd %----------- RESEARCH EXPERIENCE ---------------% \section{\textbf{Research Experience}} \resumeSubHeadingListStart \resumeSubheading {Research Assistant}{Jul. 2019 -- Dec. 2019} {Department of Physics, Lahore University of Management Sciences}{Lahore, Pakistan} \resumeItemListStart \aff{Studied the effect of measurements on an open quantum system.}{} \aff{Studied and developed an optimal way for measurement-induced qubit cooling to perform less noisy computation.}{} \resumeItemListEnd \resumeSubheading {Master's Dissertation}{Sep. 2019 -- July. 
2020} {Lahore University of Management Sciences}{Lahore, Pakistan} \resumeItemListStart \aff{\normalfont Measurement-Induced Qubit Cooling.}{} \resumeItemListEnd \resumeSubHeadingListEnd %-----------------------------------------------% %------------- TEACHING EXPERIENCE -------------% \section{\textbf{Teaching Experience}} \resumeSubHeadingListStart \resumeSubheading {Lahore University of Management Sciences}{Lahore, Pakistan} {Graduate Teaching Assistant}{Spring 2020} \resumeItemListStart \resumeItem{Modern Physics} {My job was to conduct weekly tutorials to help out students with the subject material and grade quizzes and assignments and later on everything was done online due to COVID-19.} \resumeItemListEnd \resumeSubheading {Lahore University of Management Sciences}{Lahore, Pakistan} {Lab Instructor}{Spring 2019} \resumeItemListStart \resumeItem{Physics Laboratory} {My job was to assist students in laboratory experiments and grade lab notebooks.} \resumeItemListEnd \resumeSubHeadingListEnd %-----------------------------------------------% \section{\textbf{Job Experience}} \resumeSubHeadingListStart \resumeSubheading {IQ era}{Lahore, Pakistan} {Instructional Designer}{Dec. 2020 -- Present} \resumeItemListStart \resumeItem{Physics} {IQ era is a US-based education startup. At IQ era, I am a Physics team leader and work with international colleagues from the UK, Hong Kong, Canada, and the USA. We are working on a non-traditional education system that mainly covers Cambridge's international education system.} \resumeItemListEnd \resumeSubHeadingListEnd %-----------------------------------------------% %------------------- PROJECTS ------------------% \section{\textbf{Academic Research Projects}} \resumeSubHeadingListStart \msj{\textbf{Quantum computing using Qiskit} }{\textit{Summer 2020}} \resumeItemListStart \aff{In this international summer school. I worked on the topic given below and implemented algorithms based on quantum circuits using Qiskit to run it on a quantum simulator and IBM quantum computer. . \begin{enumerate} \item Grover’s Search Algorithm and Quantum Oracle \item Quantum Phase Estimation \item Single-Qubit and Multi-Qubit States, Quantum Teleportation \item Shor’s Algorithm \item Quantum Error Correction \item Qubit Spectroscopy \item Quantum Chemistry \end{enumerate}}{} \resumeItemListEnd \msj{\textbf{Measuring the value of Boltzmann constant}}{\textit{Fall 2018}} \resumeItemListStart \aff{I studied the statistical behavior of the random motion of particles suspended in a fluid resulting from their collision with the fast-moving molecules in the fluid, and Boltzmann's constant was measured by observing the Brownian motion of polystyrene spheres in water.}{} \resumeItemListEnd \msj{\textbf{Magnetic Sensor } }{\textit{Fall 2018}} \resumeItemListStart \aff{In this project, we studied the effect of magnets by measuring the magnetic field (transverse and longitudinal) using calibrated TMR sensors.}{} \resumeItemListEnd \msj{\textbf{Power Method}}{\textit{Fall 2017}} \resumeItemListStart \aff{It was a theoretical and computational project. In this project, we developed FORTRAN language code to calculate the eigenvalue and eigenvector of a given $n \times n$ matrix.}{} \resumeItemListEnd \msj{\textbf{Skin Effect} }{\textit{Spring 2017}} \resumeItemListStart \aff{In this project, we studied the skin effect. We made an experimental setup for calculating the skin effect and its parameters like the resistance of transmission cables at different frequencies. 
We also observed and studied the decrease of a magnetic field as it penetrates a thickness of metal and decreases current with the depth of wire.}{} \resumeItemListEnd \resumeSubHeadingListEnd %-----------------------------------------------% %---------- Certifications/Trainings ----------% \section{\textbf{Summer Schools/Workshops}} \resumeSubHeadingListStart \resumeSubheading {$4^{th}$ National Symposium on Laser Matter Interaction}{Pakistan} {Participant}{Sept. 2020} \resumeSubheading {Qiskit Global Summer School}{Pakistan} {Participant \& Lab Student}{July. 2020} \resumeSubheading {Workshop on In isotropic media and meta materials}{Islamabad, Pakistan} {Participant}{Jan. 2019} \resumeSubHeadingListEnd %-----------------------------------------------% %---------------- Presentations ----------------% %\section{\textbf{Presentations}} %\resumeSubHeadingListStart % \zk{Master Equations with Initial Correlations} % {\textit{Dec. 2018}}{Department of Physics, Lahore University of Management Sciences, Lahore}{} % \zk{Flavor Physics and its anomalies} % {\textit{Aug. 2018}}{7$^{th}$ School on LHC Physics, National Centre for Physics, Islamabad}{} % \zk{Axion as a Dark Matter Candidate (course requirement)} % {\textit{May. 2018}}{Department of Physics, Lahore University of Management Sciences, Lahore}{} % \zk{Dilute Magnetic Semiconductors (course requirement)} % {\textit{May. 2018}}{Department of Physics, Lahore University of Management Sciences, Lahore}{} % \zk{Measuring the lifetime of cosmic ray muons(course requirement)} % {\textit{Feb. 2017}}{Department of Physics, Lahore University of Management Sciences, Lahore}{} % \zk{Grover’s Search Algorithm(course requirement)} % {\textit{Dec. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} % \zk{20 Strangest Things Physics Taught Us} % {\textit{Dec. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} % \zk{The Black Hole Information Paradox} % {\textit{Nov. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} % \zk{From Classical to Quantum Computation } % {\textit{Nov. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} % \zk{Insight to Dark Matter} % {\textit{Oct. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} % \zk{What is Light?} % {\textit{Oct. 2016}}{Department of Physics, Quaid-i-Azam University, Islamabad}{} %\resumeSubHeadingListEnd %-----------------------------------------------% %---------- Professional Affiliations ----------% \section{\textbf{Professional Affiliations}} \resumeSubHeadingListStart \resumeSubheading {Official YouTube Channel Quaid-i-Azam University}{Islamabad, Pakistan} {Founder \& Producer}{Jan. 2017} \resumeSubheading {QAU Physics Club}{Islamabad, Pakistan} {Co-Founder}{Sep. 2016 \textbf{--} Present} \resumeSubheading {SPIE Student Chapter LUMS}{Lahore, Pakistan} {Member}{Jan. 2019 \textbf{--} Dec. 2019} \resumeSubheading {SPIE Student Chapter LUMS}{Lahore, Pakistan} {Treasurer}{Spring 2020} \resumeSubheading {Lahore Science Mela}{Lahore, Pakistan} {Demonstrator}{Oct. 2019} \resumeSubheading {Q Pakistan}{Pakistan} {Member}{Oct. 
2020} \resumeSubHeadingListEnd %-----------------------------------------------% %----------- Awards, Grants & Honours ----------% \section{\textbf{Awards \& Honours}} \resumeSubHeadingListStart \msj{Placed in Dean's Honour Roll for MS in Physics at LUMS.}{\textit{2020}} \msj{Granted $100\%$ merit scholarship for MS in Physics at LUMS.}{\textit{2018-20}} \msj{Got the first position in bachelor of science at Khawaja Fareed Govt. P/G College.}{\textit{2015}} %\msj{1$^{st}$ position in B.Sc at PGC.}{\textit{2014}} \resumeSubHeadingListEnd %-----------------------------------------------% %----------- General Publications --------------% \begin{comment} \section{\textbf{General Publications}} \resumeItemListStart \aff{Azam, Muhammad Bilal. ``Nayi Hukoomat aur Scienci Shaoor''. HumSub. Published $05$ August $2018$, from \url{http://www.humsub.com.pk/157453/bilal-azam/}}{} \aff{Azam, Muhammad Bilal. ``An Interview with Dr. Muhammed Sameed (CERN, Switzerland)''. HumSub. Published $05$ April $2018$, from \url{http://en.humsub.com.pk/258/muhammad-bilal-azam/}}{} \aff{Azam, Muhammad Bilal. ``Sciencii Maidaan me Pakistani Science-daanoun ke Karnaamey''. Dawn News Television. Published $26$ July $2017$, from \url{https://www.dawnnews.tv/news/1061688}}{} \aff{Azam, Muhammad Bilal. ``Dr. I. H. Usmani: The Common Heritage of All Mankind''. Technology Times 2015. Web. 15 July 2016.}{} \aff{Ahmad, Rehan and Azam, Muhammad Bilal. Physics (Intermediate Part--I,II) Chapter Wise Solution of Punjab Boards. Sahiwal: A Plus, $2016$. Print.}{} \resumeItemListEnd \end{comment} %-----------------------------------------------% %------------ Skills and Interests -------------% \section{\textbf{Skills and Interests}} \resumeSubHeadingListStart \begin{comment} \resumeSubItem{Laboratory} {Dimensional Analysis, Notebook Skills, Logger Pro, Mach\textbf{--}Zehnder Interferometer, Fabry\textbf{--}Perot Interferometer, alpha\textbf{--}SE Ellipsometer, Oscilloscope.} \end{comment} \resumeSubItem{Programming} {FORTRAN, Python (Qiskit, Qutip), Linux, {\LaTeX}.} \resumeSubItem{Softwares} {Mathematica, MATLAB, Microsoft Office, OriginPro, PeakFit, LabView.} \resumeSubItem{Interest} {Table Tennis, Swimming, Chess, Music, Physics Simulations, Psychology, Magic} \resumeSubHeadingListEnd %-----------------------------------------------% %----------------- REFERENCES ------------------% \section{\textbf{Reference}} \resumeSubHeadingListStart \jsk{Dr. Adam Zaman Chaudhry} {Assistant Professor and Chairman of Physics Department, LUMS, Lahore. \\Email: \href{mailto:[email protected]}{[email protected]}} \jsk{Dr. Ata Ulhaq} {Assistant Professor of Physics, LUMS, Lahore. \\. Email: \href{mailto:[email protected]}{[email protected]}} \resumeSubHeadingListEnd %-----------------------------------------------% \end{document}
{ "alphanum_fraction": 0.6299900346, "avg_line_length": 38.4211711712, "ext": "tex", "hexsha": "55e31ad9d26e2e95c0126ae896377c529e81e8c5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "de714ef4444bb51a2996e3de6587e470d759f6dd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "qchi2020/resume", "max_forks_repo_path": "sourabh_bajaj_resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "de714ef4444bb51a2996e3de6587e470d759f6dd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "qchi2020/resume", "max_issues_repo_path": "sourabh_bajaj_resume.tex", "max_line_length": 360, "max_stars_count": null, "max_stars_repo_head_hexsha": "de714ef4444bb51a2996e3de6587e470d759f6dd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "qchi2020/resume", "max_stars_repo_path": "sourabh_bajaj_resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4570, "size": 17059 }
\section{Graphs} \begin{code}{Topological Sort}{$\mathcal{O}(|V| + |E|)$}{graphs/topoSort.cc} A priority queue can be used if further sorting is necessary. \end{code} \lst{SCC Tarjan}{$\mathcal{O}(|V| + |E|)$}{graphs/scc.cc} \lst{Articulation Points}{$\mathcal{O}(|V| + |E|)$}{graphs/articulationPoints.cc} \lst{Bridges}{$\mathcal{O}(|V|+|E|)$}{graphs/bridges.cc} \lst{Minimum Spanning Tree -- Kruskal}{$\mathcal{O}(|E|\log|V|)$}{graphs/kruskal.cc} \subsection{Shortest Paths} \lst{Dijkstra}{$\mathcal{O}((|E| + |V|)\log|V|)$}{graphs/dijkstra.cc} \begin{code}{Bellman Ford}{$\mathcal{O}(|E||V|)$}{graphs/bellmanFord.cc} Check for negative cycles: \\ $dist$ still changes in a $\lvert V \rvert$-th relaxation step with $dist_i = 0$ initially for all $i$. \end{code} \begin{code}{Bellman Ford with Queue}{$\mathcal{O}(|E||V|)$}{graphs/bellmanFordQueue.cc} This approach may be faster in practice. \end{code} \lst{Floyd Warshall}{$\mathcal{O}(|V|^3)$}{graphs/floydWarshall.cc} \subsection{Forest} \lst{Lowest Common Ancestor}{build: $\mathcal{O}(|V| \log|V|)$ query: $\mathcal{O}(1)$}{graphs/LCA.cc} \lst{LCA with binary lifting}{build: $\mathcal{O}(|V|\log |V|)$ query: $\mathcal{O}(\log|V|)$}{graphs/LCABL.cc} \lst{Heavy-light decomposition}{build: $\mathcal{O}(|V|)$, query/update: $\mathcal{O}(\log^2|V|)$/$\mathcal{O}(\log|V|)$}{graphs/HLD.cc} \subsection{Flow} \textbf{Max-flow min-cut theorem.} The maximum value of an $s$-$t$ flow is equal to the minimum capacity over all $s$-$t$ cuts. \lst{Edges for flow algorithms}{}{graphs/flowedge.cc} \lst{Edmonds Karp}{$\mathcal{O}(|V||E|^2)$}{graphs/edmondsKarp.cc} \lst{Dinic}{$\mathcal{O}(|V|^2|E|)$}{graphs/dinic.cc} \lst{Push Relabel}{$\mathcal{O}(|V|^3)$}{graphs/pushRelabel.cc} \subsubsection{Minimum s-t cut} To find a minimum $s$-$t$ cut, compute a maximum flow and then find all nodes that are reachable from $s$ in the residual network. These nodes form the $s$ part of the cut; all other nodes belong to the $t$ part. \subsubsection{Closure Problem} A closure of a directed graph is a set of vertices $C$ such that no edge leaves $C$, i.e. every edge starting in $C$ also ends in $C$. The closure problem is the task of finding a maximum-weight closure. It is solvable through a reduction to a maximum flow problem: add a source and a target, connect every vertex with positive weight $w$ to the source with capacity $w$, and connect every vertex with negative weight $w$ to the target with capacity $-w$. All of the edges of the original graph get infinite capacity in the new graph. The weight of the maximum-weight closure is equal to the sum of all positive vertex weights in the original graph minus the maximum flow in the constructed graph.
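The construction can be coded directly. The snippet below is a self-contained sketch and not one of the included \texttt{.cc} files: for brevity it uses a plain Edmonds-Karp max flow on a capacity matrix, whereas in practice the Dinic or Push-Relabel implementation above would be plugged in; the function names \texttt{maxflow} and \texttt{maxWeightClosure} are illustrative only.
\begin{lstlisting}[language=C++]
// Sketch: maximum-weight closure via reduction to max flow.
// w[v]   : weight of vertex v
// adj[v] : original closure edges v -> u
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

// Simple Edmonds-Karp on a capacity matrix (stand-in for Dinic etc.).
ll maxflow(vector<vector<ll>>& cap, int s, int t) {
  int n = cap.size();
  ll flow = 0;
  while (true) {
    vector<int> par(n, -1);
    par[s] = s;
    queue<int> q; q.push(s);
    while (!q.empty() && par[t] == -1) {
      int u = q.front(); q.pop();
      for (int v = 0; v < n; ++v)
        if (par[v] == -1 && cap[u][v] > 0) { par[v] = u; q.push(v); }
    }
    if (par[t] == -1) break;  // no augmenting path left
    ll aug = INF;
    for (int v = t; v != s; v = par[v]) aug = min(aug, cap[par[v]][v]);
    for (int v = t; v != s; v = par[v]) { cap[par[v]][v] -= aug; cap[v][par[v]] += aug; }
    flow += aug;
  }
  return flow;
}

ll maxWeightClosure(const vector<ll>& w, const vector<vector<int>>& adj) {
  int n = w.size(), s = n, t = n + 1;
  vector<vector<ll>> cap(n + 2, vector<ll>(n + 2, 0));
  ll positive = 0;
  for (int v = 0; v < n; ++v) {
    if (w[v] > 0) { positive += w[v]; cap[s][v] = w[v]; }  // source -> positive vertices
    if (w[v] < 0) cap[v][t] = -w[v];                       // negative vertices -> target
    for (int u : adj[v]) cap[v][u] = INF;                  // original edges: infinite capacity
  }
  return positive - maxflow(cap, s, t);  // weight of the maximum-weight closure
}
\end{lstlisting}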
{ "alphanum_fraction": 0.7044427711, "avg_line_length": 40.2424242424, "ext": "tex", "hexsha": "cd5f77d293ec07d708d89e06d1eca56b76b7718e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Zeldacrafter/CompProg", "max_forks_repo_path": "document/graphs.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Zeldacrafter/CompProg", "max_issues_repo_path": "document/graphs.tex", "max_line_length": 136, "max_stars_count": 4, "max_stars_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Zeldacrafter/CompProg", "max_stars_repo_path": "document/graphs.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-21T03:51:21.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-06T15:44:57.000Z", "num_tokens": 862, "size": 2656 }
%% ------------------------------------------------------------------------- %% \chapter{Correlation rules and proposed extension} \label{cap:proposed-solution} \noindent A state-of-the-art skin detection method was recently developed by~\cite{brancati:17}. In this chapter, we review the method and extend it\footnote{All the implementations can be found at \url{https://bitbucket.org/rodrigoadfaria/skin-detector/}.}, adding more rules to enforce the constraints and seeking a better accuracy in terms of false positive rate without hurting the performance of the original method. %% ------------------------------------------------------------------------- %% \section{Correlation rules on YCrYCb colormap} \label{sec:correlation_rules_ycrycb} The pixels of human skin have a very particular color: they fall into a restricted range of hues and they are not deeply saturated. This phenomenon is due to the composition of skin, formed by a combination of blood (red) and melanin (brown, yellow), which causes human skin color to be clustered within a small area of the color space~\citep{fleck:96}. Although this cluster can be seen in different color spaces, authors often use those where it is possible to split the chrominance from the luminance information. YCbCr is one of these color spaces. \citet{chai:99}~first observed this cluster within this particular color space (see Fig.~\ref{fig:dataset_sfa_ycbcr}). Based on a given set of training images, they~\citep{chai:99} built a skin color map using a histogram approach. In this map, the Cr and Cb distributions of skin color fall into the ranges [133, 173] and [77, 127], respectively, regardless of the skin color variation in different races (see Fig.~\ref{fig:dataset_sfa_ycbcr_hist}). \begin{figure}[!ht] \centering \begin{minipage}{0.485\textwidth} \includegraphics[width=\textwidth]{sfa/sfa_ycbcr} \end{minipage} ~ % space \begin{minipage}{0.485\textwidth} \includegraphics[width=\textwidth]{sfa/sfa_ycbcr_skin_only} \end{minipage} \caption[3-dimensional view of the YCbCr channels of some image patches of the SFA dataset]{3-dimensional view of the YCbCr channels of some image patches of the SFA dataset. We used the patches with skin samples of size $15 \times 15$. The blue points are skin samples and the green ones are non-skin. On the right (skin samples only), we can clearly see a narrow and thin cluster. Source: adapted from~\citet{chai:99}.} \label{fig:dataset_sfa_ycbcr} \end{figure} \begin{figure}[!ht] \centering \begin{minipage}{0.485\textwidth} \includegraphics[width=\textwidth]{sfa/sfa_cb_histogram} \end{minipage} ~ % space \begin{minipage}{0.485\textwidth} \includegraphics[width=\textwidth]{sfa/sfa_cr_histogram} \end{minipage} \caption[Histogram of Cb and Cr channels of some image patches of the SFA dataset]{Histogram of Cb and Cr channels of some image patches of the SFA dataset. We used the skin sample patches of size $15 \times 15$. Clearly, the samples (pixels) fall into the intervals observed by~\citet{chai:99}. Source: adapted from~\citet{chai:99}.} \label{fig:dataset_sfa_ycbcr_hist} \end{figure} Therefore, a very simple and practical approach to detect human skin pixels is to create a set of rules, based on those ranges, that flag the pixels whose chrominance (Cr, Cb) values fall within them. In fact, this was the approach used by~\citet{chai:99}.
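To make the idea concrete, the listing below sketches such a fixed-range classifier. It is only an illustration of the rule of~\citet{chai:99}, not of the correlation-rule method studied in this chapter; the \texttt{PixelYCbCr} structure and the prior conversion of the image to the YCbCr color space are assumptions of the sketch.
\begin{lstlisting}[language=C++]
// Minimal sketch of the fixed-range skin rule of Chai and Ngan:
// a pixel is labeled as skin when Cr is in [133, 173] and Cb is in [77, 127].
#include <cstdint>
#include <vector>

struct PixelYCbCr { uint8_t y, cb, cr; };  // assumed pixel layout

std::vector<uint8_t> skinMaskFixedRange(const std::vector<PixelYCbCr>& image) {
  std::vector<uint8_t> mask(image.size(), 0);
  for (std::size_t i = 0; i < image.size(); ++i) {
    const PixelYCbCr& p = image[i];
    bool skin = (p.cr >= 133 && p.cr <= 173) && (p.cb >= 77 && p.cb <= 127);
    mask[i] = skin ? 255 : 0;  // 255 = skin, 0 = non-skin
  }
  return mask;
}
\end{lstlisting}
A rule of this kind is cheap to evaluate, but a single fixed box in the (Cr, Cb) plane ignores how the skin cluster varies with the luminance $Y$ and with the illumination of the scene, which is precisely what motivates the image-specific trapezoids discussed next.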
Another important finding regarding the skin color clusters in the YCbCr color space is how they behave when the YCb and YCr compositions are examined separately, that is, where the distribution lies in each of these subspaces. In Figure~\ref{fig:obama_trapezoids}, we can see the distribution (clusters) for an image of the Pratheepan dataset. Their shapes clearly take a trapezoidal form~\citep{hsu:02}. In fact, the distribution of skin color pixels in the YCb and YCr subspaces follows a pattern. However, the shape and size of these trapezoids change according to many factors. \citet{brancati:17} observed that change and identified that it is caused mainly by illumination conditions (i.e., the lighting of the scene when the image was acquired influences the size, height, and position of the trapezoids). Moreover, they~\citep{brancati:17} observed a proportional behavior of the chrominance components (Cr, Cb) that could be fitted into a model for skin pixel detection. We explain in detail how this model was created in Section~\ref{sec:original_method}. \begin{figure*}[!htb] \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{pra/ori/obama} \caption{} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{pra/gtc/obama} \caption{} \end{subfigure} \begin{subfigure}[t]{0.88\textwidth} \includegraphics[width=\textwidth]{image_trap_plot} \caption{} \end{subfigure} \caption[Skin pixels distribution in the YCr and YCb subspaces of a sample image]{Skin pixels distribution in the YCr and YCb subspaces of a sample image: (a) sample image from Pratheepan, (b) ground truth, and (c) skin pixel distribution in the YCr (orange) and YCb (blue) subspaces. We can clearly see the trapezoidal shape of the pixel distribution. The trapezoids are inversely positioned, reflecting the proportional behavior of the chrominance components (Cr, Cb). Source: adapted from~\citet{brancati:17}.} \label{fig:obama_trapezoids} \end{figure*} %% ------------------------------------------------------------------------- %% \section{Original method} \label{sec:original_method} In order to describe the proposed extensions, we first present the original method, which is based on the definition of image-specific trapezoids, named $T_{YCb}$ and $T_{YCr}$, in the \textit{YCb} and \textit{YCr} subspaces, respectively. The trapezoids are essential to verify a relationship between the chrominance components $Cb$ and $Cr$ in these subspaces~\citep{brancati:17}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{trapezoids} \caption[Graphical representation of the trapezoids as well as their parameters]{Graphical representation of the trapezoids as well as the parameters $Y_{min} = 0$, $Y_{max} = 255$, $Y_{0}$, $Y_{1}$, $Y_{2}$, $Y_{3}$, $Cr_{min}$, $Cr_{max}$, $Cb_{min}$, $Cb_{max}$, $h_{Cr}$, $h_{Cb}$, $H_{Cr}(P_Y)$, $H_{Cb}(P_Y)$. Source: adapted from~\citep{brancati:17}.} \label{fig:trapezoids} \end{figure} To show the correlations, Brancati et al. present the YCbCr space as a 2D graph where $Y$ is on the abscissa and the $Cr$ and $Cb$ components are on the ordinate (see Fig.~\ref{fig:trapezoids}). The bases of the trapezoids $T_{YCr}$ and $T_{YCb}$ are given by the coordinates $(Y_{min}, Cr_{min})$ and $(Y_{min}, Cb_{max})$ in the $YCr$ and $YCb$ subspaces, respectively~\citep{brancati:17}. The values $Cr_{min} = 133$ and $Cb_{max} = 128$ were selected according to~\citet{chai:99}, where a skin color map was designed using a histogram approach based on a given set of training images.
Chai and Ngan observed that the Cr and Cb distributions of skin color fall in the ranges [133, 173] and [77, 127], respectively, regardless of the skin color variation in different races (see details in Section~\ref{sec:correlation_rules_ycrycb}). The $Cr_{max}$ parameter is calculated dynamically, taking into account the histogram of the pixels with $Cr$ values in the range $[Cr_{min}, 183]$, looking for the maximum value of $Cr$ associated with at least 0.1\% \footnote{In \citet{brancati:17} this rate is reported to be equal to 10\%. However, in the distributed source code we found the value 0.1\%, that we are using in the experiments.} of pixels in the image. The same applies to $Cb_{min}$, taking the histogram with $Cb$ values in the range $[77, Cb_{max}]$. $Y_0$ and $Y_1$ (shorter base of the upper trapezoid) are, respectively, the 5${th}$ and 95$th$ percentile of the luminance values associated with the pixels of the image with $Cr = Cr_{max}$~\citep{brancati:17}. A similar procedure is used to find the values of the shorter base of the other trapezoid, $Y_2$ and $Y_3$ (see Fig.~\ref{fig:crmax_computation} for an example). \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{crmax_computation} \caption[Computation of $Cr_{max}$ based on $Cr$ values histogram of a 724 x 526 image]{Computation of $Cr_{max} = 162$ based on $Cr$ values histogram of a 724 x 526 image. Source: adapted from~\citep{brancati:17}.} \label{fig:crmax_computation} \end{figure} The correlation rules' parameters between the chrominance components $P_{Cr}$ and $P_{Cb}$ of a pixel $P$ are specified as~\citep{brancati:17}: \begin{itemize} \item the minimum difference between the values $P_{Cr}$ and $P_{Cb}$, denoted $I_P$; \item an estimated value of $P_{Cb}$, namely $P_{Cb_s}$; \item the maximum distance between the points $(P_Y, P_{Cb})$ and $(P_Y, P_{Cb_s})$, denoted $J_P$. \end{itemize} Therefore, to determine if $P$ is skin, the following correlation rules, expressed in terms of equations, must hold~\citep{brancati:17}: \begin{equation} P_{Cr} - P_{Cb} \geq I_P \label{condition_c0} \end{equation} \begin{equation} |P_{Cb} - P_{Cb_s}| \leq J_P \label{condition_c1} \end{equation} The estimated value $P_{Cb_{s}}$ is given by \footnote{$dP_{Cb_{s}}$ is the distance between the points $(P_Y, P_{Cb_{s}})$ and $(P_Y, Cb_{max})$ in the $YCb$ subspace, calculated on the basis of $dP_{Cr}$, observing the proportional behavior of the components. 
$\alpha$ is the ratio between the normalized heights of the trapezoids in relation to the $P_Y$ value~\citep{brancati:17}.}:
\begin{equation} P_{Cb_s} = Cb_{max} - dP_{Cb_s} \end{equation}
where\footnote{$dP_{Cr}$ is the distance between the points $(P_Y, P_{Cr})$ and $(P_Y, Cr_{min})$ in the $YCr$ subspace~\citep{brancati:17}.}:
\begin{align} dP_{Cb_s} &= \alpha \cdot dP_{Cr} \\ dP_{Cr} &= P_{Cr} - Cr_{min} \end{align}
The coordinates of the other sides of the trapezoids are given by $[P_Y, H_{Cr}(P_Y)]$ and $[P_Y, H_{Cb}(P_Y)]$, such that~\citep{brancati:17}:
\begin{align} H_{Cr}(Y) &= \begin{cases} Cr_{min} + h_{Cr}\big(\frac{Y - Y_{min}}{Y_0 - Y_{min}}\big) & Y \in [Y_{min},\ Y_0] \\ Cr_{max} & Y \in [Y_0,\ Y_1] \\ Cr_{min} + h_{Cr}\big(\frac{Y - Y_{max}}{Y_1 - Y_{max}}\big) & Y \in [Y_1,\ Y_{max}] \end{cases} \\ H_{Cb}(Y) &= \begin{cases} Cb_{min} + h_{Cb}\big(\frac{Y - Y_2}{Y_{min} - Y_2}\big) & Y \in [Y_{min},\ Y_2] \\ Cb_{min} & Y \in [Y_2,\ Y_3] \\ Cb_{min} + h_{Cb}\big(\frac{Y - Y_3}{Y_{max} - Y_3}\big) & Y \in [Y_3,\ Y_{max}] \end{cases} \end{align}
\noindent where $h_{Cr} = Cr_{max} - Cr_{min}$ and $h_{Cb} = Cb_{max} - Cb_{min}$, which are the heights of $T_{YCr}$ and $T_{YCb}$, respectively.

The computation of these points is useful for the calculation of $\alpha$. We first compute the distances $\Delta_{Cr}(P_Y)$ and $\Delta_{Cb}(P_Y)$ between the points $(P_Y, H_{Cr}(P_Y))$, $(P_Y, H_{Cb}(P_Y))$ and the bases of the trapezoids~\citep{brancati:17}:
\begin{align} \Delta_{Cr}(P_Y) &= H_{Cr}(P_Y) - Cr_{min} \\ \Delta_{Cb}(P_Y) &= Cb_{max} - H_{Cb}(P_Y) \end{align}
Next, the distances are normalized with respect to the difference in size of the trapezoids~\citep{brancati:17}:
\begin{align} \Delta^{'}_{Cr}(P_Y) &= \begin{cases} \Delta_{Cr}(P_Y) \cdot \frac{A_{T_{YCb}}} {A_{T_{YCr}}} &\quad \text{if}\ A_{T_{YCr}} \geq A_{T_{YCb}} \\ \Delta_{Cr}(P_Y) &\quad \text{otherwise} \end{cases} \\ \Delta^{'}_{Cb}(P_Y) &= \begin{cases} \Delta_{Cb}(P_Y) &\quad \text{if}\ A_{T_{YCr}} \geq A_{T_{YCb}} \\ \Delta_{Cb}(P_Y) \cdot \frac{A_{T_{YCr}}} {A_{T_{YCb}}} &\quad \text{otherwise} \end{cases} \end{align}
where $A_{T_{YCr}}$ and $A_{T_{YCb}}$ are the areas of the trapezoids ${T_{YCr}}$ and ${T_{YCb}}$, respectively. Then, the value of $\alpha$ is given by~\citep{brancati:17}:
\begin{equation} \alpha = \frac{\Delta^{'}_{Cb}(P_Y)} {\Delta^{'}_{Cr}(P_Y)} \end{equation}
Finally, $I_P$\footnote{There is a difference between the source code and the equation that defines $I_P$ in~\citet{brancati:17}. Basically, the absolute value of part of the equation must be taken, which we have fixed here.} and $J_P$ are given by~\citep{brancati:17}:
\begin{equation} I_P = sf \cdot |(\Delta^{'}_{Cr}(P_Y) - dP_{Cr}) + (\Delta^{'}_{Cb}(P_Y) - dP_{Cb_s})| \label{eq:ip} \end{equation}
\begin{equation} J_P = dP_{Cb_s} \cdot \frac{dP_{Cb_s} + dP_{Cr}} {\Delta^{'}_{Cb}(P_Y) + \Delta^{'}_{Cr}(P_Y)} \label{eq:jp} \end{equation}
where:
\begin{equation} sf = \frac{\min( (Y_1 - Y_0), (Y_3 - Y_2) )} {\max( (Y_1 - Y_0), (Y_3 - Y_2) )} \end{equation}

% A graph showing a point, or a few points, in the upper trapezoid and the corresponding point in the lower trapezoid would be very didactic here.

%% ------------------------------------------------------------------------- %%
\section{Complementary method}
\label{sec:proposed_method}

The hypothesis assumed in the original method is based on rules, involving an estimated value of the point $P_{Cb}$ (namely $P_{Cb_s}$), that must hold in order for the correlation to be valid.
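To make the original procedure concrete before extending it, the following sketch applies the rules of Eqs.~\ref{condition_c0} and \ref{condition_c1} to a single pixel $P = (P_Y, P_{Cb}, P_{Cr})$. It is only an illustration written by us in C++: the names, types, and structure are hypothetical and are not taken from the distributed source code of~\citet{brancati:17}, and the image-specific parameters are assumed to have already been computed from the histograms and percentiles described above.

\begin{verbatim}
#include <algorithm>
#include <cmath>

// Illustrative sketch only (not the distributed source code).
// Image-specific parameters, assumed to be precomputed per image.
struct TrapezoidParams {
    double cr_min, cr_max;     // Cr_min = 133 (fixed), Cr_max from histogram
    double cb_min, cb_max;     // Cb_max = 128 (fixed), Cb_min from histogram
    double y0, y1, y2, y3;     // shorter-base limits (5th/95th percentiles)
    double area_ycr, area_ycb; // areas of T_YCr and T_YCb
};

// Sides of the trapezoids, H_Cr(Y) and H_Cb(Y), with Y_min = 0, Y_max = 255.
double h_cr(double y, const TrapezoidParams& t) {
    double h = t.cr_max - t.cr_min;                       // h_Cr
    if (y <= t.y0) return t.cr_min + h * (y - 0.0) / (t.y0 - 0.0);
    if (y <= t.y1) return t.cr_max;
    return t.cr_min + h * (y - 255.0) / (t.y1 - 255.0);
}
double h_cb(double y, const TrapezoidParams& t) {
    double h = t.cb_max - t.cb_min;                       // h_Cb
    if (y <= t.y2) return t.cb_min + h * (y - t.y2) / (0.0 - t.y2);
    if (y <= t.y3) return t.cb_min;
    return t.cb_min + h * (y - t.y3) / (255.0 - t.y3);
}

// Original correlation rules (conditions on I_P and J_P).
bool is_skin_original(double py, double pcb, double pcr,
                      const TrapezoidParams& t) {
    double d_cr = h_cr(py, t) - t.cr_min;                 // Delta_Cr(P_Y)
    double d_cb = t.cb_max - h_cb(py, t);                 // Delta_Cb(P_Y)
    // Normalization by the ratio of the trapezoid areas.
    if (t.area_ycr >= t.area_ycb) d_cr *= t.area_ycb / t.area_ycr;
    else                          d_cb *= t.area_ycr / t.area_ycb;
    double alpha  = d_cb / d_cr;
    double dp_cr  = pcr - t.cr_min;                       // dP_Cr
    double dp_cbs = alpha * dp_cr;                        // dP_Cb_s
    double pcbs   = t.cb_max - dp_cbs;                    // estimated P_Cb
    double sf = std::min(t.y1 - t.y0, t.y3 - t.y2)
              / std::max(t.y1 - t.y0, t.y3 - t.y2);
    double ip = sf * std::fabs((d_cr - dp_cr) + (d_cb - dp_cbs));  // I_P
    double jp = dp_cbs * (dp_cbs + dp_cr) / (d_cb + d_cr);         // J_P
    return (pcr - pcb >= ip) && (std::fabs(pcb - pcbs) <= jp);
}
\end{verbatim}

In an actual implementation one would precompute $H_{Cr}$, $H_{Cb}$, and the normalized distances once per luminance value rather than once per pixel; the sketch favors readability over efficiency.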
On the basis of the proportional behavior of the chrominance components, we will rewrite the correlation rules with respect to the $P_{Cr}$ point. Thus, we have to refactor the correlation rules' parameters to put them in terms of the estimated value of $P_{Cr}$, which we denote as $P_{Cr_s}$\footnote{$dP_{Cr_s}$ is the distance between the points $(P_Y, P_{Cr_s})$ and $(P_Y, Cr_{min})$ in the $YCr$ subspace, calculated on the basis of $dP_{Cb}$, observing the proportional behavior of the components. $\alpha$ is the ratio between the normalized heights of the trapezoids in relation to the $P_Y$ value.}:
\begin{equation} P_{Cr_s} = dP_{Cr_s} + Cr_{min} \end{equation}
where\footnote{$dP_{Cb}$ is the distance between the points $(P_Y, P_{Cb})$ and $(P_Y, Cb_{max})$ in the $YCb$ subspace.}:
\begin{equation} dP_{Cr_s} = \alpha \cdot dP_{Cb} \end{equation}
\begin{equation} dP_{Cb} = Cb_{max} - P_{Cb} \end{equation}
Next, the constraints given by $I_P$ and $J_P$ in Eqs.~\ref{eq:ip} and \ref{eq:jp}, respectively, can be redefined as:
\begin{equation} I^{'}_P = sf \cdot |(\Delta^{'}_{Cr}(P_Y) - dP_{Cr_s}) + (\Delta^{'}_{Cb}(P_Y) - dP_{Cb})| \end{equation}
\begin{equation} J^{'}_P = dP_{Cr_s} \cdot \frac{dP_{Cb} + dP_{Cr_s}} {\Delta^{'}_{Cb}(P_Y) + \Delta^{'}_{Cr}(P_Y)} \end{equation}
Therefore, to determine whether the pixel $P$ is skin, we have to modify the correlation rules given by Eqs.~\ref{condition_c0} and \ref{condition_c1}:
\begin{equation} P_{Cr} - P_{Cb} \geq I^{'}_P \label{condition_c00} \end{equation}
\begin{equation} |P_{Cr} - P_{Cr_s}| \leq J^{'}_P \label{condition_c11} \end{equation}
Having made this simple extension, we now need to apply the method to the same sets of images in order to evaluate, in practice, the proportional behavior of the chrominance components. Moreover, we can combine all these constraints, given by the pairs of equations \ref{condition_c0} and \ref{condition_c1}, and \ref{condition_c00} and \ref{condition_c11}, to reinforce the initially stated hypothesis.

\begin{figure}[!htp] \centering \includegraphics[width=0.25\textwidth]{pixel_neighborhood} \caption[Neighbors evaluation with respect to a pixel $P$]{Neighbors evaluation with respect to $P$. If the image is scanned in raster order, $N_8^-(P)$ is the set of points that can be reached before $P$ in an 8-\textit{neighbors} window. In other words, $N_8^-(P)$ consists of the blue points, which we have already evaluated. Source: proposed by the author.} \label{fig:pixel_neighborhood} \end{figure}

%% ------------------------------------------------------------------------- %%
\section{Neighborhood extended method}
\label{sec:neighborhood_extended_method}

Both methods presented in Sections~\ref{sec:original_method} and \ref{sec:proposed_method} can be applied to detect skin pixels, either separately or combined (i.e., the four equations of the correlation rules of the two methods -- original and complementary -- must hold). However, skin pixels do not usually appear isolated, and we can improve the method by using some of the already processed neighbors of a pixel $P$ to decide whether $P$ represents human skin or not. To do that, let $N_8^-(P)$ be the 8-\textit{neighbors} of $P$ that can be reached before $P$ when scanning the image in raster order~\citep{rosenfeld:66}. This idea is represented graphically by the blue points in Figure~\ref{fig:pixel_neighborhood}.
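Since the decision procedure described next uses both rule sets, we also summarize the complementary test in a short sketch, analogous to the previous one. Again, this is hypothetical C++ written by us (reusing the \texttt{TrapezoidParams} structure and the helper functions of the previous listing), and here we read $\alpha$ as the ratio $\Delta^{'}_{Cr}(P_Y) / \Delta^{'}_{Cb}(P_Y)$ implied by the proportional behavior of the components.

\begin{verbatim}
// Complementary correlation rules (conditions on I'_P and J'_P).
// Illustrative sketch only; same assumptions as the previous listing.
bool is_skin_complementary(double py, double pcb, double pcr,
                           const TrapezoidParams& t) {
    double d_cr = h_cr(py, t) - t.cr_min;                 // Delta_Cr(P_Y)
    double d_cb = t.cb_max - h_cb(py, t);                 // Delta_Cb(P_Y)
    if (t.area_ycr >= t.area_ycb) d_cr *= t.area_ycb / t.area_ycr;
    else                          d_cb *= t.area_ycr / t.area_ycb;
    double dp_cb  = t.cb_max - pcb;                       // dP_Cb
    double dp_crs = (d_cr / d_cb) * dp_cb;                // dP_Cr_s
    double pcrs   = dp_crs + t.cr_min;                    // estimated P_Cr
    double sf = std::min(t.y1 - t.y0, t.y3 - t.y2)
              / std::max(t.y1 - t.y0, t.y3 - t.y2);
    double ip = sf * std::fabs((d_cr - dp_crs) + (d_cb - dp_cb));  // I'_P
    double jp = dp_crs * (dp_cb + dp_crs) / (d_cb + d_cr);         // J'_P
    return (pcr - pcb >= ip) && (std::fabs(pcr - pcrs) <= jp);
}
\end{verbatim}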
Using $N_8^-(P)$, we classify $P$ as skin in the following manner: if the constraints given by the pairs of equations \ref{condition_c0} and \ref{condition_c1}, as well as \ref{condition_c00} and \ref{condition_c11}, hold, then $P$ is classified as skin. When only one of the two pairs is satisfied, we check the decisions already made in $N_8^-(P)$: if three or more of those pixels are skin, then $P$ is also classified as a skin pixel. Figure~\ref{fig:n8-flowchart} shows a flowchart of this procedure.

\begin{figure}[ht] \centering
% Define block styles
\tikzstyle{decision} = [diamond, draw, fill=blue!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm, minimum height=2em]
\begin{tikzpicture}[node distance = 3cm, auto]
% Place nodes
\node [block] (pcrs) {calculate \ref{condition_c00} and \ref{condition_c11} rules};
\node [block, left of=pcrs] (pcbs) {calculate \ref{condition_c0} and \ref{condition_c1} rules};
\node [decision, below of=pcrs] (bothtrue) {both true?};
\node [block, right of=bothtrue, node distance=4cm, fill=gray!20] (isskin) {$P$ is skin};
\node [decision, below of=bothtrue] (bothfalse) {both false?};
\node [block, right of=bothfalse, node distance=4cm, fill=gray!20] (noskin) {$P$ is non skin};
\node [decision, right of=noskin, node distance=4cm] (n8decision) {skin pixels $\geq 3$?};
\node [block, below of=n8decision, node distance=3cm] (n8) {check decision in $N_8^-(P)$};
% Draw edges
\path [line] (pcrs) -- (bothtrue);
\path [line] (bothtrue) -- node {no} (bothfalse);
\path [line] (bothfalse) -- node {yes} (noskin);
\path [line] (bothfalse) |- node [near start] {no} (n8);
\path [line] (bothtrue) -- node {yes} (isskin);
\path [line] (n8) -- (n8decision);
\path [line] (n8decision) |- node [near start] {yes} (isskin);
\path [line] (n8decision) -- node [near start] {no} (noskin);
\path [line] (pcbs) |- (bothtrue);
\end{tikzpicture}
\caption[Flowchart of our proposed neighbors method]{Flowchart of our proposed neighbors method. In the \textbf{both false} decision, the \textbf{no} path means that exactly one of the rules is true and we are in doubt whether $P$ is skin or not -- this is where the neighbors are used to find the label of $P$. Source: proposed by the author.}
\label{fig:n8-flowchart}
\end{figure}

%% ------------------------------------------------------------------------- %%
\section{Heuristics to fix neighborhood extended method}
\label{sec:sup_neighborhood_operations}

The neighborhood extended method presented in Section~\ref{sec:neighborhood_extended_method} ends up producing an undesired behavior in the output images, which we call the \textit{diagonal effect} (see Fig.~\ref{fig:diagonal_effect}). Besides being visually undesirable, this phenomenon also causes an increase in the false positive rate. It is caused by the shape of the window being used: since we look only at the four already visited pixels of the 8-\textit{neighbors} window, the operation is based on a non-symmetric mask. Ideally, we could use another neighborhood strategy and look at all eight neighbors of the pixel $P$ being evaluated. However, this particular implementation can add extra computational time and affect the performance of the method.
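Before describing the adaptation, the decision procedure of Figure~\ref{fig:n8-flowchart} can be summarized by the following sketch (hypothetical C++ as before, building on the two listings above; \texttt{decided} holds the labels of the pixels already classified in raster order, and the handling of border pixels is our own assumption):

\begin{verbatim}
#include <vector>

// Sketch of the decision of the flowchart above: both rule sets, then a
// fallback vote over the already visited neighbors N8^-(P).
bool classify_pixel(int x, int y, int width,
                    double py, double pcb, double pcr,
                    const TrapezoidParams& t,
                    const std::vector<bool>& decided) {
    bool orig = is_skin_original(py, pcb, pcr, t);
    bool comp = is_skin_complementary(py, pcb, pcr, t);
    if (orig && comp)   return true;    // both true  -> skin
    if (!orig && !comp) return false;   // both false -> non skin
    // Exactly one rule pair holds: check the decisions in N8^-(P),
    // i.e. the left, upper-left, upper and upper-right pixels.
    const int dx[4] = {-1, -1,  0,  1};
    const int dy[4] = { 0, -1, -1, -1};
    int skin = 0;
    for (int k = 0; k < 4; ++k) {
        int nx = x + dx[k], ny = y + dy[k];
        if (nx < 0 || ny < 0 || nx >= width) continue;  // outside the image
        if (decided[ny * width + nx]) ++skin;
    }
    return skin >= 3;
}
\end{verbatim}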
To address the \textit{diagonal effect}, we created an adaptation of the neighborhood method presented in Section~\ref{sec:neighborhood_extended_method}. In this version, we scan the image, of size $W \times H$, in raster order and apply the original and the complementary correlation rules to every single pixel. We keep both results in a matrix of the same size ($W \times H$) as the input image. Each coordinate of this output matrix holds a two-position vector with the answers of the original and the complementary rules for that pixel. Next, we read each position of this output matrix and apply an 8-\textit{neighbors} operation in four different implementations, looking for a majority (at least five) of skin neighbors:
\begin{enumerate}[label={(\arabic*)}]
\item we look at the correlation rules' answers performing an AND. In other words, if both the original and the complementary correlation rules say the pixel is skin, then we classify it as skin;
\item we look at the correlation rules' answers performing an OR. In other words, if one of the correlation rules (original or complementary) says the pixel is skin, then we classify it as skin;
\item we look at the neighbors querying only the original ($P_{Cb_s}$) correlation rules;
\item we look at the neighbors querying only the complementary ($P_{Cr_s}$) correlation rules.
\end{enumerate}
Of course, this variation adds some computational cost, since we scan the image one more time. This implementation can be enhanced, but the idea here is only to better exploit the connectivity of the 8-\textit{neighbors} window and to check, on the basis of a symmetric mask, whether the \textit{diagonal effect} disappears and the measures improve. A sketch of this post-processing is given at the end of the chapter, and the corresponding experiments are presented in Section~\ref{sec:sno_experiments}.

\begin{figure*}[!htb] \centering \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=2.6cm]{sfa/ori/img14} \includegraphics[width=2.6cm]{pra/ori/chenhao0017me9} \includegraphics[width=2.6cm]{hgr/ori/N_P_hgr1_id04_5} \includegraphics[width=2.6cm]{cpq/ori/1923132} \includegraphics[width=2.6cm]{cpq/ori/2226882} \caption{} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=2.6cm]{sfa/gt/img14} \includegraphics[width=2.6cm]{pra/gt/chenhao0017me9} \includegraphics[width=2.6cm]{hgr/gt/N_P_hgr1_id04_5} \includegraphics[width=2.6cm]{cpq/gt/1923132} \includegraphics[width=2.6cm]{cpq/gt/2226882} \caption{} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=2.6cm]{sfa/cmb/img14} \includegraphics[width=2.6cm]{pra/cmb/chenhao0017me9} \includegraphics[width=2.6cm]{hgr/cmb/N_P_hgr1_id04_5} \includegraphics[width=2.6cm]{cpq/cmb/1923132} \includegraphics[width=2.6cm]{cpq/cmb/2226882} \caption{} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=2.6cm]{sfa/ngh/dgn/img14} \includegraphics[width=2.6cm]{pra/ngh/dgn/chenhao0017me9} \includegraphics[width=2.6cm]{hgr/ngh/dgn/N_P_hgr1_id04_5} \includegraphics[width=2.6cm]{cpq/ngh/dgn/1923132} \includegraphics[width=2.6cm]{cpq/ngh/dgn/2226882} \caption{} \end{subfigure} \caption[Image samples with the diagonal effect after the neighbors method segmentation]{Image samples with the diagonal effect after the neighbors method segmentation. The rows show images from (top to bottom) the SFA, Pratheepan, HGR, and Compaq (last two rows) datasets, where: (a) original image (b) ground truth (c) combined method (d) neighbors method.
Independently of the classification accuracy, we can clearly see the diagonal effect in the output of the neighbors method segmentation in comparison with the combined method. Besides being visually undesirable, this phenomenon causes an increase in the false positive rate.} \label{fig:diagonal_effect} \end{figure*}
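To make the post-processing of Section~\ref{sec:sup_neighborhood_operations} concrete, the following sketch (hypothetical C++, in the same spirit as the previous listings and not taken from any distributed source code) shows the symmetric 8-\textit{neighbors} vote applied in a second pass over the matrix of rule answers; the treatment of border pixels is again our own assumption.

\begin{verbatim}
#include <vector>

// Answers of the two correlation rules, stored per pixel by the first pass.
struct RulePair { bool original; bool complementary; };

enum class Variant { AND, OR, ORIGINAL_ONLY, COMPLEMENTARY_ONLY };

// Second pass: symmetric 8-neighbors vote (majority of at least 5).
bool postprocess_pixel(int x, int y, int width, int height,
                       const std::vector<RulePair>& rules, Variant v) {
    int votes = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;        // skip P itself
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            const RulePair& r = rules[ny * width + nx];
            bool skin = false;
            switch (v) {
              case Variant::AND:
                skin = r.original && r.complementary; break;
              case Variant::OR:
                skin = r.original || r.complementary; break;
              case Variant::ORIGINAL_ONLY:
                skin = r.original; break;
              case Variant::COMPLEMENTARY_ONLY:
                skin = r.complementary; break;
            }
            if (skin) ++votes;
        }
    }
    return votes >= 5;   // majority of the 8-neighborhood
}
\end{verbatim}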
{ "alphanum_fraction": 0.7003315275, "avg_line_length": 77.0529595016, "ext": "tex", "hexsha": "bb6d4de664028d9ac3b98e54994bcda3f0bdfb6c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "771fbd5005b71484319496f9c481dc843513ce9d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rodrigoadfaria/master-dissertation", "max_forks_repo_path": "cap-proposed-solution.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "771fbd5005b71484319496f9c481dc843513ce9d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rodrigoadfaria/master-dissertation", "max_issues_repo_path": "cap-proposed-solution.tex", "max_line_length": 898, "max_stars_count": 1, "max_stars_repo_head_hexsha": "771fbd5005b71484319496f9c481dc843513ce9d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rodrigoadfaria/master-dissertation", "max_stars_repo_path": "cap-proposed-solution.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-22T19:07:16.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-22T19:07:16.000Z", "num_tokens": 7325, "size": 24734 }
% Copyright 2019 by Till Tantau % % This file may be distributed and/or modified % % 1. under the LaTeX Project Public License and/or % 2. under the GNU Free Documentation License. % % See the file doc/generic/pgf/licenses/LICENSE for more details. \section{Making Trees Grow} \label{section-trees} \subsection{Introduction to the Child Operation} \emph{Trees} are a common way of visualizing hierarchical structures. A simple tree looks like this: % \begin{codeexample}[] \begin{tikzpicture} \node {root} child {node {left}} child {node {right} child {node {child}} child {node {child}} }; \end{tikzpicture} \end{codeexample} Admittedly, in reality trees are more likely to grow \emph{upward} and not downward as above. You can tell whether the author of a paper is a mathematician or a computer scientist by looking at the direction their trees grow. A computer scientist's trees will grow downward while a mathematician's tree will grow upward. Naturally, the \emph{correct} way is the mathematician's way, which can be specified as follows: % \begin{codeexample}[] \begin{tikzpicture} \node {root} [grow'=up] child {node {left}} child {node {right} child {node {child}} child {node {child}} }; \end{tikzpicture} \end{codeexample} In \tikzname, there are two ways of specifying trees: Using either the |graph| path operation, which is covered in Section~\ref{section-library-graphs}, or using the |child| path operation, which is covered in the present section. Both methods have their advantages. In \tikzname, trees are specified by adding \emph{children} to a node on a path using the |child| operation: \begin{pathoperation}{child}{\opt{\oarg{options}}% \opt{|foreach|\meta{variables}|in|\marg{values}}\opt{\marg{child path}}} This operation should directly follow a completed |node| operation or another |child| operation, although it is permissible that the first |child| operation is preceded by options (we will come to that). When a |node| operation like |node {X}| is followed by |child|, \tikzname\ starts counting the number of child nodes that follow the original |node {X}|. For this, it scans the input and stores away each |child| and its arguments until it reaches a path operation that is not a |child|. Note that this will fix the character codes of all text inside the child arguments, which means, in essence, that you cannot use verbatim text inside the nodes inside a |child|. Sorry. Once the children have been collected and counted, \tikzname\ starts generating the child nodes. For each child of a parent node \tikzname\ computes an appropriate position where the child is placed. For each child, the coordinate system is transformed so that the origin is at this position. Then the \meta{child path} is drawn. Typically, the child path just consists of a |node| specification, which results in a node being drawn at the child's position. Finally, an edge is drawn from the first node in the \meta{child path} to the parent node. The optional |foreach| part (note that there is no backslash before |foreach|) allows you to specify multiple children in a single |child| command. The idea is the following: A |\foreach| statement is (internally) used to iterate over the list of \meta{values}. For each value in this list, a new |child| is added to the node. The syntax for \meta{variables} and for \meta{values} is the same as for the |\foreach| statement, see Section~\ref{section-foreach}. 
For example, when you say % \begin{codeexample}[code only] node {root} child [red] foreach \name in {1,2} {node {\name}} \end{codeexample} % the effect will be the same as if you had said % \begin{codeexample}[code only] node {root} child[red] {node {1}} child[red] {node {2}} \end{codeexample} % When you write % \begin{codeexample}[code only] node {root} child[\pos] foreach \name/\pos in {1/left,2/right} {node[\pos] {\name}} \end{codeexample} % the effect will be the same as for % \begin{codeexample}[code only] node {root} child[left] {node[left] {1}} child[right] {node[right] {2}} \end{codeexample} You can nest things as in the following example: % \begin{codeexample}[] \begin{tikzpicture} [level distance=4mm,level/.style={sibling distance=8mm/#1}] \coordinate child foreach \x in {0,1} {child foreach \y in {0,1} {child foreach \z in {0,1}}}; \end{tikzpicture} \end{codeexample} The details and options for this operation are described in the rest of this present section. \end{pathoperation} \subsection{Child Paths and Child Nodes} For each |child| of a root node, its \meta{child path} is inserted at a specific location in the picture (the placement rules are discussed in Section~\ref{section-tree-placement}). The first node in the \meta{child path}, if it exists, is special and called the \emph{child node}. If there is no first node in the \meta{child path}, that is, if the \meta{child path} is missing (including the curly braces) or if it does not start with |node| or with |coordinate|, then an empty child node of shape |coordinate| is automatically added. Consider the example |\node {x} child {node {y}} child;|. For the first child, the \meta{child path} has the child node |node {y}|. For the second child, no child node is specified and, thus, it is just |coordinate|. As for any normal node, you can give the child node a name, shift it around, or use options to influence how it is rendered. % \begin{codeexample}[preamble={\usetikzlibrary{shapes.geometric}}] \begin{tikzpicture}[sibling distance=15mm] \node[rectangle,draw] {root} child {node[circle,draw,yshift=-5mm] (left node) {left}} child {node[ellipse,draw] (right node) {right}}; \draw[dashed,->] (left node) -- (right node); \end{tikzpicture} \end{codeexample} In many cases, the \meta{child path} will just consist of a specification of a child node and, possibly, children of this child node. However, the node specification may be followed by arbitrary other material that will be added to the picture, transformed to the child's coordinate system. For your convenience, a move-to |(0,0)| operation is inserted automatically at the beginning of the path. Here is an example: % \begin{codeexample}[] \begin{tikzpicture} \node {root} child {[fill] circle (2pt)} child {[fill] circle (2pt)}; \end{tikzpicture} \end{codeexample} At the end of the \meta{child path} you may add a special path operation called |edge from parent|. If this operation is not given by yourself somewhere on the path, it will be automatically added at the end. This option causes a connecting edge from the parent node to the child node to be added to the path. By giving options to this operation you can influence how the edge is rendered. Also, nodes following the |edge from parent| operation will be placed on this edge, see Section~\ref{section-edge-from-parent} for details. To sum up: % \begin{enumerate} \item The child path starts with a node specification. If it is not there, it is added automatically. 
\item The child path ends with a |edge from parent| operation, possibly followed by nodes to be put on this edge. If the operation is not given at the end, it is added automatically. \end{enumerate} \subsection{Naming Child Nodes} Child nodes can be named like any other node using either the |name| option or the special syntax in which the name of the node is placed in round parentheses between the |node| operation and the node's text. If you do not assign a name to a child node, \tikzname\ will automatically assign a name as follows: Assume that the name of the parent node is, say, |parent|. (If you did not assign a name to the parent, \tikzname\ will do so itself, but that name will not be user-accessible.) The first child of |parent| will be named |parent-1|, the second child is named |parent-2|, and so on. This naming convention works recursively. If the second child |parent-2| has children, then the first of these children will be called |parent-2-1| and the second |parent-2-2| and so on. If you assign a name to a child node yourself, no name is generated automatically (the node does not have two names). However, ``counting continues'', which means that the third child of |parent| is called |parent-3| independently of whether you have assigned names to the first and/or second child of |parent|. Here is an example: % \begin{codeexample}[] \begin{tikzpicture}[sibling distance=15mm] \node (root) {root} child child { child {coordinate (special)} child }; \node at (root-1) {root-1}; \node at (root-2) {root-2}; \node at (special) {special}; \node at (root-2-2) {root-2-2}; \end{tikzpicture} \end{codeexample} \subsection{Specifying Options for Trees and Children} \label{section-tree-options} Each |child| may have its own \meta{options}, which apply to ``the whole child'', including all of its grandchildren. Here is an example: % \begin{codeexample}[] \begin{tikzpicture} [thick,level 1/.style={sibling distance=15mm}, level 2/.style={sibling distance=10mm}] \coordinate child[red] {child child} child[green] {child child[blue]}; \end{tikzpicture} \end{codeexample} The options of the root node have no effect on the children since the options of a node are always ``local'' to that node. Because of this, the edges in the following tree are black, not red. % \begin{codeexample}[] \begin{tikzpicture}[thick] \node [red] {root} child child; \end{tikzpicture} \end{codeexample} % This raises the problem of how to set options for \emph{all} children. Naturally, you could always set options for the whole path as in |\path [red] node {root} child child;| but this is bothersome in some situations. Instead, it is easier to give the options \emph{before the first child} as follows: % \begin{codeexample}[] \begin{tikzpicture}[thick] \node [red] {root} [green] % option applies to all children child child; \end{tikzpicture} \end{codeexample} Here is the set of rules: % \begin{enumerate} \item Options for the whole tree are given before the root node. \item Options for the root node are given directly to the |node| operation of the root. \item Options for all children can be given between the root node and the first child. \item Options applying to a specific child path are given as options to the |child| operation. \item Options applying to the node of a child, but not to the whole child path, are given as options to the |node| command inside the \meta{child path}. \end{enumerate} % \begin{codeexample}[code only] \begin{tikzpicture} \scoped [...] % Options apply to the whole tree \node[...] 
{root} % Options apply to the root node only [...] % Options apply to all children child[...] % Options apply to this child and all its children { node[...] {} % Options apply to the child node only ... } child[...] % Options apply to this child and all its children ; \end{tikzpicture} \end{codeexample} There are additional styles that influence how children are rendered: % \begin{stylekey}{/tikz/every child (initially \normalfont empty)} This style is used at the beginning of each child, as if you had given the style's contents as options to the |child| operation. \end{stylekey} \begin{stylekey}{/tikz/every child node (initially \normalfont empty)} This style is used at the beginning of each child node in addition to the |every node| style. \end{stylekey} \begin{stylekey}{/tikz/level=\meta{number} (initially \normalfont empty)} This style is executed at the beginning of each set of children, where \meta{number} is the current level in the current tree. For example, when you say |\node {x} child child;|, then |level=1| is used before the first |child|. The style or code of this key will be passed \meta{number} as its first parameter. If this first |child| has children itself, then |level=2| would be used for them. % \begin{codeexample}[] \begin{tikzpicture}[level/.style={sibling distance=20mm/#1}] \node {root} child { child child } child { child child child }; \end{tikzpicture} \end{codeexample} % \end{stylekey} \begin{stylekey}{/tikz/level \meta{number} (initially \normalfont empty)} This style is used in addition to the |level| style. So, when you say |\node {x} child child;|, then the following key list is executed: |level=1,level 1|. % \begin{codeexample}[] \begin{tikzpicture} [level 1/.style={sibling distance=20mm}, level 2/.style={sibling distance=5mm}] \node {root} child { child child } child { child child child }; \end{tikzpicture} \end{codeexample} % \end{stylekey} \subsection{Placing Child Nodes} \label{section-tree-placement} \subsubsection{Basic Idea} Perhaps the most difficult part in drawing a tree is the correct layout of the children. Typically, the children have different sizes and it is not easy to arrange them in such a manner that not too much space is wasted, the children do not overlap, and they are either evenly spaced or their centers are evenly distributed. Calculating good positions is especially difficult since a good position for the first child may depend on the size of the last child. In basic \tikzname, when you do not make use of the graph drawing facilities explained in Part~\ref{part-gd}, a comparatively simple approach is taken to placing the children. In order to compute a child's position, all that is taken into account is the number of the current child in the list of children and the number of children in this list. Thus, if a node has five children, then there is a fixed position for the first child, a position for the second child, and so on. These positions \emph{do not depend on the size of the children} and, hence, children can easily overlap. However, since you can use options to shift individual children a bit, this is not as great a problem as it may seem. Although the placement of the children only depends on their number in the list of children and the total number of children, everything else about the placement is highly configurable. You can change the distance between children (appropriately called the |sibling distance|) and the distance between levels of the tree. These distances may change from level to level. 
The direction in which the tree grows can be changed globally and for parts of the tree. You can even specify your own ``growth function'' to arrange children on a circle or along special lines or curves. \subsubsection{Default Growth Function} The default growth function works as follows: Assume that we are given a node and five children. These children will be placed on a line with their centers (or, more generally, with their anchors) spaced apart by the current |sibling distance|. The line is orthogonal to the current \emph{direction of growth}, which is set with the |grow| and |grow'| option (the latter option reverses the ordering of the children). The distance from the line to the parent node is given by the |level distance|. % {\catcode`\|=12 \begin{codeexample}[] \begin{tikzpicture}[sibling distance=15mm, level distance=15mm] \path [help lines] node (root) {root} [grow=-10] child {node {1}} child {node {2}} child {node {3}} child {node {4}}; \draw[|<->|,thick] (root-1.center) -- node[above,sloped] {sibling distance} (root-2.center); \draw[|<->|,thick] (root.center) -- node[above,sloped] {level distance} +(-10:\tikzleveldistance); \end{tikzpicture} \end{codeexample} } \begin{key}{/tikz/level distance=\meta{distance} (initially 15mm)} This key determines the distance between different levels of the tree, more precisely, between the parent and the line on which its children are arranged. When given to a single child, this will set the distance for this child only. % \begin{codeexample}[] \begin{tikzpicture} \node {root} [level distance=20mm] child child { [level distance=5mm] child child child } child[level distance=10mm]; \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture} [level 1/.style={level distance=10mm}, level 2/.style={level distance=5mm}] \node {root} child child { child child[level distance=10mm] child } child; \end{tikzpicture} \end{codeexample} % \end{key} \begin{key}{/tikz/sibling distance=\meta{distance} (initially 15mm)} This key specifies the distance between the anchors of the children of a parent node. % \begin{codeexample}[] \begin{tikzpicture} [level distance=4mm, level 1/.style={sibling distance=8mm}, level 2/.style={sibling distance=4mm}, level 3/.style={sibling distance=2mm}] \coordinate child { child {child child} child {child child} } child { child {child child} child {child child} }; \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture} [level distance=10mm, every node/.style={fill=red!60,circle,inner sep=1pt}, level 1/.style={sibling distance=20mm,nodes={fill=red!45}}, level 2/.style={sibling distance=10mm,nodes={fill=red!30}}, level 3/.style={sibling distance=5mm,nodes={fill=red!25}}] \node {31} child {node {30} child {node {20} child {node {5}} child {node {4}} } child {node {10} child {node {9}} child {node {1}} } } child {node {20} child {node {19} child {node {1}} child[missing] } child {node {18}} }; \end{tikzpicture} \end{codeexample} % \end{key} \begin{key}{/tikz/grow=\meta{direction}} This key is used to define the \meta{direction} in which the tree will grow. The \meta{direction} can either be an angle in degrees or one of the following special text strings: |down|, |up|, |left|, |right|, |north|, |south|, |east|, |west|, |north east|, |north west|, |south east|, and |south west|. All of these have ``their obvious meaning'', so, say, |south west| is the same as the angle $-135^\circ$. As a side effect, this option installs the default growth function. 
In addition to setting the direction, this option also has a seemingly strange effect: It sets the sibling distance for the current level to |0pt|, but leaves the sibling distance for later levels unchanged. This somewhat strange behavior has a highly desirable effect: If you give this option before the list of children of a node starts, the ``current level'' is still the parent level. Each child will be on a later level and, hence, the sibling distance will be as specified originally. This will cause the children to be neatly aligned in a line orthogonal to the given \meta{direction}. However, if you give this option locally to a single child, then ``current level'' will be the same as the child's level. The zero sibling distance will then cause the child to be placed exactly at a point at distance |level distance| in the direction \meta{direction}. However, the children of the child will be placed ``normally'' on a line orthogonal to the \meta{direction}. These placement effects are best demonstrated by some examples: % \begin{codeexample}[] \tikz \node {root} [grow=right] child child; \end{codeexample} \begin{codeexample}[] \tikz \node {root} [grow=south west] child child; \end{codeexample} \begin{codeexample}[] \begin{tikzpicture}[level distance=10mm,sibling distance=5mm] \node {root} [grow=down] child child child[grow=right] { child child child }; \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture}[level distance=2em] \node {C} child[grow=up] {node {H}} child[grow=left] {node {H}} child[grow=down] {node {H}} child[grow=right] {node {C} child[grow=up] {node {H}} child[grow=right] {node {H}} child[grow=down] {node {H}} edge from parent[double] coordinate (wrong) }; \draw[<-,red] ([yshift=-2mm]wrong) -- +(0,-1) node[below]{This is wrong!}; \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture} \node[rectangle,draw] (a) at (0,0) {start node}; \node[rectangle,draw] (b) at (2,1) {end}; \draw (a) -- (b) node[coordinate,midway] {} child[grow=100,<-] {node[above] {the middle is here}}; \end{tikzpicture} \end{codeexample} % \end{key} \begin{key}{/tikz/grow'=\meta{direction}} This key has the same effect as |grow|, only the children are arranged in the opposite order. \end{key} \subsubsection{Missing Children} Sometimes one or more of the children of a node are ``missing''. Such a missing child will count as a child with respect to the total number of children and also with respect to the current child count, but it will not be rendered. \begin{key}{/tikz/missing=\meta{true or false} (default true)} If this option is given to a child, the current child counter is increased, but the child is otherwise ignored. In particular, the normal contents of the child is completely ignored. % \begin{codeexample}[] \begin{tikzpicture}[level distance=10mm,sibling distance=5mm] \node {root} [grow=down] child { node {1} } child { node {2} } child { node {3} } child[missing] { node {4} } child { node {5} } child { node {6} }; \end{tikzpicture} \end{codeexample} % \end{key} \subsubsection{Custom Growth Functions} \begin{key}{/tikz/growth parent anchor=\meta{anchor} (initially center)} This key allows you to specify which anchor of the parent node is to be used for computing the children's position. For example, when there is only one child and the |level distance| is |2cm|, then the child node will be placed two centimeters below the \meta{anchor} of the parent node. 
``Being placed'' means that the child node's anchor (which is the anchor specified using the |anchor=| option in the |node| command of the child) is two centimeters below the parent node's \meta{anchor}. In the following example, the two red lines both have length |1cm|. % \begin{codeexample}[] \begin{tikzpicture}[level distance=1cm] \node [rectangle,draw] (a) at (0,0) {root} [growth parent anchor=south] child; \node [rectangle,draw] (b) at (2,0) {root} [growth parent anchor=north east] child; \draw [red,thick,dashed] (a.south) -- (a-1); \draw [red,thick,dashed] (b.north east) -- (b-1); \end{tikzpicture} \end{codeexample} In the next example, the top and bottom nodes are aligned at the top and the bottom, respectively. % \begin{codeexample}[] \begin{tikzpicture} [level distance=2cm,growth parent anchor=north, every node/.style={anchor=north,rectangle,draw} every child node/.style={anchor=south}] \node at (0,0) {root} child {node {small}}; \node at (2,0) {big root} child {node {\large big}}; \end{tikzpicture} \end{codeexample} % \end{key} \begin{key}{/tikz/growth function=\meta{macro name} (initially \normalfont an internal function)} This rather low-level option allows you to set a new growth function. The \meta{macro name} must be the name of a macro without parameters. This macro will be called for each child of a node. The initial function is an internal function that corresponds to downward growth. The effect of executing the macro should be the following: It should transform the coordinate system in such a way that the origin becomes the place where the current child should be anchored. When the macro is called, the current coordinate system will be set up such that the anchor of the parent node is in the origin. Thus, in each call, the \meta{macro name} must essentially do a shift to the child's origin. When the macro is called, the \TeX\ counter |\tikznumberofchildren| will be set to the total number of children of the parent node and the counter |\tikznumberofcurrentchild| will be set to the number of the current child. The macro may, in addition to shifting the coordinate system, also transform the coordinate system further. For example, it could be rotated or scaled. Additional growth functions are defined in the library, see Section~\ref{section-tree-library}. \end{key} \subsection{Edges From the Parent Node} \label{section-edge-from-parent} Every child node is connected to its parent node via a special kind of edge called the |edge from parent|. This edge is added to the \meta{child path} when the following path operation is encountered: \begin{pathoperation}{edge from parent}{\opt{\oarg{options}}} This path operation can only be used inside \meta{child paths} and should be given at the end, possibly followed by \meta{node specifications} like |node {a}|. If a \meta{child path} does not contain this operation, it will be added at the end of the \meta{child path} automatically. By default, this operation does the following: % \begin{enumerate} \item The following style is executed: % \begin{stylekey}{/tikz/edge from parent (initially draw)} This style is inserted right before the |edge from parent path| and before the \meta{options} are inserted. % \begin{codeexample}[] \begin{tikzpicture} [edge from parent/.style={draw,red,thick}] \node {root} child {node {left} edge from parent[dashed]} child {node {right} child {node {child}} child {node {child} edge from parent[draw=none]} }; \end{tikzpicture} \end{codeexample} \end{stylekey} \item Next, the \meta{options} are executed. 
\item Next, the text stored in the following key is inserted: % \begin{key}{/tikz/edge from parent path=\meta{path} (initially \normalfont code shown below)} This option allows you to set the |edge from parent path| to a new path. Initially, this path is the following: % \begin{codeexample}[code only] (\tikzparentnode\tikzparentanchor) -- (\tikzchildnode\tikzchildanchor) \end{codeexample} % The |\tikzparentnode| is a macro that will expand to the name of the parent node. This works even when you have not assigned a name to the parent node, in this case an internal name is automatically generated. The |\tikzchildnode| is a macro that expands to the name of the child node. The two |...anchor| macros are empty by default. So, what is essentially inserted is just the path segment |(\tikzparentnode) -- (\tikzchildnode)|; which is exactly an edge from the parent to the child. You can modify this edge from parent path to achieve all sorts of effects. For example, we could replace the straight line by a curve as follows: % \begin{codeexample}[] \begin{tikzpicture}[level distance=15mm, sibling distance=15mm, edge from parent path= {(\tikzparentnode.south) .. controls +(0,-1) and +(0,1) .. (\tikzchildnode.north)}] \node {root} child {node {left}} child {node {right} child {node {child}} child {node {child}} }; \end{tikzpicture} \end{codeexample} Further useful |edge from parent path|s are defined in the tree library, see Section~\ref{section-tree-library}. The nodes in a \meta{node specification} following the |edge from parent| path command get executed as if the |pos| option had been added to all these nodes, see also Section~\ref{section-pos-option}. As an example, consider the following code: % \begin{codeexample}[code only] \node (root) {} child {node (child) {} edge to parent node {label}}; \end{codeexample} % The |edge to parent| operation and the following |node| operation will, together, have the same effect as if we had said: % \begin{codeexample}[code only] (root) -- (child) node [pos=0.5] {label} \end{codeexample} Here is a more complicated example: % \begin{codeexample}[] \begin{tikzpicture} \node {root} child { node {left} edge from parent node[left] {a} node[right] {b} } child { node {right} child { node {child} edge from parent node[left] {c} } child {node {child}} edge from parent node[near end] {x} }; \end{tikzpicture} \end{codeexample} As said before, the anchors in the default |edge from parent path| are empty. However, you can set them using the following keys: % \begin{key}{/tikz/child anchor=\meta{anchor} (initially border)} Specifies the anchor where the edge from parent meets the child node by setting the macro |\tikzchildanchor| to |.|\meta{anchor}. If you specify |border| as the \meta{anchor}, then the macro |\tikzchildanchor| is set to the empty string. The effect of this is that the edge from the parent will meet the child on the border at an automatically calculated position. % \begin{codeexample}[] \begin{tikzpicture} \node {root} [child anchor=north] child {node {left} edge from parent[dashed]} child {node {right} child {node {child}} child {node {child} edge from parent[draw=none]} }; \end{tikzpicture} \end{codeexample} \end{key} \begin{key}{/tikz/parent anchor=\meta{anchor} (initially border)} This option works the same way as the |child anchor|, only for the parent. \end{key} \end{key} \end{enumerate} All of the above describes the standard functioning of the |edge from parent| command. 
You may, however, sometimes need even more fine-grained control (the graph drawing engine needs it, for instance). In such cases the following key gives you complete control:
%
\begin{key}{/tikz/edge from parent macro=\meta{macro}}
    The \meta{macro} gets expanded each time the |edge from parent| path
    operation is used. This \meta{macro} must take two parameters and must
    expand to some text that is subsequently parsed by the parser. The first
    parameter will be the set of \meta{options} that were passed to the
    |edge from parent| command, the second parameter will be the \meta{node
    specifications} that follow the command.

    The standard behavior of drawing a straight line from the parent node to
    the child node could be achieved by setting the \meta{macro} to the
    following:
    %
\begin{codeexample}[code only]
\def\mymacro#1#2{
  [style=edge from parent, #1]
  (\tikzparentnode\tikzparentanchor) -- #2 (\tikzchildnode\tikzchildanchor)
}
\end{codeexample}
    %
    Note that |#2| is placed between |--| and the node to ensure that nodes
    are put ``on top'' of the line.
\end{key}
\end{pathoperation}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "pgfmanual-pdftex-version"
%%% End:
{ "alphanum_fraction": 0.6831593706, "avg_line_length": 36.9332566168, "ext": "tex", "hexsha": "d53ae5516cc18d27984b3d0d8402626d301fc677", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_forks_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-tikz-trees.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_issues_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-tikz-trees.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_stars_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-tikz-trees.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8471, "size": 32095 }
\chapter{External Tools}
\label{chap:external_tools}

This chapter provides some information on using QMCPACK with external tools.

\section{Intel VTune}
Intel's VTune profiler has an API that allows program control over collection (pause/resume) and can add information to the profile data (e.g. delineating tasks).

\subsection{VTune API}
If the variable \texttt{USE\_VTUNE\_API} is set, QMCPACK will check that the include file (\texttt{ittnotify.h}) and the library (\texttt{libittnotify.a}) can be found. To provide CMake with the VTune paths, add the include path to \texttt{CMAKE\_CXX\_FLAGS} and the library path to \texttt{CMAKE\_LIBRARY\_PATH}.

An example of options to be passed to CMake:
\begin{shade}
-DCMAKE_CXX_FLAGS=-I/opt/intel/vtune_amplifier_xe/include \
-DCMAKE_LIBRARY_PATH=/opt/intel/vtune_amplifier_xe/lib64
\end{shade}

\section{NVIDIA Tools Extensions (NVTX)}
NVIDIA's Tools Extensions (NVTX) API enables programmers to annotate their source code when used with the NVIDIA profilers.

\subsection{NVTX API}
If the variable \texttt{USE\_NVTX\_API} is set, QMCPACK will add the library (\texttt{libnvToolsExt.so}) to the qmcpack target. To add NVTX annotations to a function, it is necessary to include the \texttt{nvToolsExt.h} header file and then make the appropriate calls into the NVTX API. For more information about the NVTX API, see \url{https://docs.nvidia.com/cuda/profiler-users-guide/index.html#nvtx}. Any additional calls to the NVTX API should be guarded by the \texttt{USE\_NVTX\_API} compiler define. A short illustrative sketch of such guarded calls is given at the end of this chapter.

\subsection{Timers as Tasks}
To aid in connecting the timers in the code to the profile data, the start/stop of timers will be recorded as a task if \texttt{USE\_VTUNE\_TASKS} is set.

In addition to compiling with \texttt{USE\_VTUNE\_TASKS}, an option needs to be set at runtime to collect the task API data. In the GUI, select the checkbox labeled ``Analyze user tasks'' when setting up the analysis type. For the command line, set the \texttt{enable-user-tasks} knob to \texttt{true}. For example,
\begin{shade}
amplxe-cl -collect hotspots -knob enable-user-tasks=true ...
\end{shade}

Collection with the timers set at ``fine'' can generate too much task data in the profile. Collection with the timers at ``medium'' collects a more reasonable amount of task data.

\section{Scitools Understand}
Scitools Understand (\url{https://scitools.com/}) is a tool for static code analysis. The easiest configuration route is to use the JSON output from CMake, which the Understand project importer can read directly:
\begin{enumerate}
\item Configure QMCPACK by running cmake with CMAKE\_EXPORT\_COMPILE\_COMMANDS=ON, e.g. \texttt{ cmake -DCMAKE\_C\_COMPILER=clang -DCMAKE\_CXX\_COMPILER=clang++ \\ -DQMC\_MPI=0 -DCMAKE\_EXPORT\_COMPILE\_COMMANDS=ON ../qmcpack/ }
\item Run Understand and create a new C++ project. At the import files and settings dialog, import the compile\_commands.json created by cmake in the build directory.
\end{enumerate}
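Returning to the NVTX API described above: the snippet below is only an illustrative sketch of how guarded annotation calls typically look. The function name is hypothetical and the code is not taken from the QMCPACK sources; \texttt{nvtxRangePushA} and \texttt{nvtxRangePop} are standard NVTX calls declared in \texttt{nvToolsExt.h}.
\begin{shade}
#ifdef USE_NVTX_API
#include <nvToolsExt.h>
#endif

void expensive_section()
{
#ifdef USE_NVTX_API
  nvtxRangePushA("expensive_section"); // open a named range in the profile
#endif
  // ... work to be profiled ...
#ifdef USE_NVTX_API
  nvtxRangePop();                      // close the range
#endif
}
\end{shade}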
{ "alphanum_fraction": 0.7810729757, "avg_line_length": 49.1967213115, "ext": "tex", "hexsha": "af56ef8d3bd2012b36986d8a68c7958a8ca18d2e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_forks_repo_licenses": [ "NCSA" ], "max_forks_repo_name": "bwvdg/qmcpack", "max_forks_repo_path": "manual/external_tools.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_issues_repo_issues_event_max_datetime": "2020-04-10T15:35:59.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-10T15:33:28.000Z", "max_issues_repo_licenses": [ "NCSA" ], "max_issues_repo_name": "bwvdg/qmcpack", "max_issues_repo_path": "manual/external_tools.tex", "max_line_length": 162, "max_stars_count": null, "max_stars_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_stars_repo_licenses": [ "NCSA" ], "max_stars_repo_name": "bwvdg/qmcpack", "max_stars_repo_path": "manual/external_tools.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 818, "size": 3001 }
\documentclass[11pt]{article}
\input{../_preamble.tex}
\preparetitle{1}
\begin{document}
\maketitle

\section*{Introduction}
All subtasks of this homework assignment have been integrated into a single Unity scene. No third-party Unity packages were used to create this scene. All code and scene assets, including models and sounds, were created by myself for the purpose of this assignment.

The viewer can use the mouse to interact with certain objects in the scene. If an object is interactable, a stylized mouse icon will appear in the bottom center of the screen. This is typically used to \quotes{peer} at a particular object in order to get a closer look (with two exceptions). In order to \quotes{lean back} from the \quotes{peering} view and return to the main scene view, right-click with the mouse. The terminal object can be interacted with using the keyboard; navigation controls are displayed on each screen of the terminal. The subtask demonstrations of this homework assignment are accessed through the terminal.

\makesection{A}{Three Objects and Three Lights}
\begin{center} \includegraphics[width=0.3\linewidth]{sphere.png} \quad \includegraphics[width=0.3\linewidth]{cube.png} \quad \includegraphics[width=0.3\linewidth]{crystal.png} \end{center}
The scene contains three lights: one emitted by the hologram projector object, one that revolves around the displayed model, and the lightbulb, which may be toggled on or off by left-clicking on the lightswitch object.

To access the individual shader demonstrations, activate the terminal and select them from the main menu. The relevant object will be added to the scene as a hovering \quotes{hologram} above the hologram projector object. To inspect the individual objects more closely, left-click on the hologram projector. Click the links in the table below to view the code.

\begin{center}\begin{tabular}{p{1in}p{2in}p{3in}}
\toprule \textbf{Name} & \textbf{Link to Shader Code} & \textbf{Description} \\
\midrule Error Sphere & \shaderlink{Dissolve} & This is an unlit shader that applies a pixelated texture to a sphere, but also a pixelated dissolve effect that changes over time. \\
\midrule Cube & \shaderlink{VertexDisplacement} & This is an unlit shader that applies a texture to an object, and deforms the vertices using a sine function. \\
\midrule Crystal & \shaderlink{PhongLighting} & This shader applies a texture and the Phong lighting process to an object. It supports up to four point lights. \\
\bottomrule \end{tabular}\end{center}

\makesection{B}{Image Processing Shader}
\begin{center} \includegraphics[width=0.5\linewidth]{screeneffect.png} \end{center}
The kernel-based image processing effect is implemented on the terminal screen object itself. Effectively, it is a standard box blur using a simple $3 \times 3$ convolution matrix. The effect increases over time and may be reset by \quotes{hitting} the terminal, i.e., left-clicking on the upper left corner of the terminal object. The intention is to simulate a CRT monitor with malfunctioning deflection coils. The convolution matrix for this effect is defined as:
\[ K = \begin{bmatrix} 1-S & 1-S & 1-S \\ 1-S & S & 1-S \\ 1-S & 1-S & 1-S \end{bmatrix} \]
where $S$ is an arbitrary \quotes{sharpness} value. This value is increased over time, thereby biasing the blur effect towards the neighboring pixels. A secondary \code{ScanIntensity} value is dispatched to the shader to control the intensity of the scan lines (themselves being provided by the \code{ScanLines} texture).
The shader code can be viewed here: \shaderlink{ScreenEffect}

\makesection{C}{Game of Life \quotes{Ping Pong} Shader}
\begin{center} \includegraphics[width=0.5\linewidth]{gameoflife.png} \end{center}
This shader is implemented as a \quotes{program} within the terminal object and may be accessed by selecting the \quotes{Game of Life} option in the terminal's main menu. The rules of the simulation were discovered as a happy accident during implementation and produce a pleasing, almost mazelike end result. The simulation may be affected within the terminal \quotes{program} by using the controls described on that screen. There are several interoperating parts that are used to accomplish this effect:

\begin{center}\begin{tabular}{p{1in}p{2in}p{3in}}
\toprule \textbf{Name} & \textbf{Link to Code} & \textbf{Description} \\
\midrule Game of Life Shader & \shaderlink{GameOfLife} & This is the shader that contains the actual Game of Life logic. \\
\midrule Mask Shader & \shaderlink{SimpleUnlitMasked} & This is the shader that is used to draw the Game of Life result texture onto an \code{Image} field on the terminal screen \code{Canvas}. It simply draws the texture, using the alpha value from a second provided texture. This allows the Game of Life result texture to be masked. \\
\midrule Terminal Screen Code & \codelink{GameOfLifeScreen} & This is the actual \quotes{ping-pong} code that generates and maintains the references to the \code{Texture2D} and \code{RenderTexture} objects and swaps between them; see the methods \code{GenerateTexture()} and \code{OnScreenUpdate()}, respectively. \\
\bottomrule \end{tabular}\end{center}

\makesection{D}{Visual Effect Discussion}
\begin{center} \includegraphics[width=\linewidth]{PoEFX.png} \end{center}
Depicted above is the \quotes{Celestial Character Effect} from the game \textit{Path of Exile}. A video of the effect may be viewed by clicking the following \href{https://www.youtube.com/watch?v=tiFfWGNF_l8}{\texttt{[YouTube Link]}}. I find this effect to be visually interesting as it contains some pleasing soft particles and strongly evokes the \quotes{celestial} aesthetic. There are several components to the effect:
\begin{itemize}
\item \textbf{Character Model Highlighting:} This effect appears to be accomplished by the additive application of a bluish color to fragments whose normals are facing away from the camera.
\item \textbf{Small Soft Particles:} These are simply, as stated, small soft particles of various colors. They appear to be additively blended into the scene during rendering.
\item \textbf{Large Soft Particles:} This component is the most sophisticated, and also the most interesting. It appears to have been accomplished by a particle system with a special shader attached. The particle system emits \quotes{smokelike} particles that are then shaded with a static texture of a nebula using the screen-space coordinates of the fragment; this texture is also tiled and offset over time. The alpha of the original particle smoke texture is used.
\end{itemize}
If I were to try to implement this effect using a Unity shader, I would first try to implement the model highlighting as a second pass on top of the standard Unity shader (so that lighting is automatically consistent with the rest of the scene). The highlight color would be exposed as a property of the shader so that it could be altered if later desired.
Then, I would implement the large smoky particles as described above: with a custom shader that keeps the alpha of the original particle texture, but samples its color from a tiling, offset-shifting texture using the fragment's screen-space coordinates on the render target rather than its world-space position. Finally, I would add the small, soft particles -- they appear to be visually very similar to Unity's default particles, so no custom rendering would be required.

\end{document}
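As a supplementary, CPU-side illustration of the \quotes{ping-pong} scheme from Section C, the Python sketch below keeps two buffers, reads from one and writes into the other on each step, and then swaps them -- analogous to swapping the \code{Texture2D} and \code{RenderTexture} references in \code{GameOfLifeScreen}. It uses the standard Conway rules rather than the accidental rule set used in the scene, and it is not part of the submitted Unity code.
\begin{verbatim}
# Sketch of the "ping-pong" double-buffer pattern: each step reads from one
# buffer and writes into the other, then the roles are swapped. Standard
# Conway rules; plain NumPy stand-in, not the Unity/shader implementation.
import numpy as np

def step(src, dst):
    """Write one Game of Life update of `src` into `dst` (toroidal wrapping)."""
    neighbors = sum(
        np.roll(np.roll(src, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    born = (src == 0) & (neighbors == 3)
    survives = (src == 1) & ((neighbors == 2) | (neighbors == 3))
    dst[:] = (born | survives).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    buf_a = (rng.random((32, 32)) < 0.3).astype(np.uint8)
    buf_b = np.empty_like(buf_a)
    read, write = buf_a, buf_b
    for _ in range(10):
        step(read, write)          # read from one buffer, write into the other
        read, write = write, read  # the "ping-pong" swap
    print("live cells after 10 generations:", int(read.sum()))
\end{verbatim}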
{ "alphanum_fraction": 0.7865108155, "avg_line_length": 114.5076923077, "ext": "tex", "hexsha": "26a53f7c01f4a4d096db3bbb6d0ba13fa9147362", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ed7f35fbde210e39ae5b2e171d6fd6ed3356a5b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "malcolmriley/CMPM-163", "max_forks_repo_path": "writeups/homework 1/Homework 1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ed7f35fbde210e39ae5b2e171d6fd6ed3356a5b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "malcolmriley/CMPM-163", "max_issues_repo_path": "writeups/homework 1/Homework 1.tex", "max_line_length": 825, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ed7f35fbde210e39ae5b2e171d6fd6ed3356a5b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "malcolmriley/CMPM-163", "max_stars_repo_path": "writeups/homework 1/Homework 1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1787, "size": 7443 }
\documentclass{beamer} \usepackage[utf8]{inputenc} % Encoding \usepackage{graphicx} % Work with graphics \usepackage{ragged2e} % Introduce justified text \usepackage{braket} % Dirac's braket notation \usepackage{xcolor} % Colors \usepackage{bm} % Bold symbols in equations \usepackage[export]{adjustbox} % Add color to edges of figure \usepackage[ruled,vlined]{algorithm2e} % Support for algorithms \usepackage{listings} % Support for code snippets \usepackage{caption} % Use subfigures \usepackage{subcaption} % Use subfigures \usepackage{verbatim} \usetheme{Warsaw} % Required for including image in text \newcommand*{\img}[1]{ \raisebox{-.15\baselineskip}{ \includegraphics[ height=\baselineskip, width=3\baselineskip, keepaspectratio, ]{#1} } } % Create macro for norm \newcommand{\norm}[1]{\left\lVert#1\right\rVert} % Macro to add page number \newcommand*\oldmacro{}% \let\oldmacro\insertshorttitle% \renewcommand*\insertshorttitle{% \oldmacro\hfill% \insertframenumber\,/\,\inserttotalframenumber} % Figures directory \graphicspath{{./Figures/}} % Title page information \title[Intro MCTDH]{A \emph{gentle} introduction to MCTDH} %\subtitle{\footnotesize With emphasis in potential representations} \author[Panadés-Barrueta]{Ramón L. Panadés-Barrueta} \institute{Computational Chemical Physics Group. \\ University of Twente.} \date{\small November 5, 2020} \titlegraphic{\includegraphics[width=1.65cm]{trex_logo.png}\hspace*{7.00cm}~% \includegraphics[width=2cm]{ut_logo.pdf}} % ToC in each section \AtBeginSection[] { \begin{frame} \tableofcontents[currentsection] \end{frame} } % Presentation's content \begin{document} \frame{\titlepage} % Initial table of contents \begin{frame} \tableofcontents \end{frame} \section{Nuclear Quantum Dynamics}\label{nqd} \begin{frame} \frametitle{Nuclear Quantum Dynamics} \begin{block}{What} \justifying{} The subfield of Theoretical Chemistry in which both the \textbf{electrons} and the \textbf{nuclei} of a molecular system are treated in a \textbf{quantum-mechanical} manner. \end{block} \begin{exampleblock}{When}<2-> \begin{itemize} \item Spectroscopy (e.g. IR transitions) \item Quantum tunneling \item Vibronic coupling \item ZPE determination \end{itemize} \end{exampleblock} \end{frame} \begin{frame} \frametitle{Nuclear Quantum Dynamics} Find the numerical solution of the TDSE truncating the Hilbert space to a finite dimension (Galerkin's method): \begin{equation} i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi \end{equation} Given a parametric representation of the WF (\(\Psi\)), the optimal solution can be found using the Dirac-Frenkel Variational Principle (DF-VP): \begin{equation} \braket{\delta\Psi|\hat{H}-i\frac{\partial }{\partial t}|\Psi} = 0 \end{equation} \end{frame} \subsection{The Standard method}\label{stdmeth} \begin{frame} \frametitle{The Standard method} Most direct representation of the WF\@: \begin{equation} \Psi(q_i,\ldots, q_f, t) = \sum_{j_1=1}^{N_1}\cdots\sum_{j_f=1}^{N_f} C_{j_1\ldots j_f}(t)\prod_{\kappa=1}^f\varphi^{(\kappa)}_{j_{\kappa}}(q_{\kappa}) \label{stme} \end{equation} After plugging this WF into the DF-VP, and performing the corresponding algebra we obtain the following EOMs: \begin{block}{} \begin{align} \begin{split} i\dot{C}_L &= \sum_J \braket{\varphi_L|\hat{H}|\varphi_J}C_J \\ \bm{C}(t) &= e^{-i\bm{H}t}\bm{C}(0) \end{split} \end{align} \end{block} where we have introduced the composite indexes \(J = (j_1,\ldots,j_f)\). 
\end{frame} \subsection{The Time-Dependent Hartree method}\label{tdh} \begin{frame} \frametitle{The Time-Dependent Hartree method} \justifying{} If we now consider time-dependent single-particle functions (SPFs): \begin{equation} \Psi(q_1,\ldots, q_f, t) = A(t)\prod^f_{\kappa=1}\underbrace{ \sum_{\mu=1}^{N_{\kappa}}c_{\mu}^{(\kappa, j_{\kappa})}(t)\cdot \chi^{(\kappa)}_{\mu}(q_{\kappa})}_{\varphi_{\kappa}(q_{\kappa}, t)} \end{equation} and use the DF-VP with arbitrary real constraints \(g_{\kappa} = i\braket{\varphi_{\kappa}(t)|\dot{\varphi}_{\kappa}(t)}\), we get the EOMs: \begin{block}{} \begin{equation} \begin{split} A(t) &= A(0) \cdot e^{-i\int_0^t E(t')dt'} \\ i\dot{\varphi}_{\kappa} &= (\mathcal{H}^{(\kappa)} - E)\varphi_{\kappa} \end{split} \end{equation} \end{block} with \(\mathcal{H}^{(\kappa)} = \braket{\Phi^{(\kappa)}|H|\Phi^{(\kappa)}}\). \end{frame} \begin{frame} \frametitle{Limitations of SM and TDH} \begin{exampleblock}{Standard method} \justifying{} Its application is largely limited due to the \textbf{curse of dimensionality}. Only systems up to four atoms (6D) can be addressed in practice. \end{exampleblock} \begin{block}{Time-Dependent Hartree} A simpler approach, but physically inaccurate. The \textbf{nuclear correlation} is harder to retrieve than the electronic correlation due to the nuclei's larger mass. The character of the nuclear WF is inherently \textbf{multiconfigurational}. \end{block} \end{frame} \section{The Multiconfiguration Time-Dependent Hartree method}\label{mctdh} \begin{frame} \frametitle{Ansätze comparison} \vspace{-.3cm} \begin{block}{Standard Method (FCI)} \begin{equation} \Psi(q_i,\ldots, q_f, t) = \sum_{j_1=1}^{N_1}\cdots\sum_{j_f=1}^{N_f} C_{j_1\ldots j_f}(t)\prod_{\kappa=1}^f\varphi^{(\kappa)}_{j_{\kappa}}(q_{\kappa}) \end{equation} \end{block} \begin{exampleblock}{Time-Dependent Hartree (HM)}<2-> \begin{equation} \Psi(q_1,\ldots, q_f, t) = A(t)\prod^f_{\kappa=1}\varphi_{\kappa}^{(\kappa)}(q_{\kappa}, t) \end{equation} \end{exampleblock} \begin{alertblock}{Multiconfiguration Time-Dependent Hartree (MCSCF)}<3-> \begin{equation} \Psi(q_1,\ldots, q_f, t) = \sum^{n_1}_{j_1=1}\cdots\sum^{n_f}_{j_f=1}A_{j_1,\ldots,j_f}(t)\prod^{f}_{\kappa=1}\varphi^{(\kappa)}_{j_{\kappa}}(q_{\kappa}, t) \label{mctdh_antz} \end{equation} \end{alertblock} \end{frame} \begin{frame} \frametitle{MCTDH origins and distribution} \justifying{ MCTDH was originally developed by Meyer and coworkers from the University of Heidelberg, in the early nineties\@:} \begin{center} \includegraphics[scale=0.5, cfbox=blue 1pt]{first_mctdh.png} \end{center} There are currently three major implementations of the algorithm: \begin{center} \includegraphics[scale=.07]{manthe.png} \hspace{.5cm} \includegraphics[scale=.07]{dieter.jpg} \hspace{.5cm} \includegraphics[scale=.2]{graham.png} \end{center} \end{frame} \subsection{EOMs}\label{eom} \begin{frame} \frametitle{The MCTDH EOMs} The MCTDH ansatz has a very flexible Sum-of-Products (SOP) form: \begin{equation} \Psi(q_1,\ldots, q_f, t) = \sum^{n_1}_{j_1=1}\cdots\sum^{n_f}_{j_f=1}A_{j_1,\ldots,j_f}(t)\prod^{f}_{\kappa=1}\varphi^{(\kappa)}_{j_{\kappa}}(q_{\kappa}, t) \end{equation} with time dependent SPFs \begin{equation} \label{spf} \varphi^{(\kappa)}_{j_{\kappa}}(q_{\kappa}, t) = \sum_{\mu=1}^{N_{\kappa}}c_{\mu}^{(\kappa, j_{\kappa})}(t)\cdot \chi^{(\kappa)}_{\mu}(q_{\kappa}) \end{equation} The \(\chi^{(\kappa)}_{\mu}(q_{\kappa})\) are typically DVR functions. 
\end{frame} \begin{frame} \frametitle{The MCTDH EOMs} \justifying{ The \emph{ansatz} WF is determined up to a multiplicative constant. To derive the EOMs arbitrary constraint operators (\(\hat{g}^{(\kappa)}\)) are introduced:} \begin{equation} i\braket{\varphi^{(\kappa)}_l|\dot{\varphi}^{(\kappa)}_j} = \braket{\varphi^{(\kappa)}_l|\hat{g}^{(\kappa)}|\varphi^{(\kappa)}_l} \end{equation} Using once again the DF-VP we get (for \(\hat{g}^{(\kappa)} \equiv 0\)): \begin{block}{} \begin{align} \begin{split} i\dot{A}_J &= \sum_L \braket{\Phi_J|\hat{H}|\Phi_L}A_L \\ i\dot{\varphi}^{(\kappa)}_j &= (1 - \hat{P}^{(\kappa)})\sum_{k,l=1}^{n_{\kappa}}(\bm{\rho}^{(\kappa)^{-1}})_{jk}\braket{\hat{H}}_{kl}^{(\kappa)}\varphi^{(\kappa)}_l \label{mctdh_eom} \end{split} \end{align} \end{block} \end{frame} \begin{frame} \frametitle{The MCTDH EOMs} \vspace{-.3cm} \begin{block}{} \begin{align} \begin{split} i\dot{A}_J &= \sum_L \braket{\textcolor{red}{\Phi_J}|\hat{H}|\textcolor{red}{\Phi_L}}A_L \\ i\dot{\varphi}^{(\kappa)}_j &= (1 - \textcolor{blue}{\hat{P}^{(\kappa)}})\sum_{k,l=1}^{n_{\kappa}}(\textcolor{cyan}{{\bm{\rho}^{(\kappa)}}^{-1}})_{jk}\textcolor{magenta}{\braket{\hat{H}}_{kl}^{(\kappa)}}\varphi^{(\kappa)}_l \label{mctdh_eom} \end{split} \end{align} \end{block} \vspace{-.35cm} \begin{equation} \textcolor{red}{\Phi_J = \prod_{\kappa=1}^f\varphi_{j_{\kappa}}^{(\kappa)}} \end{equation} \begin{equation} \textcolor{cyan}{\rho_{kl}^{(\kappa)} = \braket{\Psi_k^{(\kappa)}|\Psi_l^{(\kappa)}} = \sum_{J^{\kappa}}A_{J_{k}^{\kappa}}^*A_{J_{l}^{\kappa}}} \quad \textcolor{magenta}{\braket{\hat{H}}_{kl}^{(\kappa)} = \braket{\Psi_k^{(\kappa)}|\hat{H}|\Psi_l^{(\kappa)}}} \end{equation} \begin{equation} \textcolor{blue}{\hat{P}^{(\kappa)} = \sum_{j=1}^{n_{\kappa}}\ket{\varphi_j^{(\kappa)}}\bra{\varphi_j^{(\kappa)}}} \end{equation} \end{frame} \begin{frame} \frametitle{MCTDH integration scheme} The MCTDH-EOMs solution is expensive due to the large amount of multidimensional integrals to solve. 
Since the \textbf{mean fields} are not strongly oscillating we can consider (CMF integration): \begin{block}{} \begin{align} \begin{split} i\dot{A}_J &= \sum_L \bar{\mathcal{K}}_{JL} A_L \\ i\dot{\varphi}^{(1)}_j &= (1 - \hat{P}^{(1)})\{\hat{h}^{(1)}\varphi^{(1)}_j + \sum_{k,l=1}^{n_{1}}({\bm{\rho}^{(1)}}^{-1})_{jk}\braket{\bar{H}_R}_{kl}^{(1)}\varphi^{(1)}_l\} \\ &\vdots \\ i\dot{\varphi}^{(f)}_j &= (1 - \hat{P}^{(f)})\{\hat{h}^{(f)}\varphi^{(f)}_j + \sum_{k,l=1}^{n_{f}}({\bm{\rho}^{(f)}}^{-1})_{jk}\braket{\bar{H}_R}_{kl}^{(f)}\varphi^{(f)}_l\} \label{mctdh_eom_mod_dec} \end{split} \end{align} \end{block} \end{frame} \begin{frame} \frametitle{Mode combination} Nothing prevents us from grouping physical coordinates into logical particles: \begin{align} \begin{split} Q_{\kappa} &\equiv (q_{\kappa,1}, q_{\kappa,1}, \ldots, q_{\kappa,d}) \\ \varphi_{j}^{(\kappa)}(Q_{\kappa}, t) &= \varphi_{j}^{(\kappa)}(q_{\kappa,1}, q_{\kappa,1}, \ldots, q_{\kappa,d}, t) \end{split} \end{align} Under these conditions, the MCTDH \emph{ansatz} will take the form: \begin{align} \begin{split} \Psi(Q_1,\ldots, Q_p, t) &= \sum^{n_1}_{j_1=1}\cdots\sum^{n_p}_{j_p=1}A_{j_1,\ldots,j_p}(t)\prod^{p}_{\kappa=1}\varphi^{(\kappa)}_{j_{\kappa}}(Q_{\kappa}, t) \\ \varphi_{j}^{(\kappa)}(Q_{\kappa}, t) &= \sum_{i_1 \ldots i_d} C^{(\kappa, j)}_{i_1 \ldots i_d}(t)\prod^{d}_{\nu=1} \chi^{(\kappa, \nu)}(q_{\kappa, \nu}) \end{split} \label{mctdh_antz_mc} \end{align} \end{frame} \begin{frame} \frametitle{Multilayer MCTDH (3-layer case)} We can propagate the multidimensional SPFs with MCTDH itself! \begin{equation} \Psi(q_1, q_2, q_3, t) = \sum_{j_{12}=1}^{n_{12}}\sum_{j_3 = 1}^{n_3}A_{j_{12}j_3}(t)\textcolor{red}{\varphi^{(12)}_{j_{12}}(q_1,q_2,t)}\varphi^{(3)}_{j_{3}}(q_3,t) \end{equation} where we have introduced: \begin{equation} \textcolor{red}{\varphi^{(12)}_{j_{12}}(q_1,q_2,t) = \sum_{k_1 = 1}^{n_1}\sum_{k_2=1}^{n_2}B_{k_1,k_2}^{(12,j_{12})}(t)\prod_{\mu=1}^2} \textcolor{blue}{\xi_{k_{\mu}}^{(\mu)}(q_{\mu},t)} \end{equation} and: \begin{equation} \textcolor{blue}{\xi_{k_{\mu}}^{(\mu)}(q_{\mu},t) = \sum_{i_{\mu}=1}^{N_{\mu}} c_{i_{\mu}}^{(\mu,k_{\mu})}(t)\chi_{i_{\mu}}^{(\mu)}(q_{\mu})} \end{equation} \end{frame} \begin{frame} \frametitle{Multilayer MCTDH (3-layer case)} \begin{minipage}{0.3\linewidth} \tiny{ \begin{equation*} \Psi(q_1, q_2, q_3, t) = \sum_{j_{12}=1}^{n_{12}}\sum_{j_3 = 1}^{n_3}A_{j_{12}j_3}(t)\textcolor{red}{\varphi^{(12)}_{j_{12}}(q_1,q_2,t)}\varphi^{(3)}_{j_{3}}(q_3,t) \end{equation*} \begin{equation*} \textcolor{red}{\varphi^{(12)}_{j_{12}}(q_1,q_2,t) = \sum_{k_1 = 1}^{n_1}\sum_{k_2=1}^{n_2}B_{k_1,k_2}^{(12,j_{12})}(t)\prod_{\mu=1}^2} \textcolor{blue}{\xi_{k_{\mu}}^{(\mu)}(q_{\mu},t)} \end{equation*} \begin{equation*} \textcolor{blue}{\xi_{k_{\mu}}^{(\mu)}(q_{\mu},t) = \sum_{i_{\mu}=1}^{N_{\mu}} c_{i_{\mu}}^{(\mu,k_{\mu})}(t)\chi_{i_{\mu}}^{(\mu)}(q_{\mu})} \end{equation*}} \end{minipage} \hfill \begin{minipage}{0.4\linewidth} \begin{center} \includegraphics[scale=.26]{mlmctdh.png} \end{center} \end{minipage} \end{frame} \subsection{Relaxation and block-improved relaxation}\label{relax} \begin{frame} \frametitle{Obtaining vibrational orbitals} MCTDH can be also used to solve the TISE. 
The GS distribution of the system can be obtained by propagation in negative imaginary time \(\tau=-it\): \begin{equation} \dot{\Psi} = -\hat{H}\Psi \end{equation} The new algorithm can be derived by applying the time-independent variational principle with Lagrange multipliers: \begin{equation} \delta \{\braket{\Psi|\hat{H}|\Psi} -E(\sum_J A_J^{*}A_J-1) -\sum_{\kappa=1}^f\sum_{j,l=1}^{n_{\kappa}}\epsilon^{(\kappa)}_{jl}[\braket{\varphi_j^{(\kappa)}|\varphi_l^{(\kappa)}} - \delta_{jl}] \} = 0 \end{equation} \end{frame} \begin{frame} \frametitle{Obtaining vibrational orbitals} Taking the variations with respect to the complex conjugate of both the A-vector and the SPFs independently we get: \begin{block}{} \begin{align} \begin{split} \sum_K H_{JK}A_K &= EA_J \\ \frac{\partial \varphi^{(\kappa)}_j}{\partial \tau} &= -(1-\hat{P}^{(\kappa)}) \sum_{k,l}(\rho^{(\kappa)^{-1}})_{jk}\braket{\hat{H}}^{(\kappa)}_{kl}\varphi_l^{(\kappa)} = 0 \label{mctdh_eom_ti} \end{split} \end{align} \end{block} The second of these equations implies that we can obtain the updated SPFs simply by relaxation. The A-vector in the first equation can be obtained by Davidson diagonalization algorithm. \end{frame} \section{Potential energy surface representations}\label{probpes} \begin{frame}[fragile] \frametitle{The importance of the SOP form} \vspace{-.5cm} The multidimensional integrals arising from the MCTDH-EOMs are the bottleneck of the propagation. To address this issue, we impose SOP form to \textbf{all quantities}: \begin{align} \begin{split} \hat{O} &= \sum_{\alpha=1}^S c_{\alpha} \prod_{\kappa=1}^f \hat{o}_{\alpha}^{(\kappa)} \\ \braket{\Phi_J|\hat{O}|\Phi_L} &= \sum_{\alpha=1}^S c_{\alpha} \prod_{\kappa=1}^f \braket{\varphi_{j_{\kappa}}^{(\kappa)}|\hat{o}_{\alpha}^{(\kappa)}|\varphi_{l_{\kappa}}^{(\kappa)}} \end{split} \end{align} \begin{block}{} \begin{itemize} \item<1-> KEO already in the desired form (\verb|TANA| and \verb|TNUM| software) \item<2-> \textcolor{red}{PES might be challenging to transform} \end{itemize} \end{block} \end{frame} \subsection{Tensor decomposition algorithms}\label{tdec} \begin{frame} \frametitle{Transforming the PES} Usually the PES needs to be refitted (\textbf{tensor decomposed}) before using it. The POTFIT algorithm is an elegant way of achieving this in \textbf{Tucker form}: \begin{align} \label{potfit} \begin{split} V_{i_1,\ldots,i_f} &= V(q^{(1)}_{i_1},\ldots,q^{(f)}_{i_f}) \\ V_{i_1,\ldots,i_f} &= \sum_{j_1=1}^{m_1} \cdots \sum_{j_f=1}^{m_f} C_{j_1 \cdots j_f}\prod_{\kappa=1}^f u_{i_{\kappa}j_{\kappa}}^{(\kappa)} \end{split} \end{align} with the core tensor coefficients given by the overlap with the potential: \begin{equation} \label{core} C_{j_1\ldots j_f} = \sum_{i_1\ldots i_f} V_{i_1\ldots i_f} u^{(1)}_{i_1\ j_1} \cdots u^{(f)}_{i_f\ j_f} \end{equation} \end{frame} \begin{frame} \frametitle{The Tucker form} The tucker decomposition of a 3D tensor can be represented graphically as\footnote{Panadés-Barrueta R., and Peláez D. 
JCP (in review)} \begin{center} \includegraphics[scale=.1]{tuck.pdf} \end{center} which can be contrasted with the algebraic and tensor forms: \begin{align} \label{alg} \begin{split} \textcolor{red}{V_{i_1,\ldots,i_f}} &= \sum_{j_1=1}^{m_1} \cdots \sum_{j_f=1}^{m_f} \textcolor{green}{C_{j_1 \cdots j_f}}\prod_{\kappa=1}^f \textcolor{blue}{u_{i_{\kappa}j_{\kappa}}^{(\kappa)}}\\ \textcolor{red}{\mathcal{V}} &= \textcolor{green}{\mathcal{C}} \times_1 \textcolor{blue}{\mathbf{U}_1} \cdots \times_n \textcolor{blue}{\mathbf{U}_n} \end{split} \end{align} \end{frame} \begin{frame} \frametitle{Tensor decomposition algorithms} There is a number of tensor decomposition algorithms currently in use (e.g. POTFIT, MGPF, MCPF, MLPF), however, they are all limited by the size of the grids. The \textbf{SOP-FBR} method was developed as an alternative to the former: \begin{align} \label{sopfbr} \begin{split} V(q_1, \ldots, q_f) &= \sum_{j_1=1}^{m_1} \cdots \sum_{j_f=1}^{m_f} C_{j_1 \cdots j_f}\prod_{\kappa=1}^f \Phi_{j_{\kappa}}^{(\kappa)}(q_{\kappa})\\ \Phi_{j_{\kappa}}(q_{\kappa}) &= \sum_{\nu_{\kappa}=1}^{t_k}B_{\nu_{\kappa}j_{\kappa}}^{(\kappa)}T_{\nu_{\kappa}}(q_{\kappa}) \end{split} \end{align} This is a fully analytical SOP form, differentiable \emph{ad infinitum}, and that can be directly interfaced with MCTDH\@. \end{frame} \begin{frame} \frametitle{\normalsize The POTFIT and HOOI algorithms} \vspace{-1cm} \begin{columns} \begin{column}{0.6\textwidth} \centering \scalebox{.7}{\begin{algorithm}[H] \SetAlgoLined \KwResult{\(\mathcal{C}, \mathbf{U_1, \ldots, U_n}\)} Input: \(\mathcal{V}\)\; \For{\(k\gets1\) \KwTo \(n\)} { \(\mathbf{U}_k\) \(\leftarrow\) \(EVD(\mathbf{V}_{(k)}^{\dagger} \cdot \mathbf{V}_{(k)})\) } \(\mathcal{C}\) \(\leftarrow\) \(\mathcal{V} \times_1 \mathbf{U_1}^{-1}\cdots \times_n \mathbf{U_n}^{-1} \) \caption{POTFIT} \end{algorithm}} \end{column} \hspace{-1cm} \begin{column}{0.7\textwidth} \centering \scalebox{.7}{\begin{algorithm}[H] \SetAlgoLined \KwResult{\(\mathcal{C}, \mathbf{U_1, \ldots, U_n}\)} Input: \(\mathcal{V}\)\; \Repeat{\textcolor{red}{\(\norm{\mathcal{V}_{app} - \mathcal{V}} < \epsilon\)}}{ \For{\(k\gets1\) \KwTo \(n\)} { \textcolor{red}{\(\mathcal{Y} \leftarrow \mathcal{V} \times_1 \mathbf{U}_1^{-1} \cdots \times_{k-1} \mathbf{U}_{k-1}^{-1} \times_{k+1} \mathbf{U}_{k+1}^{-1} \cdots \times_n \mathbf{U}_{n}^{-1}\)\;} \(\mathbf{U}_k\) \(\leftarrow\) \(SVD(\mathbf{V}_{(k)})\) \textcolor{red}{\(SVD(\mathbf{Y}_{(k)})\)} } \(\mathcal{C}\) \(\leftarrow\) \(\mathcal{V} \times_1 \mathbf{U_1}^{-1}\cdots \times_n \mathbf{U_n}^{-1} \) } \caption{HOSVD \textcolor{red}{HOOI}} \end{algorithm}} \end{column} \end{columns} \begin{block}{} \begin{itemize} \item \footnotesize \(EVD(\mathbf{V}_{(k)}^{\dagger} \cdot \mathbf{V}_{(k)}) \equiv SVD(\mathbf{V}_{(k)})\) ! 
\item \footnotesize POTFIT optimizes the factor matrices in a slightly different manner: \[\mathbf{\tilde{v}}_j^{(\kappa)} = \mathbf{v}_j^{(\kappa)} + \sum_{l=n_{\kappa}+1}^{N_{\kappa}}\mu_{jl}^{(\kappa)}\mathbf{v}^{(\kappa)}_{l}\] \end{itemize} \end{block} \vspace{-1cm} \end{frame} \begin{frame}\frametitle{\normalsize The SOP-FBR algorithm} \centering \hspace{-1.4cm} \scalebox{.55}{ \begin{algorithm}[H] \SetAlgoLined \SetKwProg{Fnb}{Function}{:}{\KwRet \(E_{sop}\)} \SetKwFunction{Fosp}{sopfbr} \SetKwFunction{Fcheb}{chebyshev} \Fnb{\Fosp(\(B, C\))}{ \(l \leftarrow 0\)\; \For{\(k \leftarrow 0\) \KwTo D}{ \For{\(j \leftarrow 0\) \KwTo \(M[k]\)}{ \For{\(i \leftarrow 0\) \KwTo \(G_{ab}[:, k]\)}{ \(U_{ij}^{(k)} \leftarrow \text{\Fcheb}(G_{ab}[i, k], B(l:l+T[k]))\)\; } \(l \leftarrow l + T[k]\)\; } } \(E_{sop} \leftarrow C \times_1 U^{(1)} \cdots \times_D U^{(D)} \)\; } \end{algorithm} } \scalebox{.55}{ \begin{algorithm*}[H] \SetAlgoLined \SetKwProg{Fn}{Function}{:}{\KwRet \(\rho\)} \SetKwFunction{Fosp}{sopfbr} \SetKwFunction{Fgen}{geogen} \SetKwFunction{Ftar}{target} \SetKwFunction{Fcheb}{chebyshev} \SetKwFunction{Fspl}{split} \SetKwFunction{Fconc}{concatenate} \KwResult{\(x_{opt}\)} Input: \(x_{guess}\) guess parameters, \(D\) dimensionality, \(M\) number of basis functions, \(T\) degree of Chebyshev series, \(N_g\) number of geometries, \(\epsilon\) threshold \; \(k \leftarrow 0\)\; \(x_0 \leftarrow x_{guess}\)\; \(G_{ab}, E_{ab} \leftarrow \text{\Fgen}(N_g)\)\; \Fn{\Ftar(\(B, C\))}{ \(E_{sop} \leftarrow \text{\Fosp}(B, C)\)\; \(\rho \leftarrow \lVert E_{ab} - E_{sop} \lVert_{L_2}\)\; } \Repeat{\(\rho < \epsilon \lor k < N\)}{ \(B, C \leftarrow \text{\Fspl}(x_{k}, T \times M)\)\; \(B \leftarrow \text{BFGS}(\text{\Ftar}(B,\bar{C}))\)\; \(\rho, C \leftarrow \text{Powell}(\text{\Ftar}(\bar{B},C))\)\; \(x_{k+1} \leftarrow \text{\Fconc}(B, C)\)\; \(k \leftarrow k + 1\)\; } \(x_{opt} \leftarrow x_k\) \caption{SOP-FBR}\label{algo_10} \end{algorithm*} } \hspace{-1.2cm} \vspace{-0.4cm} \end{frame} \section{Code structure and example applications}\label{applic} \begin{frame}[fragile] \frametitle{The Heidelberg implementation of MCTDH} \justifying{ The actual implementation is written mainly in\img{fort.png}, with some small\img{c.png}and\img{python.png}contributions. The program has a modular structure with a very intuitive and consistent input syntax. 
Some sections of a POTFIT input file:} \lstset{frameround=fttt} \begin{lstlisting}[frame=trBL, breaklines, basicstyle=\tiny] RUN-SECTION # System declaration name = h2o.pfit # The file extension only end-run-section # suggests a POTFIT calculation OPERATOR-SECTION pes = pjt2{binding} vcut < 0.5 # Define Hamiltonian end-operator-section PRIMITIVE-BASIS-SECTION r1 sin 34 1.0 3.475 r2 sin 34 1.0 3.475 # Define coordinates theta Leg/R 50 0 all 0.5 3.2 # and basis functions end-primitive-basis-section \end{lstlisting} \end{frame} \begin{frame} \frametitle{Applications} Some interesting applications that showcase the power of MCTDH are~\footnote{Vendrell, O., and Meyer, H.D., JCP 134.4 (2011): 044135.}: \begin{figure}[ht] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{pyra.png} \caption{24D} \label{figpyr} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{hh.png} \caption{1458D} \label{fighh} \end{subfigure} \caption{Power spectrum obtained with ML-MCTDH for (a) pyrazine (b) the Henon-Heiles Hamiltonian}\label{figapl} \end{figure} \end{frame} \subsection[Bibliography]{}\label{biblio} \begin{frame} \frametitle{Bibliography} \centering \begin{minipage}{.6\linewidth} \small{Gatti, F., \emph{et al.} Applications of quantum dynamics in chemistry. Vol. 98. Springer, 2017.} \end{minipage} \hspace{1cm} \begin{minipage}{.1\linewidth} \includegraphics[width=3.3em]{appli.jpg} \end{minipage} \vspace{.5cm} \begin{minipage}{.6\linewidth} \small{Meyer, H.D. (\LaTeX{} version by Pel\'aez, D.) ``Introduction to MCTDH.'' Lecture Notes (2011)} \end{minipage} \hspace{1cm} \begin{minipage}{.1\linewidth} \includegraphics[width=2.7em, cfbox=black]{int_dan.pdf} \end{minipage} \vspace{.5cm} \begin{minipage}{.6\linewidth} \small{Beck, M.H., \emph{et al.} The multiconfiguration time-dependent Hartree (MCTDH) method: a highly efficient algorithm for propagating wavepackets. Physics reports 324.1 (2000): 1--105.} \end{minipage} \hspace{1cm} \begin{minipage}{.1\linewidth} \includegraphics[width=2.7em, cfbox=black]{mctdh_rev.png} \end{minipage} \end{frame} \begin{frame} \centering \Large Thanks for your attention!\\ \textcolor{blue}{Bedankt voor uw aandacht!}\\~\\ Questions?\\ \textcolor{blue}{Vragen?} \end{frame} \end{document}
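As a small numerical aside to the POTFIT/Tucker slides above, the NumPy sketch below builds a toy Tucker decomposition of a 3D grid in plain HOSVD style: factor matrices from the singular vectors of the mode unfoldings, and the core tensor by contracting the grid with them. It is a generic illustration only -- not the actual POTFIT code -- and the toy potential function used here is made up.
\begin{verbatim}
# Toy Tucker/HOSVD decomposition of a 3D grid (illustration only, not POTFIT).
import numpy as np

def unfold(tensor, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(v, ranks):
    """Return (core, factors); factors[k] has shape (N_k, m_k)."""
    factors = []
    for k, m in enumerate(ranks):
        # Factor matrix: leading left singular vectors of the mode-k unfolding.
        u, _, _ = np.linalg.svd(unfold(v, k), full_matrices=False)
        factors.append(u[:, :m])
    core = v
    for u in factors:
        # Contract the current leading physical axis with its factor matrix;
        # the reduced axis is appended at the end, so after all modes the
        # core axes come out in the original mode order.
        core = np.tensordot(core, u, axes=([0], [0]))
    return core, factors

def tucker_rebuild(core, factors):
    """Reassemble the full grid from the core tensor and factor matrices."""
    out = core
    for u in factors:
        out = np.tensordot(out, u.T, axes=([0], [0]))
    return out

if __name__ == "__main__":
    # Made-up "potential" on a 20 x 18 x 16 grid of three coordinates.
    q1, q2, q3 = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 18),
                             np.linspace(-1, 1, 16), indexing="ij")
    v = 0.5 * (q1**2 + q2**2 + q3**2) + 0.1 * q1 * q2 * q3
    core, factors = tucker_hosvd(v, ranks=(4, 4, 4))
    print("core shape:", core.shape)
    print("max refit error:", float(np.abs(tucker_rebuild(core, factors) - v).max()))
\end{verbatim}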
{ "alphanum_fraction": 0.6471780867, "avg_line_length": 37.2962962963, "ext": "tex", "hexsha": "76a8fa316afa5701eb00cba823993db1aa3dc584", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2f95684bcfd8d4bb51b5c0ad7677db553720b467", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Panadestein/mctdh_talk", "max_forks_repo_path": "mctdh_intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2f95684bcfd8d4bb51b5c0ad7677db553720b467", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Panadestein/mctdh_talk", "max_issues_repo_path": "mctdh_intro.tex", "max_line_length": 259, "max_stars_count": null, "max_stars_repo_head_hexsha": "2f95684bcfd8d4bb51b5c0ad7677db553720b467", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Panadestein/mctdh_talk", "max_stars_repo_path": "mctdh_intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9227, "size": 24168 }
\documentclass[letterpaper]{article}
\usepackage{amsmath, amsfonts, amssymb, amsthm}
\usepackage{enumerate,hyperref}
\usepackage[margin=1in]{geometry}
\usepackage[section]{placeins}
\theoremstyle{definition}
\newtheorem{problem}{Problem}
\newtheorem*{lemma}{Lemma}
\newtheorem*{corollary}{Corollary}
\providecommand{\equationref}[1]{Equation \eqref{eq:#1}}
\providecommand{\needscite}{[\textit{Citation Needed}]}
\setcounter{secnumdepth}{0}
\title{Spell Damage Analysis and Stat Weights}
\author{Balor - Anathema}
\date{Status: DRAFT. Last updated \today}
\begin{document}
\maketitle
This analysis was motivated by determining stat weights for a Balance druid casting Starfire. Wherever possible, however, things were kept general so as to be applicable to other spells and classes.
\section{Assumptions}
We use the following assumptions about how damage works.
\begin{itemize}
\item Spells have a base chance to hit that is purely a function of player level, target level, and Hit gear. Resistance does not affect a spell's chance to hit. For raid bosses, the base spell hit is 83, and thus a spell's percent chance to hit is $(83 + H)$, where $H$ is your total hit bonus from gear or talents.\needscite
\item Whether a spell lands as a critical hit is determined after a spell is known to land. That is, a 10\% chance to crit means that 10\% of all spells \textit{that hit} will be critical hits, not that 10\% of all spells that are cast will crit. \needscite This is in contrast to melee attacks, which use a different system to determine hit and crit chance.
\item Critical hits provide a fixed multiplicative boost to the damage of a spell. This is usually a 1.5 multiplier, but can vary depending on talents. \needscite For Balance Druids, the Vengeance talent gives a 2.0 damage multiplier on critical hits.
\item Spellpower increases the damage of a spell by increasing the damage of a non-resisted, non-critical hit by $c$ times your total spellpower, where $c$ is a fixed constant for a given spell. Usually, this constant is given by the default cast time for that spell divided by 3.5. \needscite
\end{itemize}
\section{The Damage Formula}
Let $B$ be the base damage of a spell, $c$ be that spell's corresponding spell coefficient, $H \in [0, 16]$ be a player's current total hit bonus (as a percentage, so +12\% hit is $H = 12$; note that player hit chance cannot be increased to 100, so only the first 16 are useful \needscite), $P$ be a player's total spellpower that applies to that spell, and $R \in [0, 100]$ be the player's spell crit, also as a percentage. Finally, let $x$ be the crit bonus, i.e., the crit multiplier minus one (for example, if spell crits do 1.5 times damage in the default case, $x = 0.5$). Then the expected damage from one spell cast on a raid boss is given by the following.
\begin{equation}
\left(0.83 + \frac{H}{100}\right)\left(B + cP\right)\left(1 + x\frac{R}{100}\right)
\label{eq:damage}
\end{equation}
To get DPS, we can simply divide this by $T$, the total casting time of the spell. There is one complication here for druids, however. The Nature's Grace talent decreases the cast time of your next spell by 0.5 seconds whenever a spell lands a critical hit. Using assumption 2 above, we know that the probability of one spell resulting in a critical hit is $(0.83 + \frac{H}{100})(\frac{R}{100})$. Therefore, we can calculate an average cast time for the spell over a sufficiently long encounter as the following. Note that $t$ here is the casting time reduction that a critical hit yields.
In the case of having Nature's Grace, $t=0.5$. If one does not have Nature's Grace, then $t=0$.
\begin{equation}
T - t\left(0.83 + \frac{H}{100}\right)\frac{R}{100}
\label{eq:time}
\end{equation}
Note that this is somewhat inaccurate, as the first spell in a fight is guaranteed to take $T$ time to cast, and so this is truly only the expected cast time for all subsequent spells. Factoring in the additional time from the first cast would require making assumptions on the total encounter length, which we hope to avoid here. Over sufficiently long encounters, these will converge to the same value, so this effect is ignored in the following analysis.

Dividing the expected damage in \equationref{damage} by the expected cast time in \equationref{time} yields our expected total DPS, $D$.
\begin{equation}
D = d\frac{\left(0.83 + \frac{H}{100}\right)\left(mB + cP\right)\left(1 + x\frac{R}{100}\right)}{T - t\left(0.83 + \frac{H}{100}\right)\frac{R}{100}}
\end{equation}
For completeness, we have added in two additional factors, $d$ and $m$. $m$ is any multiplicative modifier on the base damage of a spell that might arise from talents or set bonuses. For example, the Druid talent Moonfury sets $m=1.1$. $d$ is any multiplicative modifier on the total damage of the spell, including things like Curse of Shadows and the target's resistance. (TODO: add an argument for why we can treat resistance, which really determines a probability distribution of multiplicative damage reductions, as one simple average damage reduction. Also verify that either resistance cannot cause full 100\% damage reductions, or that, if it does, a spell can still be a crit while being 100\% resisted. If this is untrue, resistance will have an effect on Nature's Grace proc rates.)

\section{Stat Weightings}
To determine how we should value each stat ($H$, $P$, $R$), we have to examine how DPS varies as each stat changes. To do so, we will use derivatives, which measure the rate of change of the function with respect to a given parameter. The partial derivatives of DPS with respect to $H$, $P$, and $R$ are given below.
\begin{equation}
\frac{\partial D}{\partial P} = d\frac{c\left(83+H\right)\left(100 + xR\right)}{100^2T - t(83 + H)R}
\end{equation}
\begin{equation}
\frac{\partial D}{\partial H} = d\left(mB + cP\right)\left(100+xR\right)
\left(\frac{100^2T}{\left(100^2T - t\left(83+H\right)R\right)^2}\right)
\end{equation}
\begin{equation}
\frac{\partial D}{\partial R} = d\left(mB+cP\right)\left(83+H\right)
\left(\frac{xT + t\left(0.83 + \frac{H}{100}\right)}{\left(100T - t\left(0.83 + \frac{H}{100}\right)R\right)^2}\right)
\end{equation}
$\frac{\partial D}{\partial P}$ says that, when adding a very small amount of $P$, we expect the function value to change by $\frac{\partial D}{\partial P}$ \textit{per point of $P$ we varied}. It is the limiting value for very small changes of $P$, which gives a sense of how relevant $P$ is to the output function at a given point in the parameter space. Since we are concerned with stat weights, what we care most about is how these derivatives relate to each other. If we set the value of one point of spellpower to be 1 by convention, then taking ratios of derivatives will give us values for the other stats, $R$ and $H$. These equations are as follows.
\begin{equation} \textrm{HitWeight} = \frac{\frac{\partial D}{\partial H}}{\frac{\partial D}{\partial P}} = \frac{\frac{mB}{c} + P}{83 + H} \left(\frac{100^2 T}{100^2T - t(83 + H)R}\right) \end{equation} \begin{equation} \textrm{CritWeight} = \frac{\frac{\partial D}{\partial R}}{\frac{\partial D}{\partial P}} = x\frac{\frac{mB}{c} + P}{100+xR} \left(\frac{T + \frac{t}{x}\left(0.83+\frac{H}{100}\right)}{T - t\left(0.83 + \frac{H}{100}\right)\frac{R}{100}}\right) \end{equation} \subsection{No Nature's Grace} To slightly generalize these to other classes, we can remove Nature's Grace from the equations by setting the casting time reduction from a crit to zero. That is, by setting $t=0$. Note that the equations were already factorized to make the impact of Nature's Grace apparent. Upon doing so, we get the following stat weights, which should be applicable to other classes. \begin{equation} \nonumber \textrm{SpellpowerWeight} = 1 \end{equation} \begin{equation} \nonumber \textrm{HitWeight} = \frac{\frac{mB}{c} + P}{83 + H} \end{equation} \begin{equation} \nonumber \textrm{CritWeight} = x\frac{\frac{mB}{c} + P}{100 + xR} \end{equation} \end{document}
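The stat weights above are easy to sanity-check numerically. The Python sketch below evaluates $D$ as defined earlier and compares the closed-form weights against finite-difference ratios of the partial derivatives; all concrete numbers in it (base damage, coefficient, gear stats) are illustrative placeholders rather than actual spell data.
\begin{verbatim}
# Sketch: sanity-check the stat-weight formulas against finite differences.
# Every concrete number below is an illustrative placeholder, not spell data.

def expected_dps(P, H, R, *, B, c, T, m=1.0, d=1.0, x=0.5, t=0.0):
    """Expected DPS D as defined in the text."""
    hit = 0.83 + H / 100.0
    damage = hit * (m * B + c * P) * (1.0 + x * R / 100.0)
    cast_time = T - t * hit * R / 100.0
    return d * damage / cast_time

def central_diff(f, value, eps=1e-6):
    return (f(value + eps) - f(value - eps)) / (2.0 * eps)

if __name__ == "__main__":
    pars = dict(B=500.0, c=1.0, T=3.0, m=1.1, d=1.0, x=1.0, t=0.5)  # placeholders
    P, H, R = 400.0, 4.0, 15.0

    dDdP = central_diff(lambda v: expected_dps(v, H, R, **pars), P)
    dDdH = central_diff(lambda v: expected_dps(P, v, R, **pars), H)
    dDdR = central_diff(lambda v: expected_dps(P, H, v, **pars), R)

    B, c, T, m, x, t = (pars[k] for k in ("B", "c", "T", "m", "x", "t"))
    hit_weight = (m * B / c + P) / (83 + H) * (100**2 * T) / (100**2 * T - t * (83 + H) * R)
    crit_weight = (x * (m * B / c + P) / (100 + x * R)
                   * (T + (t / x) * (0.83 + H / 100)) / (T - t * (0.83 + H / 100) * R / 100))

    print(f"hit weight : closed form {hit_weight:.4f}, finite difference {dDdH / dDdP:.4f}")
    print(f"crit weight: closed form {crit_weight:.4f}, finite difference {dDdR / dDdP:.4f}")
\end{verbatim}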
{ "alphanum_fraction": 0.7430564205, "avg_line_length": 79.495049505, "ext": "tex", "hexsha": "9dd89e7b899168398a25938a30b9ba776b9e0284", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ultrabis/libclassic", "max_forks_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_issues_repo_issues_event_max_datetime": "2022-02-27T07:02:52.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-04T20:57:18.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kmmiles/libclassic", "max_issues_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_line_length": 793, "max_stars_count": null, "max_stars_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kmmiles/libclassic", "max_stars_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2290, "size": 8029 }
\chapter{PIV data at station 6}
\label{appendix:station_6}

This appendix contains a complete set of plots for a full station, at $x/c=9.5$.

\input{tables/station_6_measurements}

\section{Free stream velocity of 15 meters per second}
\input{figs/tex/run_51}
\newpage
\section{Free stream velocity of 17 meters per second}
\input{figs/tex/run_52}
\newpage
\section{Free stream velocity of 19 meters per second}
\input{figs/tex/run_53}
\newpage
\section{Free stream velocity of 21 meters per second}
\input{figs/tex/run_54}
\newpage
\section{Free stream velocity of 23 meters per second}
\input{figs/tex/run_55}
\newpage
\section{Free stream velocity of 25 meters per second}
\input{figs/tex/run_56}
\newpage
\section{Free stream velocity of 27 meters per second}
\input{figs/tex/run_57}
\newpage
\section{Free stream velocity of 29 meters per second}
\input{figs/tex/run_58}
\newpage
\section{Free stream velocity of 31 meters per second}
\input{figs/tex/run_59}
\newpage
\section{Free stream velocity of 33 meters per second}
\input{figs/tex/run_60}
\newpage

\chapter{PIV data at 23 meters per second}
\label{appendix:23mps}

This appendix contains a complete set of plots for all stations at a selected velocity of 23 $m/s$.

\section{Station 1}
\input{figs/tex/run_5}
\newpage
\section{Station 2}
\input{figs/tex/run_15}
\newpage
\section{Station 3}
\input{figs/tex/run_25}
\newpage
\section{Station 4}
\input{figs/tex/run_35}
\newpage
\section{Station 5}
\input{figs/tex/run_45}
\newpage
\section{Station 6}
\input{figs/tex/run_55}
\newpage
\section{Station 7}
\input{figs/tex/run_65}
{ "alphanum_fraction": 0.7721280603, "avg_line_length": 24.1363636364, "ext": "tex", "hexsha": "c26a95a2a554c164ffe56613823e1b1270404344", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Jwely/thesis-pivpr", "max_forks_repo_path": "texdocs/docs/appendices/piv_data_appendices.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Jwely/thesis-pivpr", "max_issues_repo_path": "texdocs/docs/appendices/piv_data_appendices.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Jwely/thesis-pivpr", "max_stars_repo_path": "texdocs/docs/appendices/piv_data_appendices.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 532, "size": 1593 }
\section{Form Plug-Ins} \subsection{Overview} \begin{table}[ht] \centering \begin{tabular}{@{}lr@{}} \toprule \textbf{Required superclass} & \textit{FormServerPlugin} \\ \textbf{Configurable Sub-URLs} & \textit{no} \\ \textbf{Requires JavaScript} & \textit{no} \\ \textbf{Methods to implement} & \textit{void performFormAction( RequestFacade req , } \\ & \textit{ResponseFacade resp , } \\ & \textit{MultiPartObject mpo , }\\ & \textit{ModelInformation mi , }\\ & \textit{LoginableUser u )} \\ \addlinespace & \textit{JSONObject getFormConfig( ModelInformation mi , } \\ & \textit{RequestFacade req , } \\ & \textit{LoginableUser u )} \\ \addlinespace & \textit{\textbf{(optional)} String getItemText()} \\ \addlinespace & \textit{\textbf{(optional)} String getItemIconPath()}\\ \addlinespace \textbf{Graphical representation} & \textit{a single menu item or context button} \\ \textbf{Example(s)} & \textit{Behavioral Interface Generation,} \\ & \textit{Deploy to SolutionCenter} \\ \bottomrule \end{tabular} \end{table} \subsection{Details} \subsubsection{Basic Information} As we can derive from the superclass name \textit{FormServerPlugin}, this type of plug-in displays a form. As an advantage over simple plug-ins the entered form data can be used while processing the users request. It is recommended to use this plug-in type, when the displayed form has to be generated dynamically. If the general shape of the form is indepent of a concrete model, or model state, please use \textit{DialogPlugin} as superclass (cf. Section \ref{dialog_plugins}). When the form is sent back to the server it is delivered in a special format called multipart. For details on multipart see Section \ref{multipart_format}. The implementer of the plug-in does not have to care about parsing this format since that is done before the respective methods are called. \subsubsection{Methods} \paragraph{void performAction(RequestFacade req, ResponseFacade resp, MultiPartObject mpo, ModelInformation mi, LoginableUser u );} After the form has been submitted, this method is responsible for processing the incoming request and sending a resulting response to the requester. All available information is contained within the \textit{ModelInformation} and the \textit{MultiPartObject} objects. The latter contains the data the user entered into the form belonging to this plug-in. See Section \ref{multipart_format} for information on the multipart format. See Section \ref{response_section} for information on how responses must be structured. \paragraph{JSONObject getFormConfig(ModelInformation mi, RequestFacade req, LoginableUser lu)} Return a \textit{JSONObject} that configures the required form. To facilitate this step there exist several classes within the package \verb!com.inubit.research.! \verb!server.extjs!. Detailed information on these classes is given in Section \ref{extjs}. Two buttons, "Submit" and "Cancel" are automatically added to the form. \paragraph{String getItemText();} Return the text for the menu item. If the \verb!showInToolbar()!-method for this plug-in returns \verb!true! the item text will be used as tooltip text instead. \paragraph{String getItemIconPath();} Return the path to the icon of the menu item. This is especially important if the plug-in is represented as a simple button within the toolbar or as a context menu button. \subsubsection{ExtJS-Form Creation} \label{extjs} The creation of ExtJS form configurations is facilitated by the \textit{ExtJSFormFactory} class. By calling \verb!createEmptyForm()! 
you will receive an empty form to which you can add all required items. As all objects returned by the factory extend the class \textit{JSONObject}, no further conversion is required.

Supported ExtJS form elements are (further elements may be implemented):
\begin{itemize}
	\item Container elements (can have multiple sub-elements):
	\begin{itemize}
		\item FieldSet
		\item CheckboxGroup
	\end{itemize}
	\item Simple elements:
	\begin{itemize}
		\item Checkbox
		\item TextField
	\end{itemize}
\end{itemize}

For configuring these elements (and also the form itself) use the respective \verb!setProperty(key, value)! method. To view all configurable attributes, take a look at \url{http://www.extjs.com/deploy/dev/docs/}. For accessing the entered data during request processing, you have to specify the \textbf{name}-attribute for each simple element. This name can then be used to get the corresponding value out of the created \textit{MultiPartObject} instance.

\subsubsection{Multipart Format}
\label{multipart_format}

When the user submits the form, the server receives the data in multipart format. In a first step this format is parsed into a Java object of type \textit{MultiPartObject}. Accessing this object will deliver the entered data to the plug-in. The following methods are considered to be helpful, where \verb!mpo! is an instance of class \textit{MultiPartObject}, \verb!mi! is a \textit{MultiPartItem}, and \verb!mp! is a \textit{SimpleMultipartParser}:
\begin{itemize}
	\item \verb!mpo.getItems()!\\
	Returns all items contained in the multipart object. That means that only form elements which are neither empty nor unset are returned by this call.
	\item \verb!mpo.getItemByName(String name)!\\
	Returns one specific item that is identified by its unique name. The name is taken from the \textbf{name}-attribute of the form element. The form element is only found if the name exists and the element has a non-\verb!null! value. For checkbox elements this means that they are only part of the submitted form if they were checked when the form was submitted.
	\item \verb!mi.getContent()!\\
	Gets the textual content of a form element. This is equal to the value the user entered into the specific field.
	\item \verb!mp.parseItemContentAsByteArray(BufferedInputStream bis, String itemName)!\\
	Reads a specific item of the input stream as a byte array. This can be used, e.g., to parse an image.
\end{itemize}
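To make the multipart format itself a bit more concrete, the following small Python sketch builds and then parses a minimal multipart/form-data body. It is an illustration of the wire format only: in a real plug-in the server performs this parsing before the methods above are called, and the field names used here are made up.
\begin{verbatim}
# Sketch: what a multipart/form-data body looks like and how field names map
# to values. Illustration only -- the server parses this into a MultiPartObject
# before the plug-in methods are called; the field names below are made up.
BOUNDARY = "----formBoundary1234"

def build_body(fields):
    """Serialize a dict of form fields into a multipart/form-data body."""
    lines = []
    for name, value in fields.items():
        lines += [f"--{BOUNDARY}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "",
                  str(value)]
    lines.append(f"--{BOUNDARY}--")
    return "\r\n".join(lines) + "\r\n"

def parse_body(body):
    """Recover {name: content} from a multipart body (text parts only)."""
    items = {}
    for part in body.split(f"--{BOUNDARY}"):
        part = part.strip()
        if not part or part == "--":
            continue
        headers, _, content = part.partition("\r\n\r\n")
        for header in headers.split("\r\n"):
            if header.lower().startswith("content-disposition") and 'name="' in header:
                name = header.split('name="', 1)[1].split('"', 1)[0]
                items[name] = content
    return items

if __name__ == "__main__":
    body = build_body({"interfaceName": "MyInterface", "generateStubs": "on"})
    print(parse_body(body))  # {'interfaceName': 'MyInterface', 'generateStubs': 'on'}
\end{verbatim}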
{ "alphanum_fraction": 0.75076428, "avg_line_length": 42.8620689655, "ext": "tex", "hexsha": "ff700f077eebeda95d45867bc04eb5dab22ded3d", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2021-12-26T07:01:44.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-24T11:15:41.000Z", "max_forks_repo_head_hexsha": "7a608d7fcc9a172a768a9907c9365a466164a7b1", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "frapu78/processeditor", "max_forks_repo_path": "docs/tex-src/ServerPlugins/03_form_plugins.tex", "max_issues_count": 32, "max_issues_repo_head_hexsha": "7a608d7fcc9a172a768a9907c9365a466164a7b1", "max_issues_repo_issues_event_max_datetime": "2018-06-18T19:34:53.000Z", "max_issues_repo_issues_event_min_datetime": "2015-02-26T21:09:45.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "frapu78/processeditor", "max_issues_repo_path": "docs/tex-src/ServerPlugins/03_form_plugins.tex", "max_line_length": 318, "max_stars_count": 5, "max_stars_repo_head_hexsha": "7a608d7fcc9a172a768a9907c9365a466164a7b1", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "frapu78/processeditor", "max_stars_repo_path": "docs/tex-src/ServerPlugins/03_form_plugins.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-27T15:52:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-12-29T00:56:12.000Z", "num_tokens": 1649, "size": 6215 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Study project: undamentals of Causal Inferences With R}, pdfauthor={François Lefebvre}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} 
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs,array} \usepackage{calc} % for calculating minipage widths % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} \usepackage{booktabs} \ifluatex \usepackage{selnolig} % disable illegal ligatures \fi \usepackage[]{natbib} \bibliographystyle{plainnat} \title{Study project: undamentals of Causal Inferences With R} \author{François Lefebvre} \date{2022-02-01} \usepackage{amsthm} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{lemma}{Lemma}[chapter] \newtheorem{corollary}{Corollary}[chapter] \newtheorem{proposition}{Proposition}[chapter] \newtheorem{conjecture}{Conjecture}[chapter] \theoremstyle{definition} \newtheorem{definition}{Definition}[chapter] \theoremstyle{definition} \newtheorem{example}{Example}[chapter] \theoremstyle{definition} \newtheorem{exercise}{Exercise}[chapter] \theoremstyle{definition} \newtheorem{hypothesis}{Hypothesis}[chapter] \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{solution}{Solution} \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \hypertarget{study-projec}{% \chapter*{Study Projec}\label{study-projec}} \addcontentsline{toc}{chapter}{Study Projec} \hypertarget{about}{% \section*{About}\label{about}} \addcontentsline{toc}{section}{About} This is a study project of \citet{brumback2022}. Many thanks to Babette Brumback for a great book to tackle the question of causality. It is \emph{extremely} in my professional life which is not about making predictions but rather come up with intervention plans (business intel). The suggestions for errata are in the section \emph{Errata}. Comments are also included in the section \emph{Comments}. \hypertarget{packages}{% \section*{Packages}\label{packages}} \addcontentsline{toc}{section}{Packages} The functions have been rewritten to simplify them and improve the learning experience (personal opinion) for newbies such as me. The following packages are so useful that they could not be avoided. In particular, \texttt{dplyr} and \texttt{tidyr} make the coding experience so much more interesting that one could almost claim they have become the standard in \texttt{R} coding. 
\begin{itemize} \tightlist \item \texttt{dplyr} for data wrangling. see \citet{R-dplyr} \item \texttt{tidyr} for data wrangling. see \citet{R-tidyr} \item \texttt{ggplot2} for plots, see \citet{R-ggplot2}` \item \texttt{gt} for all tables, see \citet{R-gt} \item \texttt{dagitty} for analysis of structural causal models, see \citet{R-dagitty} \item \texttt{ggdag} for directed acyclic graphs, see \citet{R-ggdag} \item \texttt{gee}: a generalized estimation equation solver. Introduced in chapter 4. See \citet{R-gee}. \item \texttt{MonteCarlo}: To perform Monte Carlo simulations. Introduced in chapter 4, section 4.2. See \citet{R-MonteCarlo}. \item \texttt{simstudy}: To run all sorts of simulations. A great tool to learn right from the start since simulations are so important. See \citet{R-simstudy}. \end{itemize} \hypertarget{errata}{% \chapter*{Errata}\label{errata}} \addcontentsline{toc}{chapter}{Errata} \hypertarget{preface}{% \section*{Preface}\label{preface}} \addcontentsline{toc}{section}{Preface} page xi, last word of first paragraph is \textbf{standaridzation}, s/b \emph{standardization} \hypertarget{chapter-1}{% \section*{Chapter 1}\label{chapter-1}} \addcontentsline{toc}{section}{Chapter 1} \hypertarget{section-1.2.3.2-p.-11}{% \subsection*{Section 1.2.3.2, p.~11}\label{section-1.2.3.2-p.-11}} \addcontentsline{toc}{subsection}{Section 1.2.3.2, p.~11} The sentence of the 6th line on top of the page is ``We simulated the data according to the \textbf{hyothetical}'', s/b \emph{hypothetical} \hypertarget{chapter-2}{% \section*{Chapter 2}\label{chapter-2}} \addcontentsline{toc}{section}{Chapter 2} \hypertarget{figure-2.1-p.-30}{% \subsection*{Figure 2.1, p.~30}\label{figure-2.1-p.-30}} \addcontentsline{toc}{subsection}{Figure 2.1, p.~30} This is really a small detail. The caption of the bottom plot is \(\hat{E_{np}}(Y \mid A= 1, H =1, T = 1)\), s/b \(\hat{E}_{np}\) \hypertarget{chapter-3}{% \section*{Chapter 3}\label{chapter-3}} \addcontentsline{toc}{section}{Chapter 3} \hypertarget{typography-section-3.2-p.-40-equation-3.1}{% \subsection*{Typography: section 3.2 p.~40, equation 3.1}\label{typography-section-3.2-p.-40-equation-3.1}} \addcontentsline{toc}{subsection}{Typography: section 3.2 p.~40, equation 3.1} The current latex expression of conditional independence used seems to be \texttt{(Y(0),\ Y(1))\ \textbackslash{}\ \textbackslash{}text\{II\}\ \textbackslash{}\ T} with the output \[ (Y(0), Y(1)) \ \text{II} \ T \] a better typography would be \texttt{\textbackslash{}perp\textbackslash{}!\textbackslash{}!\textbackslash{}!\textbackslash{}perp} for the symbol \(\perp\!\!\!\perp\). When used for equation 3.1 as \texttt{(Y(0),\ Y(1))\ \textbackslash{}perp\textbackslash{}!\textbackslash{}!\textbackslash{}!\textbackslash{}perp\ T} we obtain \[ (Y(0), Y(1)) \perp\!\!\!\perp T \] \hypertarget{comments}{% \chapter*{Comments}\label{comments}} \addcontentsline{toc}{chapter}{Comments} \hypertarget{chapter-2-1}{% \section*{Chapter 2}\label{chapter-2-1}} \addcontentsline{toc}{section}{Chapter 2} \hypertarget{section-2.4-p.-31}{% \subsection*{section 2.4 p.~31}\label{section-2.4-p.-31}} \addcontentsline{toc}{subsection}{section 2.4 p.~31} The second sentence of the last paragraph on p.~33 says \begin{quote} We also need the \texttt{car} package in order for the summary() function to operate on boot objects the way we describe. 
\end{quote} This sentence is \textbf{not required} if we use the \texttt{boot::boot.ci()} which simplifies \texttt{lmodboot.r()} and does not require the \texttt{car} package. See the code in this document for \texttt{lmodboot.r} in chapter 2. \hypertarget{chapter-4}{% \section*{Chapter 4}\label{chapter-4}} \addcontentsline{toc}{section}{Chapter 4} \hypertarget{section-4.1}{% \subsection*{Section 4.1}\label{section-4.1}} \addcontentsline{toc}{subsection}{Section 4.1} See the plots in section 4.2. They could be helpful to visualize the changes in effect measures from one level of modifier to the other. \hypertarget{section-4.2}{% \subsection*{Section 4.2}\label{section-4.2}} \addcontentsline{toc}{subsection}{Section 4.2} \hypertarget{monte-carlo-simulation}{% \subsubsection*{Monte Carlo Simulation}\label{monte-carlo-simulation}} \addcontentsline{toc}{subsubsection}{Monte Carlo Simulation} A Monte Carlo is provided in section 4.2 and coded in a function called \texttt{betasim\_effect\_measures()}. It uses the \(Beta\) distribution. It is helpful in that it \begin{itemize} \tightlist \item confirms the same results as in \citet{shanninbrumback2021} \item is less CPU intensive as it needs only 5000 iterations to confirm \citet{shanninbrumback2021} \item is easier to code than \texttt{java} and uses \texttt{R} which is the declared language of \citet{brumback2022} \item allows some extra flexibility with the shape parameters of \(Beta\) to investigate the conclusion with diffferent curves. See the suggestion for applications below. \end{itemize} \hypertarget{page-72-figure-4.1}{% \subsubsection*{page 72, Figure 4.1}\label{page-72-figure-4.1}} \addcontentsline{toc}{subsubsection}{page 72, Figure 4.1} \begin{quote} The probabilites shown in the Venn diagram do not add up to 100\% because, for example, the event that RR changes in the same direction as RD but not in the same direction as the other two measures {[}\ldots{]}. It would akward to arbitrarily one of those 2 chances as zero. \end{quote} \citet{shanninbrumback2021} mentions that it is the result of \emph{not mutually exclusive events}. That is true. Yet, these events, properly grouped are actually mutually exclusive. In section 4.2 they are called \textbf{Opposite pairwise events}. Using these definitions then yes, they are mutually exclusive but cannot be properly shown in the Venn diagram. This can be easily solved by splitting the probabilities. See section 4.2 for details. The end result a proper partitioning of the sample space \(\Omega\) and is, in fact, a \(\sigma-field\) (See \citet{grimmett}, section 1.2). Yet it does not change the conclusions reached in \citet{shanninbrumback2021}. Actually, it reinforces them as this point is \textbf{extremely important} when using probabilities and statistics. \hypertarget{applications}{% \subsubsection*{Applications}\label{applications}} \addcontentsline{toc}{subsubsection}{Applications} See my sub-section 4.2 called \emph{Applications} where 2 possible applications are mentioned. \begin{itemize} \tightlist \item Data pre-processing (data cleaning) \item Bayesian prior for Beta-binomial model \end{itemize} \hypertarget{exercises}{% \subsection*{Exercises}\label{exercises}} \addcontentsline{toc}{subsection}{Exercises} \hypertarget{exercise-1}{% \subsubsection*{Exercise 1}\label{exercise-1}} \addcontentsline{toc}{subsubsection}{Exercise 1} Using the causal power, the conclusion is different than the official answer. 
It is not obvious why the official solution does not make use of the \emph{causal power}. \hypertarget{exercise-5}{% \subsubsection*{Exercise 5}\label{exercise-5}} \addcontentsline{toc}{subsubsection}{Exercise 5} The official solution uses \texttt{gee} with the default family, that is \texttt{gaussian}. Since the outcome \(attend\) is binary isn't it better to use the \texttt{binomial} family? We quote p.~50 from chapter 3 in that respect \begin{quote} Because our outcome is binary, we choose to fit the logistic parametric model \end{quote} \hypertarget{chapter-5}{% \section*{Chapter 5}\label{chapter-5}} \addcontentsline{toc}{section}{Chapter 5} The \texttt{dagitty} and \texttt{ggdag} are used extensively. \hypertarget{hello-bookdown}{% \chapter{Hello bookdown}\label{hello-bookdown}} All chapters start with a first-level heading followed by your chapter title, like the line above. There should be only one first-level heading (\texttt{\#}) per .Rmd file. \hypertarget{setup}{% \section{Setup}\label{setup}} Make sure you tell GitHub that the web site is not to be build via Jekyll, since the \textbf{bookdown} HTML output is already a standalone website. See section 6.3 of \href{https://bookdown.org/yihui/bookdown/github.html}{bookdown} for details. \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# create a hidden file .nojekyll} \CommentTok{\# to tell GitHub that the website is not to be build via Jekyll} \NormalTok{a\_file }\OtherTok{\textless{}{-}} \FunctionTok{file.path}\NormalTok{(}\FunctionTok{getwd}\NormalTok{(), }\StringTok{".nojekyll"}\NormalTok{)} \ControlFlowTok{if}\NormalTok{ (}\SpecialCharTok{!}\FunctionTok{file.exists}\NormalTok{(a\_file)) }\FunctionTok{file.create}\NormalTok{(a\_file)} \end{Highlighting} \end{Shaded} \hypertarget{a-section}{% \section{A section}\label{a-section}} All chapter sections start with a second-level (\texttt{\#\#}) or higher heading followed by your section title, like the sections above and below here. You can have as many as you want within a chapter. \hypertarget{an-unnumbered-section}{% \subsection*{An unnumbered section}\label{an-unnumbered-section}} \addcontentsline{toc}{subsection}{An unnumbered section} Chapters and sections are numbered by default. To un-number a heading, add a \texttt{\{.unnumbered\}} or the shorter \texttt{\{-\}} at the end of the heading, like in this section. \hypertarget{cross}{% \chapter{Cross-references}\label{cross}} Cross-references make it easier for your readers to find and link to elements in your book. \hypertarget{chapters-and-sub-chapters}{% \section{Chapters and sub-chapters}\label{chapters-and-sub-chapters}} There are two steps to cross-reference any heading: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Label the heading: \texttt{\#\ Hello\ world\ \{\#nice-label\}}. \begin{itemize} \tightlist \item Leave the label off if you like the automated heading generated based on your heading title: for example, \texttt{\#\ Hello\ world} = \texttt{\#\ Hello\ world\ \{\#hello-world\}}. \item To label an un-numbered heading, use: \texttt{\#\ Hello\ world\ \{-\#nice-label\}} or \texttt{\{\#\ Hello\ world\ .unnumbered\}}. \end{itemize} \item Next, reference the labeled heading anywhere in the text using \texttt{\textbackslash{}@ref(nice-label)}; for example, please see Chapter \ref{cross}. \begin{itemize} \tightlist \item If you prefer text as the link instead of a numbered reference use: \protect\hyperlink{cross}{any text you want can go here}. 
\end{itemize} \end{enumerate} \hypertarget{captioned-figures-and-tables}{% \section{Captioned figures and tables}\label{captioned-figures-and-tables}} Figures and tables \emph{with captions} can also be cross-referenced from elsewhere in your book using \texttt{\textbackslash{}@ref(fig:chunk-label)} and \texttt{\textbackslash{}@ref(tab:chunk-label)}, respectively. See Figure \ref{fig:nice-fig}. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, .}\DecValTok{1}\NormalTok{, .}\DecValTok{1}\NormalTok{))} \FunctionTok{plot}\NormalTok{(pressure, }\AttributeTok{type =} \StringTok{\textquotesingle{}b\textquotesingle{}}\NormalTok{, }\AttributeTok{pch =} \DecValTok{19}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} {\centering \includegraphics[width=0.8\linewidth]{_main_files/figure-latex/nice-fig-1} } \caption{Here is a nice figure!}\label{fig:nice-fig} \end{figure} Don't miss Table \ref{tab:nice-tab}. \begin{Shaded} \begin{Highlighting}[] \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \FunctionTok{head}\NormalTok{(pressure, }\DecValTok{10}\NormalTok{), }\AttributeTok{caption =} \StringTok{\textquotesingle{}Here is a nice table!\textquotesingle{}}\NormalTok{,} \AttributeTok{booktabs =} \ConstantTok{TRUE} \NormalTok{)} \end{Highlighting} \end{Shaded} \begin{table} \caption{\label{tab:nice-tab}Here is a nice table!} \centering \begin{tabular}[t]{rr} \toprule temperature & pressure\\ \midrule 0 & 0.0002\\ 20 & 0.0012\\ 40 & 0.0060\\ 60 & 0.0300\\ 80 & 0.0900\\ \addlinespace 100 & 0.2700\\ 120 & 0.7500\\ 140 & 1.8500\\ 160 & 4.2000\\ 180 & 8.8000\\ \bottomrule \end{tabular} \end{table} \hypertarget{parts}{% \chapter{Parts}\label{parts}} You can add parts to organize one or more book chapters together. Parts can be inserted at the top of an .Rmd file, before the first-level chapter heading in that same file. Add a numbered part: \texttt{\#\ (PART)\ Act\ one\ \{-\}} (followed by \texttt{\#\ A\ chapter}) Add an unnumbered part: \texttt{\#\ (PART\textbackslash{}*)\ Act\ one\ \{-\}} (followed by \texttt{\#\ A\ chapter}) Add an appendix as a special kind of un-numbered part: \texttt{\#\ (APPENDIX)\ Other\ stuff\ \{-\}} (followed by \texttt{\#\ A\ chapter}). Chapters in an appendix are prepended with letters instead of numbers. \hypertarget{footnotes-and-citations}{% \chapter{Footnotes and citations}\label{footnotes-and-citations}} \hypertarget{footnotes}{% \section{Footnotes}\label{footnotes}} Footnotes are put inside the square brackets after a caret \texttt{\^{}{[}{]}}. Like this one \footnote{This is a footnote.}. \hypertarget{citations}{% \section{Citations}\label{citations}} Reference items in your bibliography file(s) using \texttt{@key}. For example, we are using the \textbf{bookdown} package \citep{R-bookdown} (check out the last code chunk in index.Rmd to see how this citation key was added) in this sample book, which was built on top of R Markdown and \textbf{knitr} \citep{xie2015} (this citation was added manually in an external file book.bib). Note that the \texttt{.bib} files need to be listed in the index.Rmd with the YAML \texttt{bibliography} key. The RStudio Visual Markdown Editor can also make it easier to insert citations: \url{https://rstudio.github.io/visual-markdown-editing/\#/citations} \hypertarget{blocks}{% \chapter{Blocks}\label{blocks}} \hypertarget{equations}{% \section{Equations}\label{equations}} Here is an equation. 
\begin{equation} f\left(k\right) = \binom{n}{k} p^k\left(1-p\right)^{n-k} \label{eq:binom} \end{equation} You may refer to using \texttt{\textbackslash{}@ref(eq:binom)}, like see Equation \eqref{eq:binom}. \hypertarget{theorems-and-proofs}{% \section{Theorems and proofs}\label{theorems-and-proofs}} Labeled theorems can be referenced in text using \texttt{\textbackslash{}@ref(thm:tri)}, for example, check out this smart theorem \ref{thm:tri}. \begin{theorem} \protect\hypertarget{thm:tri}{}\label{thm:tri}For a right triangle, if \(c\) denotes the \emph{length} of the hypotenuse and \(a\) and \(b\) denote the lengths of the \textbf{other} two sides, we have \[a^2 + b^2 = c^2\] \end{theorem} Read more here \url{https://bookdown.org/yihui/bookdown/markdown-extensions-by-bookdown.html}. \hypertarget{callout-blocks}{% \section{Callout blocks}\label{callout-blocks}} The R Markdown Cookbook provides more help on how to use custom blocks to design your own callouts: \url{https://bookdown.org/yihui/rmarkdown-cookbook/custom-blocks.html} \hypertarget{sharing-your-book}{% \chapter{Sharing your book}\label{sharing-your-book}} \hypertarget{publishing}{% \section{Publishing}\label{publishing}} HTML books can be published online, see: \url{https://bookdown.org/yihui/bookdown/publishing.html} \hypertarget{pages}{% \section{404 pages}\label{pages}} By default, users will be directed to a 404 page if they try to access a webpage that cannot be found. If you'd like to customize your 404 page instead of using the default, you may add either a \texttt{\_404.Rmd} or \texttt{\_404.md} file to your project root and use code and/or Markdown syntax. \hypertarget{metadata-for-sharing}{% \section{Metadata for sharing}\label{metadata-for-sharing}} Bookdown HTML books will provide HTML metadata for social sharing on platforms like Twitter, Facebook, and LinkedIn, using information you provide in the \texttt{index.Rmd} YAML. To setup, set the \texttt{url} for your book and the path to your \texttt{cover-image} file. Your book's \texttt{title} and \texttt{description} are also used. This \texttt{gitbook} uses the same social sharing data across all chapters in your book- all links shared will look the same. Specify your book's source repository on GitHub using the \texttt{edit} key under the configuration options in the \texttt{\_output.yml} file, which allows users to suggest an edit by linking to a chapter's source file. Read more about the features of this output format here: \url{https://pkgs.rstudio.com/bookdown/reference/gitbook.html} Or use: \begin{Shaded} \begin{Highlighting}[] \NormalTok{?bookdown}\SpecialCharTok{::}\NormalTok{gitbook} \end{Highlighting} \end{Shaded} \bibliography{books.bib,packages.bib} \end{document}
{ "alphanum_fraction": 0.7466014138, "avg_line_length": 39.3207236842, "ext": "tex", "hexsha": "0e257860e62548531b4441934a5c9b5833ef89c2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9c825b8200bc8a136c9c7375f8da04f34096fda5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "FrankLef/FundamentalsCausalInference", "max_forks_repo_path": "_main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9c825b8200bc8a136c9c7375f8da04f34096fda5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "FrankLef/FundamentalsCausalInference", "max_issues_repo_path": "_main.tex", "max_line_length": 338, "max_stars_count": null, "max_stars_repo_head_hexsha": "9c825b8200bc8a136c9c7375f8da04f34096fda5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "FrankLef/FundamentalsCausalInference", "max_stars_repo_path": "_main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7567, "size": 23907 }
\documentclass{beamer} \usetheme{Boadilla} \mode<presentation> \title{The Transfer Matrix Method in Linear Dielectrics} \author{Claudio Barros} \institute{University at Buffalo} \date{\today} \begin{document} \begin{frame} \titlepage \end{frame} \begin{frame} \frametitle{Outline} \tableofcontents \end{frame} \section{Electromagnetic Plane-Waves in Dielectric Media} \section{Reflection and Transmission at Dielectric Interfaces} \section{The Transfer Matrix Method} \begin{frame} \frametitle{Electromagnetic Plane-Waves in Dielectric Media} As always, our starting point is the Maxwell equations: \begin{equation} \nabla \cdot \mathbf{D} = \rho_{f} \end{equation} \begin{equation} \nabla \cdot \mathbf{B} = 0 \end{equation} \begin{equation}\label{faraday} \nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}}{\partial t} \end{equation} \begin{equation}\label{ampere} \nabla \times \mathbf{H} = \frac{\partial \mathbf{D}}{\partial t} + \mathbf{J}_{f} \end{equation} In the case of linear and isotropic media we have the following relations for our fields: \begin{equation} \mathbf{D} = \epsilon \mathbf{E} \end{equation} \begin{equation} \mathbf{H} = \frac{1}{\mu} \mathbf{B} \end{equation} \end{frame} \begin{frame} In the absence of any sources, the Maxwell equations can be combined to obtain the Helmholtz equations: \begin{equation}\label{helm1} (\nabla^{2} + \mu \epsilon \omega^{2}) \mathbf{E} = 0 \end{equation} \begin{equation}\label{helm2} (\nabla^{2} + \mu \epsilon \omega^{2}) \mathbf{B} = 0 \end{equation} The solutions? Plane-waves, of course! \begin{equation} \mathbf{E}(\mathbf{r}, t) = \mathbf{E}_{0} e^{i( k \mathbf{n} \cdot \mathbf{r} - \omega t )} \end{equation} \begin{equation} \mathbf{B}(\mathbf{r}, t) = \mathbf{B}_{0} e^{i( k \mathbf{n} \cdot \mathbf{r} - \omega t )} \end{equation} The dispersion relation and index of refraction can also be obtained now: \begin{equation} k^{2} = \mu \epsilon \omega^{2} \end{equation} \begin{equation}\label{index} n = \sqrt{ \frac{ \mu \epsilon }{ \mu_{0} \epsilon_{0} } } \end{equation} \end{frame} \begin{frame} \frametitle{Reflection and Transmission at Dielectric Interfaces} \begin{figure}[h] \centering \includegraphics[width=.65\linewidth]{1} \caption{Propagation of a wave into a medium with a different index of refraction, which gives rise to reflected and refracted waves. 
The y-axis points into the page for this geometry [\textbf{1}].}
\label{fig:lat}
\end{figure}
\end{frame}
\begin{frame}
We can join all three waves at the interface and use the superposition principle:
\begin{equation}\label{superp} \mathbf{E}_{0} e^{i (\mathbf{k} \cdot \mathbf{r} - \omega t)} + \mathbf{E}_{0}'' e^{i (\mathbf{k}'' \cdot \mathbf{r} - \omega t)} = \mathbf{E}_{0}' e^{i (\mathbf{k}' \cdot \mathbf{r} - \omega t)} \end{equation}
\begin{equation}\label{k1} |\mathbf{k}| = |\mathbf{k}''| = \omega \sqrt{\mu \epsilon} \end{equation}
\begin{equation}\label{k2} |\mathbf{k}'| = \omega \sqrt{\mu' \epsilon'} \end{equation}
\begin{equation} \mathbf{k} \cdot \mathbf{r} = \mathbf{k}'' \cdot \mathbf{r} = \mathbf{k}' \cdot \mathbf{r} \end{equation}
The laws of reflection and refraction then come out as natural consequences:
\begin{equation} |k||r| \cos \theta_{i} = |k''||r| \cos \theta_{r'} \hspace{10mm} \rightarrow \hspace{10mm}\theta_{i} = \theta_{r'} \end{equation}
\begin{equation} k \sin \theta_{i} = k' \sin \theta_{r} \end{equation}
\end{frame}
\begin{frame}
The dynamic properties are a direct result of the boundary conditions at the dielectric interface:
\begin{equation} [ \epsilon (\mathbf{E}_{0} + \mathbf{E}_{0}'') - \epsilon' \mathbf{E}_{0}' ] \cdot \mathbf{n} = 0 \end{equation}
\begin{equation} [ \mathbf{k} \times \mathbf{E}_{0} + \mathbf{k}'' \times \mathbf{E}_{0}'' - \mathbf{k}' \times \mathbf{E}_{0}' ] \cdot \mathbf{n} = 0 \end{equation}
\begin{equation} (\mathbf{E}_{0} + \mathbf{E}_{0}'' - \mathbf{E}_{0}' ) \times \mathbf{n} = 0 \end{equation}
\begin{equation} \left[ \frac{1}{\mu} ( \mathbf{k} \times \mathbf{E}_{0} + \mathbf{k}'' \times \mathbf{E}_{0}'' ) - \frac{1}{\mu'} ( \mathbf{k}' \times \mathbf{E}_{0}') \right] \times \mathbf{n} = 0 \end{equation}
\end{frame}
\begin{frame}
The key outcome is the acquisition of the Fresnel equations:
\begin{equation}\label{fresnel1} r_{p} = \frac{ n_{f} \cos \theta_{i} - n_{i} \cos \theta_{f} }{ n_{f} \cos \theta_{i} + n_{i} \cos \theta_{f}} \end{equation}
\begin{equation} t_{p} = \frac{ 2 n_{i} \cos \theta_{i} }{ n_{f} \cos \theta_{i} + n_{i} \cos \theta_{f}} \end{equation}
\begin{equation} r_{s} = \frac{ n_{i} \cos \theta_{i} - n_{f} \cos \theta_{f} }{ n_{i} \cos \theta_{i} + n_{f} \cos \theta_{f}} \end{equation}
\begin{equation}\label{fresnel2} t_{s} = \frac{ 2 n_{i} \cos \theta_{i} }{ n_{i} \cos \theta_{i} + n_{f} \cos \theta_{f}} \end{equation}
\end{frame}
\begin{frame}
\frametitle{The Transfer Matrix Method}
We imagine a stack of dielectric slabs with different properties:
\begin{align}\label{layers} n(z) =& n_{0}, \hspace{10mm} z < z_{0} \nonumber \\ & n_{1}, \hspace{10mm} z_{0} < z < z_{1} \nonumber \\ & n_{2}, \hspace{10mm} z_{1} < z < z_{2} \nonumber \\ & \vdots \nonumber \\ & n_{N}, \hspace{10mm} z_{N - 1} < z \nonumber \\ \end{align}
Note that $n$ is a function of $z$.
\end{frame}
\begin{frame}
Let's examine the relation between the forward and backward E-fields in two different layers:
\begin{align}\label{system2} E_{F}' = E_{F} e^{ i k_{zj} z } \nonumber \\ E_{B}' = E_{B} e^{ -i k_{zj} z } \end{align}
Looks a lot like a matrix equation...
\begin{equation} E' = T_{j} E \end{equation}
The two transfer matrices are then:
\begin{equation}\label{Tj} T_{j} = \begin{pmatrix} \exp i \Phi_{j} & 0 \\ 0 & \exp - i \Phi_{j} \end{pmatrix} \end{equation}
\begin{equation}\label{Tji} T_{ji} = \frac{1}{t_{ji}} \begin{pmatrix} 1 & r_{ji} \\ r_{ji} & 1 \end{pmatrix} \end{equation}
Whereas the \textit{full} transfer matrix becomes:
\begin{equation} T = T_{N(N-1)} T_{N-1} \hdots T_{32} \hspace{1mm} T_{2} \hspace{1mm} T_{21} \hspace{1mm} T_{1} \hspace{1mm} T_{10} \end{equation}
\end{frame}
\begin{frame}
Finally, our desired results!
\begin{equation} R = |r|^{2} \end{equation}
\begin{equation} T_{p} = |t|^{2} \frac{ \mathrm{Re} ( n_{f} \cos \theta_{f}^{*} ) }{ \mathrm{Re} ( n_{i} \cos \theta_{i}^{*} ) } \end{equation}
\begin{equation} T_{s} = |t|^{2} \frac{ \mathrm{Re} ( n_{f} \cos \theta_{f} ) }{ \mathrm{Re} ( n_{i} \cos \theta_{i} ) } \end{equation}
\end{frame}
\begin{frame}
\frametitle{References}
\begin{thebibliography}{1}
\bibitem{1} J. Jackson, \textit{Classical Electrodynamics} (John Wiley \& Sons, Inc., 1999).
\bibitem{2} E. Hecht, \textit{Optics} (Pearson, 2002).
\bibitem{3} M. Born and E. Wolf, \textit{Principles of Optics} (Pergamon Press, 1986).
\bibitem{4} M. Claudia Troparevsky, Adrian S. Sabau, Andrew R. Lupini, and Zhenyu Zhang, ``Transfer-matrix formalism for the calculation of optical response in multilayer systems: from coherent to incoherent interference,'' Opt. Express 18, 24715-24721 (2010).
\bibitem{5} https://pypi.org/project/tmm/
\end{thebibliography}
\end{frame}
\end{document}
{ "alphanum_fraction": 0.6432207931, "avg_line_length": 20.675900277, "ext": "tex", "hexsha": "5bc70e0d841fcaca07b8fca5dc5fc93819419c70", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "31cf58015ac1421d9f808229fc7f3c4a3b7ca9df", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ubsuny/transfer-matrix-method-final20", "max_forks_repo_path": "Tranfer Matrix Method Files/Final Presentation/Final Presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "31cf58015ac1421d9f808229fc7f3c4a3b7ca9df", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ubsuny/transfer-matrix-method-final20", "max_issues_repo_path": "Tranfer Matrix Method Files/Final Presentation/Final Presentation.tex", "max_line_length": 257, "max_stars_count": null, "max_stars_repo_head_hexsha": "31cf58015ac1421d9f808229fc7f3c4a3b7ca9df", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ubsuny/transfer-matrix-method-final20", "max_stars_repo_path": "Tranfer Matrix Method Files/Final Presentation/Final Presentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2781, "size": 7464 }
%%% Time-stamp: <mainrep.tex 19:57, 17 Jul 2016 by P Sunthar>
%%% $Log:$
% This document describes how to use iitbreport style
%********************************************************************
%\documentclass[11pt,a4paper,openright]{report}
\documentclass[twoside]{iitbreport}
%% Default spacing: 1.5
%% Default font size: 12pt
%% Default font: txfonts (similar to times new roman)
%% Selectively comment out sections that you want to be left out but
%% maintaining the page numbers and other \ref
%%% Some commonly used packages (make sure your LaTeX installation
%%% contains these packages, if not ask your senior to help installing
%%% the packages)
\usepackage{booktabs}
\graphicspath{{expt/}}
\usepackage{tikz}
\usepackage{float}
\usepackage{natbib}
\usepackage[toc,page]{appendix}
\def\undertilde#1{\mathord{\vtop{\ialign{##\crcr{} $\hfil\displaystyle{#1}\hfil$\crcr\noalign{\kern1.5pt\nointerlineskip} $\hfil\tilde{}\hfil$\crcr\noalign{\kern1.5pt}}}}}
% Math partial diff macro
%\providecommand{\pdv}[3][{}]{\frac{\partial^{#1}{#2}}{\partial{#3}^{#1}}}
%
% Referencing macros
\newcommand{\Eqref}[1]{Equation~\eqref{eq:#1}}
\newcommand{\Tabref}[1]{Table~\ref{#1}}
\newcommand{\Figref}[1]{Figure~\ref{fig:#1}}
\newcommand{\Appref}[1]{Appendix~\ref{#1}}
\begin{document}
%%********************************Frontmatter***********************
% In frontmatter everything comes with roman numbering
%\pagenumbering{roman}
\setcounter{page}{1}
%*******************************************************************
% Title Page
%*******************************************************************
\title{Boundary Layer Receptivity to Freestream Disturbances}
\author{Pawan Singh Negi ($174010003$)}
%% Print the date. Today's date comes by default, change it here to
%% other date format, if required:
%\date{\today}
%\date{10 Mar 2016}
%% The type of the report can be set here
\reporttype{A Report}
%\reporttype{A Thesis}
%\reporttype{A Dissertation}
%\reporttype{A Project Report}
%% Name of the degree
\degree{Hydrodynamic Stability Theory (AE718)}
%\degree{Master of Technology}
%% Department/Centre Name
%\dept{Department of Aerospace Engineering}
%% Supervisor and cosupervisor/excosupervisor are not essential parts
%% of a report title page, as it is your report!
%% But if you **have** to put it uncomment these
%\cosupervisor{Co-super name}
%\excosupervisor{External Supervisor}
%% Roll number
\rollnum{1}
\maketitle
\tableofcontents
%******************************************************************
% Chapters
%******************************************************************
\pagebreak
\section{Introduction}
\label{intro}
The receptivity of boundary layers concerns the cause of instability rather than its evolution. Boundary layers are influenced by freestream turbulence, surface roughness, sound, etc.; thus there is no mathematical model that can predict the transition Reynolds number on a flat plate. Since linear stability methods are initial-condition dependent, and since boundary layers are convectively unstable, i.e. an unsteady disturbance is required to generate the instability wave, emphasis was shifted to the source of the instabilities in boundary layers.
\section{Paths of Transition}
The process by which freestream disturbances enter the boundary layer as steady or unsteady fluctuations is called receptivity, a term coined by \citet{Morkovin1969}. The initial flow gives us information such as the amplitude, frequency and phase of the disturbance.
In \Figref{path} the amplitude of the disturbance coming from the free stream increases from left to right.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{path.png}
\caption{Different possible paths for transition}
\label{fig:path}
\end{figure}
These different instabilities may occur independently or together; the appearance of a particular type of instability depends on the Reynolds number, wall curvature, sweep, roughness and initial conditions. Path A is followed by weak disturbances; the initial growth of these can be described by the linear stability theory of normal modes. The growth is weak and occurs over a long streamwise length. As the amplitude grows, 3D and nonlinear interactions begin to occur in the form of secondary instability. Linear stability theory can predict the transition in this case, given that the free stream has weak disturbances. However, the freestream can have strong disturbances which bypass the growth of linear disturbances. This refers to path E, for which the phenomenon is yet to be explored. Bypass refers to a transition process whose initial growth is not described by the primary modes of the Orr-Sommerfeld equation. Transient growth occurs when two nonorthogonal, stable modes interact, undergo algebraic growth, and eventually decay exponentially. It has been shown that large amplitudes can be achieved through transient growth when the boundary layer is provided with an appropriate initial condition, the spectrum of which depends on receptivity. Generally, the amplitude and spectral characteristics of the disturbances inside the laminar viscous layer strongly influence the path taken to turbulence.
\section{Tollmien-Schlichting (T-S) waves}
The Orr-Sommerfeld equation under the parallel flow approximation,
\begin{equation} (U-c)\left(\phi^{\prime \prime}-\alpha^{2} \phi\right)-U^{\prime \prime} \phi=-\frac{i}{\alpha \operatorname{Re}}\left(\phi^{\prime \prime \prime \prime}-2 \alpha^{2} \phi^{\prime \prime}+\alpha^{4} \phi\right), \label{eq:os} \end{equation}
has a special case when $Re \to \infty$, called the Rayleigh equation, given by
\begin{equation} (U-c)\left(\phi^{\prime \prime}-\alpha^{2} \phi\right)-U^{\prime \prime} \phi=0. \label{eq:ra} \end{equation}
For a boundary layer to be unstable in the inviscid approximation, it must satisfy the Rayleigh criterion, which follows from \Eqref{ra}. That is, $D^2U = 0$ somewhere in the flow, where $D$ denotes the $y$ derivative and $U$ is the mean velocity profile. In other words, the velocity profile must have an inflection point to be unstable. As the flat-plate boundary layer profile has no inflection point, it should be unconditionally stable by this criterion. However, from experience it is evident that boundary layers are unstable; thus the inviscid approximation is not sufficient to predict instabilities in boundary layers. It can be shown using the energy method that
\begin{equation} \frac{D E}{D t}=-\int_{V} u^{\prime} v^{\prime}\left(\frac{d U}{d y}\right)-\frac{1}{R} \int_{V}\left(\nabla \vec{v}^{\prime}\right)^{2}. \label{eq:energy} \end{equation}
The second term on the right-hand side is a viscous dissipation term and is stabilizing. The first term, involving the Reynolds stress, is the primary term for instability growth. However, in viscous flow $u^{\prime}$ and $v^{\prime}$ are non-orthogonal, due to which viscosity becomes destabilizing; this is the reason for the formation of Tollmien-Schlichting (T-S) waves.
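For reference, \Eqref{os} and \Eqref{ra} above assume the standard two-dimensional normal-mode form for the disturbance streamfunction (stated here for completeness, since $\phi$, $\alpha$ and $c$ are otherwise left undefined):
\begin{equation}
\psi^{\prime}(x, y, t) = \phi(y)\, e^{i \alpha (x - c t)}, \qquad c = c_{r} + i c_{i},
\label{eq:normalmode}
\end{equation}
so that $\phi(y)$ is the complex amplitude, $\alpha$ the streamwise wavenumber and $c$ the complex phase speed; a mode grows in time when $\alpha c_{i} > 0$.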
As described by \citet{Majumdar1996}, the essential kinematic mechanism of the T-S wave can be described in terms of the interaction of two ``partial modes'' of the system. A system like a boundary-layer flow can be separated into stable waveguides, each of which supports a neutral wave in isolation, and in which a wave in one waveguide may propagate in the opposite direction to that in the other. From an energetic viewpoint the waves may have positive or negative energy, and they are the ``partial modes'' of the complete system. In boundary layer flows the inviscid modes and the viscous decaying modes are the two partial modes. Their interaction can be characterised as follows.
\begin{enumerate}
\item If one equates the speed of the inviscid partial mode with that of the most weakly damped viscous partial mode in uniform shear, one obtains \Figref{6}, which implies that the instability derives from the interaction between the modes in some way.
\begin{figure}[h!]
\centering
\includegraphics{6.png}
\caption{The stability curve for a Blasius-like profile}
\label{fig:6}
\end{figure}
\item The mutual forcing must be strong enough to overcome the inherent damping due to viscosity.
\item The structure of the eigenfunction (\Figref{8}) for growing modes in a Blasius-like profile resembles a superposition of an inviscid partial mode plus the viscous forced response.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{8.png}
\caption{Eigenfunction for a Blasius-like profile}
\label{fig:8}
\end{figure}
\end{enumerate}
\Figref{tswave} shows the flow over a flat plate. From point 1 to point 2 a stable laminar flow is established that starts from the leading edge and extends to the point of inception of the unstable two-dimensional Tollmien-Schlichting waves.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{xyz.png}
\caption{The T-S wave on a flat plate}
\label{fig:tswave}
\end{figure}
\section{Receptivity Theory}
Receptivity concerns the generation of instabilities, rather than their evolution \cite{saric2002boundary}. An unsteady disturbance is required to generate an instability in the boundary layer as it is convectively unstable. The unsteady disturbance can be naturally occurring or forced. A forced disturbance has a broad wave spectrum, and thus can be used to excite a particular mode appropriately. However, naturally occurring disturbances have concentrated wavenumbers, which differ from the instability wavenumber, and thus they require a wavelength conversion process. Natural receptivity occurs where the mean flow changes rapidly in the streamwise direction, invalidating the parallel flow assumption of the OSE. The region where natural receptivity occurs can be separated into two classes, namely:
\begin{enumerate}
\item the leading edge region, where the boundary layer is thin;
\item the downstream region, where some local feature causes the mean flow to adjust on a short streamwise length scale (e.g. wall humps, suction strips).
\end{enumerate}
\subsection{Leading-edge Receptivity Theory}
The Reynolds number is assumed to be large so that the outer problem corresponds to the inviscid interaction of the small-amplitude freestream disturbance with the body. The asymptotic structure for the unsteady viscous motion in the boundary layer contains two distinct streamwise regions.
In the leading-edge region, where $x 2 \pi f /U_\infty = O(1)$, the motion satisfies the linearized, unsteady, boundary-layer equation (LUBLE)
\begin{equation} u_{t}^{\prime}+U u_{x}^{\prime}+V u_{y}^{\prime}+u^{\prime} U_{x}+v^{\prime} U_{y}=-p_{x}^{\prime}+\left(1 / R_{L}\right) u_{y y}^{\prime}. \label{eq:LUBLE} \end{equation}
The LUBLE contains the terms $u^{\prime} U_{x}$ and $V u^{\prime}_{y}$, which do not appear in the OSE. Downstream from the leading edge, where $x 2\pi f /U_\infty = O(\epsilon_{LE}^{-2})$, a consistent approximation leads to the classical large-Reynolds-number, small-wavenumber approximation to the OSE. The asymptotic matching of these two regions showed that the first Lam-Rott asymptotic eigenfunction of the LUBLE, with coefficient $C_1$, matches onto the T-S wave that becomes unstable farther downstream in the OSE region. The receptivity due to small-amplitude acoustic waves impinging obliquely on the leading edge of a semi-infinite flat plate is analyzed by separating the incident acoustic field into components parallel and perpendicular to the plate surface. Near the leading edge, this scattered component has a square root singularity corresponding to inviscid flow around the sharp edge. Receptivity due to scattering of the normal component of the incident acoustic wave is particularly important at low Mach numbers. The bodies of interest for practical applications generally have parabolic or elliptical leading edges and an asymmetric mean flow owing to the presence of aerodynamic loading. In the absence of aerodynamic loading and for a parallel acoustic wave, $|C_1|$ first rises slightly (as $S = r_{n} 2 \pi f / U_\infty$ is increased) and then falls monotonically, to 15\% of the flat-plate value at $S = 0.3$. This behavior appears to be related to the favorable pressure gradient near the nose of the parabola. For obliquely incident acoustic waves, the finite nose radius weakens the influence of leading-edge scattering. However, as the aerodynamic loading is increased toward its limiting value for attached flow, a strong rise in the receptivity coefficient occurs. An attractive feature of the leading-edge receptivity coefficient is that it is independent of the frequency, $f$.
\subsection{Downstream Region Receptivity}
Localized receptivity in the downstream region is caused by the interaction of disturbances with short-scale variations in surface geometry. An asymptotic analysis for localized receptivity, utilizing the triple-deck structure, shows that the viscous flow in the lower deck adjacent to the wall is governed by the LUBLE. Hence, the short-scale nonparallel mean-flow effects are again responsible for the transfer of energy from the wavelength of the freestream disturbance to that of the instability wave. The receptivity arises owing to nonparallel mean-flow effects, which are expressed in terms of a perturbation series with respect to the amplitude of the wall inhomogeneity. The OSE approach is only applicable for roughness elements of height $h/L \ll R^{-5/8}_{L}$, whereas the triple-deck analysis remains valid when $h/L = R^{-5/8}_{L}$. In the small height limit, the triple-deck equations can be solved analytically, whereas the OSE approach requires a numerical solution.
\section{Computational Methods}
For estimation of T-S wave modes, the spatial direct numerical simulation (DNS) approach is widely applicable because it avoids many of the restrictions that must usually be imposed in other models and is the closest to emulating experiments.
For example, no restrictions with respect to the form or amplitude of the disturbances have to be imposed, because no linearizations or special assumptions concerning the disturbances have to be made. With the spatial computational method, finite curvature can be included in the leading-edge region. Experimentally, the most popular model geometry for receptivity has been the flat plate with an elliptic leading edge. Thus it is reasonable that computational models consider the same geometry. However, the curvature at the juncture between the ellipse and the flat plate is discontinuous and provides a source of receptivity. Receptivity results can be expressed either in terms of (a) a leading-edge receptivity coefficient, defined as the ratio of the T-S amplitude in the leading-edge region at $x = O(U_{\infty} /2 \pi f )$ to the freestream-sound amplitude, or (b) a Branch I receptivity coefficient, defined as the T-S amplitude at Branch I normalized with the freestream-sound amplitude. The appropriate receptivity coefficient is $K_{LE}$ because it is based strictly on local properties of the leading-edge region, whereas $K_I$ depends on the pressure gradient history from the leading edge to Branch I. The characteristic length scale for freestream spanwise vorticity is the convective wavelength $U_{\infty} /2 \pi f$, which is approximately three times that of the amplified T-S wave at that frequency. A simple model of time-periodic freestream spanwise vorticity was introduced at the upstream computational boundary. This signal was decomposed into symmetric and asymmetric streamwise velocity components with respect to the stagnation streamline. The effect of a transverse-velocity component at the leading edge could be ascertained, as the asymmetric-velocity case had this feature, whereas the symmetric-velocity case did not. The complete integrated picture of geometry and associated pressure gradients (both favorable and adverse) must be included in any meaningful evaluation of receptivity; thus DNS analysis is favourable.
\section{Experiments for Receptivity Quantification}
The typical model for boundary-layer experiments is the zero-pressure-gradient flat plate. In most cases, the flat plate is preceded by an elliptical leading-edge attachment. The coupling between the long-wavelength acoustic disturbance and a T-S wave occurs in four regions: the leading edge, the discontinuity in surface curvature at the flat-plate/leading-edge junction, the presence of localized pressure gradients, and any surface inhomogeneities such as roughness, suction slots, or filler material used for the flat-plate/leading-edge gap.
\textbf{LARGE AMPLITUDE T-S WAVE} - When an external sound field is used as a source of disturbance energy, the boundary-layer measurement at a particular frequency will contain probe vibrations and a sound-wave component (Stokes layer) in addition to the T-S wave. If these signals are of comparable amplitude, one cannot extract the T-S amplitude without some special separation technique.
\textbf{REMOVABLE RECEPTIVITY SOURCE} - The simplest solution is to take advantage of the exponential growth of the T-S wave and measure far enough downstream from the receptivity source.
\textbf{KENDALL GAUGE} - It senses wall pressure fluctuations at two pressure ports spaced at approximately half the T-S wavelength. With a distribution of pressure-port pairs, one obtains immediately the spatial behavior of the T-S wave. This method is recommended in cases of freestream turbulence.
\textbf{COMPLEX PLANE RESOLUTION} - Taking advantage of the fact that the acoustic wavelength is two orders of magnitude larger than the T-S wavelength, polar plots are used to separate the long-wavelength Stokes wave from the short-wavelength T-S wave.
\textbf{RESOLUTION OF DUCT ACOUSTICS} - The downstream traveling wave reflects in the diffuser and returns an upstream traveling wave, giving a standing wave pattern in the test section. This is not a problem for localized receptivity sites because one need only measure the freestream amplitude at the local position.
\textbf{PULSED-SOUND TECHNIQUE} - The technique uses pulsed sound and is simple, effective, and lends itself to understanding the behavior of the T-S wave. From linear theory, the maximum of the T-S wave propagates at approximately one third of the freestream speed. Using this fact, the traveling T-S wave can be isolated from the acoustic disturbance and associated Stokes wave by sending bursts of sound into the test section.
\bibliography{ref}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7672962763, "avg_line_length": 48.7631578947, "ext": "tex", "hexsha": "5764c5814950f9cc72c7961be0d93d39a245c72b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4ca492f722bc00a08892e40d648c16537c5bd050", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "psnbaba/hyd_instability", "max_forks_repo_path": "report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4ca492f722bc00a08892e40d648c16537c5bd050", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "psnbaba/hyd_instability", "max_issues_repo_path": "report/report.tex", "max_line_length": 123, "max_stars_count": null, "max_stars_repo_head_hexsha": "4ca492f722bc00a08892e40d648c16537c5bd050", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "psnbaba/hyd_instability", "max_stars_repo_path": "report/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4477, "size": 18530 }
% !TEX TS-program = pdflatex % !TEX encoding = UTF-8 Unicode % This is a simple template for a LaTeX document using the "article" class. % See "book", "report", "letter" for other types of document. \documentclass[11pt]{report} % use larger type; default would be 10pt \usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX) %%% Examples of Article customizations % These packages are optional, depending whether you want the features they provide. % See the LaTeX Companion or other references for full information. %%% PAGE DIMENSIONS \usepackage{geometry} % to change the page dimensions \geometry{a4paper} % or letterpaper (US) or a5paper or.... \geometry{margin=1.75cm} % for example, change the margins to 2 inches all round % \geometry{landscape} % set up the page for landscape % read geometry.pdf for detailed page layout information \usepackage{graphicx} % support the \includegraphics command and options \DeclareGraphicsExtensions{.png,.pdf,.jpg,.mps} % \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent \usepackage[english]{babel} \usepackage{wrapfig} \usepackage{float} \usepackage{braket} \usepackage{esvect} \usepackage{commath} \usepackage{amsmath} %%% PACKAGES \usepackage{booktabs} % for much better looking tables \usepackage{array} % for better arrays (eg matrices) in maths \usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.) \usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim \usepackage{subfig} % make it possible to include more than one captioned figure/table in a single float % These packages are all incorporated in the memoir class to one degree or another... %%% HEADERS & FOOTERS \usepackage{fancyhdr} % This should be set AFTER setting up the page geometry \pagestyle{fancy} % options: empty , plain , fancy \renewcommand{\headrulewidth}{0pt} % customise the layout... \lhead{}\chead{}\rhead{} \lfoot{}\cfoot{\thepage}\rfoot{} %%% SECTION TITLE APPEARANCE \usepackage{sectsty} \allsectionsfont{\sffamily\mdseries\upshape} % (See the fntguide.pdf for font help) % (This matches ConTeXt defaults) %%% ToC (table of contents) APPEARANCE \usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC \usepackage[titles,subfigure]{tocloft} % Alter the style of the Table of Contents \renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape} \renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold! \usepackage[none]{hyphenat} \setcounter{secnumdepth}{4} %%% END Article customizations %%% The "real" document content comes below... \title{\Huge Second year PhD status report} \author{\LARGE Thomas Karl Warburton\\ \\ \normalsize Supervisor:\\Dr. Vitaly Kudryavtsev} %\date{} % Activate to display a given date or no date (if empty), % otherwise the current date is printed \begin{document} \renewcommand{\thesection}{\arabic{section}} \renewcommand\bibname{References} %\renewcommand\abstract{\Large\textbf{Abstract}\\ \normalsize} \maketitle \tableofcontents \newpage \section*{Abstract} A review of the progress made in the past year. Progress has been made in simulations for the 35 ton, where calorimetry has been tuned, tracking algorithms have been benchmarked, photon detector track matching been implemented and the initial workings for a proton identification method have been developed. 
Work has also been carried out on preparing a camera system for installation in the 35 ton, as well as the incorporation of two muon generators into LArSoft.
\section{Introduction to LArSoft}
All simulation work has been done in LArSoft, which is a simulation, analysis and reconstruction package for Liquid Argon (LAr) Time Projection Chambers (TPC's) and is being used by many of the LAr-TPC's in operation, construction and planning within the US program \cite{Church_LArSoft}. LArSoft has been developed to be detector agnostic, meaning that it can be used to simulate any detector geometry. To this end it is envisioned that it will be used as a platform for constant development for existing experiments through to those still in the planning phases such as DUNE. \\
LArSoft is built around the Fermilab-supported Analysis Reconstruction Framework (ART), which allows the full sequence of simulation, reconstruction and analysis to be built up for either a single event or a collection of events. External packages such as ROOT and GEANT4 are also incorporated into LArSoft, meaning that the user does not have to co-ordinate specific versions of the packages which they want to use, as the newest versions are automatically incorporated. \\
There are numerous mechanisms by which particles can be generated within the software. External packages which have already been incorporated into the software are GENIE, NuANCE and CRY. It is also possible to load in pre-made particle files made by user defined modules, or to use the inbuilt single particle generation mode. The inbuilt particle generation is fully tunable as the momenta, positions and direction can be varied to encompass all space, along with the distribution of these quantities. For the studies presented here a combination of both CRY and user defined generation events have been used.\\
The co-ordinates and angles in LArSoft are defined as:
\begin{itemize}
\item X - The beam direction (for the 35 ton prototype where there is no beam, positive x is in the opposite direction to that which electrons in the large TPC's drift),
\item Y - The vertical direction,
\item Z - Defined as such to have a right handed co-ordinate system.
\item $\theta$ - Angle between.
\item $\phi$ - Angle between.
\end{itemize}
\section{Tuning of calorimetry}
Correctly calculating the behaviour of different particles in the detector is an essential part of the simulation work for the 35 ton; therefore it is vital that the calorimetric corrections made are correct. One way of ensuring this is to make sure that the 'Minimally Ionising Particle' (MIP) peak for muons has a \( \frac{dE}{dx} \) value of 2.1 MeV cm\(^{-1}\). \\
To do this, samples of 10,000 Anti-Muons have been simulated and reconstructed whenever there has been a change to the simulation framework. Using a modified box model \cite{ModBox}, corrections are made to the measured \( \frac{dQ}{dx} \) on the wires to calculate a value for \( \frac{dE}{dx} \). Before this is done however, a conversion of ADC counts to number of electrons is made. This correction is crucial: when the detector properties or conditions change, the number of electrons per ADC count is likely to change, and for all physics results it is essential to know the number of electrons which gave the observed effect.
\section{Benchmarking of tracking algorithms}
Knowledge of the strengths and weaknesses of different tracking algorithms is vital when using them for physics analyses.
To this end it is beneficial to develop a module which calculates and compares the efficiencies with which tracks are matched. Efficiencies are calculated for two simulated samples, an idealised sample of primary Anti-Muons and a more realistic CRY sample, both of which are outlined below. \\
The muons used in the idealised sample have cosmic ray like properties, whereby they have no minimum or maximum energy. The angular distribution of the muons follows a $\cos^2$ distribution and the positions follow a flat distribution in the X and Z directions, with a constant starting position in Y above the cryostat. \\
In the more realistic CRY sample, particles are generated from a time of -1.6 ms to 16 ms, which corresponds to a total of 11 drift windows, where a drift window is defined as the time it takes for an electron to drift from the Cathode Plane Assembly (CPA) to the Anode Plane Assembly (APA). This corresponds to a time of 1.6 ms for an electric field of 500 V cm\(^{-1}\), which is the nominal running field of the 35 ton phase II. A period of 10 drift windows or 'milliblock' was used as data from the 35 ton prototype is proposed to be read out in this way. Particles are also generated 1 drift window before the start of the milliblock as the electrons these particles produce could still be in the detector if they have not yet drifted to the APA's, as will be the case for real data. An important consideration for particles which enter the detector before T = 0 but cause tracks after T = 0 is that their initial positions in the cryostat have to be corrected in order to reflect their Monte Carlo positions at T = 0, so as not to calculate a Monte Carlo length which is not reconstructable.
\subsection{Matching tracks with Monte Carlo information}
In order to match the Monte Carlo particle which induced a reconstructed track to that track, a new data product was created. This required summing the charge deposited by each Monte Carlo particle at each reconstructed hit and then summing the charges deposited over all the hits assigned to the given track. The track is then matched to the Monte Carlo particle which contributed the most charge to the total charge collected in the track. \\
This means that the reconstructed quantities can then be compared to Monte Carlo quantities, so the accuracy of reconstruction can be measured. For example, it can be calculated how accurately the position at which the particle entered the active volume was reconstructed. This is a very important development for the simulation machinery.\\
This matching also solves a difficulty in reconstructing events with a large drift time through the identification of an interaction time. The reconstruction algorithms have to assume that the time at which charge is deposited on the APA's is related to the X position of where the ionisation electrons were produced. As a result, X positions of over 20 m can be reconstructed for events at the end of a milliblock sample. These large X positions obviously need to be corrected before analysis is performed, and using Monte Carlo truth to correct these positions is a clearly viable option for simulated data.
\subsection{Calculating reconstructed efficiencies}
Accurately defining the metric used to measure efficiency is vital, as differing definitions could produce significantly different results. One such example is outlined below.
\\
Suppose a particle travels 100 cm in the active volume of the detector, but is reconstructed as 2 separate tracks (tracks 1 and 2), with lengths 77 cm and 23 cm respectively. One metric of efficiency could be whether the particle is reconstructed with a track length between \( 75\% \) and \( 125\% \) of the actual length traversed in the detector, in which case track 1 would be considered well reconstructed. One could however define efficiency as whether the distance traversed by the particle is between \( 75\% \) and \( 125\% \) of the reconstructed length, in which case neither track would be considered well matched. Both scenarios have used exactly the same tracks and a seemingly identical method of considering whether a track is well reconstructed or not, but have produced opposite results. As such, it would be unfair to say which consideration gave the correct result; instead the result of each should be considered independently. \\
For this study the first scenario is used, where a track is considered well reconstructed if it matches a well defined list of criteria:
\begin{itemize}
\item Reconstructed track length is more than or equal to \( 75\% \) of the Monte Carlo track length.
\item Reconstructed track length is less than or equal to \( 125\% \) of the Monte Carlo track length.
\item Only one reconstructed track can be matched per Monte Carlo particle.
\end{itemize}
Reconstructed efficiencies have been calculated for a number of particle sets. The most general of these sets considers all charged particles. A subset of this general set can also be considered so as to only consider certain particle types, such as muons or protons. \\
When calculating efficiencies it is important to consider much more than just the ratio of reconstructed to true track length. To this end, efficiencies with regard to many aspects of the tracks are calculated:
\begin{itemize}
\item Track length
\item Energy deposited in the active volume of the detector
\item The angle $\theta$ of the track
\item The angle $\phi$ of the track
\end{itemize}
In all efficiency plots the Monte Carlo truth quantity, not the reconstructed quantity, is shown so as to reflect how the variations of these quantities affect the reconstruction efficiencies. \\
Two sets of efficiencies are calculated for each reconstruction algorithm: one where reconstruction is performed as if the data were 'real', whereby no Monte Carlo truth information is used, and a second whereby Monte Carlo information is used to 'cheat' the hit disambiguation. This offers the chance to see the effect which disambiguation has on reconstructing tracks. As expected, cheating disambiguation improves the efficacy of the algorithms as it is less likely that hits will be incorrectly reconstructed into the clusters used to seed the reconstructed tracks. Unless otherwise stated, the efficiencies referred to are those associated with the 'full' (non-cheated) reconstruction.
There is however a significant drop in efficiency for tracks of length $\sim$195 cm; PMTrack for example exhibits a $\sim$30$\%$ drop in efficiency. This track length is approximately the vertical height of the detector, and so any muons which travel vertically down (a large majority in a cosmic sample) would have this track length. This would cause all of the ionisation electrons to be incident on a single collection plane wire and thus make it very difficult to disambiguate hits on the other two planes. This is shown through comparison with the efficiency for cheated disambiguation, where this drop is significantly less, $\sim$10$\%$ for PMTrack. \\
SHOW THE LENGTH PLOTS HERE! \\
It is also interesting to note the effect that varying the angles $\theta$ and $\phi$ has on the efficiencies of the different reconstruction algorithms. This also offers verification that the drop in efficiencies at this track length is due to vertical muons and not merely a coincidence, as a similar drop in efficiency is also observed for $\theta$ $=$ $\frac{\pi}{2}$. \\
SHOW THE THETA VERSUS PHI PLOT COMP PLOT HERE \\
Efficiencies for the idealised simulation will continue to be calculated as they are useful for identifying the absolute efficacy of reconstruction algorithms. However, more focus is placed on understanding the patterns in the milliblock sample as this sample more closely resembles what the data will look like, as outlined above. \\
Initially when the milliblock sample was analysed the efficiencies were significantly lower, as shown below. Only the efficiency as a function of length is shown, to illustrate that this is not a function of the milliblock having significantly more short, difficult particles to reconstruct but is in fact consistent for all track lengths. This decrease was only a significant problem when disambiguation was not cheated, which led to the realisation that disambiguation was only selecting the largest cluster in a given TPC. This had much more of an effect in the CRY sample, where multiple particles enter the detector producing tracks, than in the idealised sample where only a single particle enters the detector. \\
SHOW THE INITIAL MILLIBLOCK EFFICIENCY BEFORE DISAMBIG CORRECTION!!! \\
To correct this the disambiguation algorithm was restructured so as to select only clusters which were cleanly separated in channel and time space. Where a smaller cluster (determined by the number of hits) was fully contained within a larger cluster, only the larger cluster was carried forward, as it is assumed that this smaller cluster is due to misdisambiguated, fake hits. Following this improvement the efficiencies became comparable to the idealised efficiencies, as shown below. \\
A slightly lower reconstruction efficiency is to be expected for the milliblock sample compared to the idealised sample due to its added complexity from having multiple primary particles at random times, where co-incidence is likely. One feature which was not expected however is the decrease in efficiency which Pandora exhibits for tracks above 250 cm that is not repeated for the other algorithms. \\
SHOW THE MILLIBLOCK LENGTH PLOT \\
Hand-scanning of events showed that this was due to tracks being stitched across the APA's at large times. As noted above, tracks at large times have large X offsets due to the reconstruction algorithms' assumption that the ionisation electrons were produced at T = 0.
Stitching tracks across the APA's at large times means that this X offset is doubled, causing tracks which are well reconstructed in Y and Z to have a very large value of $\Delta$X, which causes the track length to be much more than 125\% of the Monte Carlo track length. The other algorithms do not do this as they check the change in X between TPC's, as well as the co-linearity in YZ. This is shown below, where a muon produced with a small time offset (T = 0.5 ms) crosses the APA and is stitched by Pandora but not by PMTrack. The stitching causes a large change in X, meaning the track length is significantly increased, whilst the two tracks produced by PMTrack are not stitched and remain as two separate tracks. A drop in efficiency is not seen when the tracks are not stitched because, for a long track, a significant portion of the track (over 75\%) is likely to be in a single drift volume (usually the long drift volume), and only a small segment will be in the other drift volume (short drift volume). When this is the case, since only the relative length is considered when the efficiency is calculated, the long track will be considered 'well matched' even though the piece of track in the other drift volume was not stitched. \\
These stitched tracks which are considered badly reconstructed are also evident in the $\theta$ vs $\phi$ plot shown below, where a large region of .................... space has a much lower efficiency for Pandora than for PMTrack or Cosmic Tracker. \\
SHOW THETA VS PHI MILLIBLOCK PLOT! \\
\section{Identifying interaction times from photon detectors}
In the 35 ton phase II design, photon detectors are incorporated in the APA's as a means of assigning an interaction time (T0) to reconstructed tracks. Once a T0 is known, the X position of the track can be corrected using the hit time minus the flash time, multiplied by the electron drift velocity. Therefore, developing the code which evaluates the performance of these associations is another important step in preparing the detector for data taking. This also involved the first example of combining the TPC and photon detector simulations, which until this time had been totally separate. \\
An important step in this incorporation was adding the photon detector information to the LArSoft event display, most specifically the view in which the track positions are shown in the XZ and YZ planes. This is because when the flash is reconstructed the likely location of its origin is calculated, and so being able to visually compare this position with track trajectories is particularly useful. \\
SHOW A SINGLE MUON TRACK WITH FLASH CLOSE TO IT!!! EVENT DISPLAY!!!\\
A similar method to that utilised in developing the tracking efficiencies was utilised in matching flashes with tracks, whereby an idealised simulation of single anti-muons was initially used before extension to a CRY milliblock sample. Following hand-scanning of the anti-muon events, two metrics for matching were established. \\
The strongest metric for matching was identified following the addition of the photon detector information to the event display, as it was observed that flashes are very well reconstructed in the YZ plane. This shows the excellent work performed by Alex Himmel (Fermilab) and Gleb Sinev (Duke) who led the photon detector reconstruction efforts.
It was determined that using the Point of Closest Approach (PoCA) between the track space points and the flash centre was the best method of matching a flash to the track which caused it. Below, this is compared to using the distance between the track and flash centres, and to the distance between the flash centre and a line constructed between the track start and end points. The PoCA performs best because many tracks are not straight, and so stepping through the track space points is required. \\ COMPARE THE THREE WAYS OF MATCHING YZ SEPARATION HERE!!! \\ A second metric, using the relationship between the number of photoelectrons collected in a flash and the true X position of the flash, is also used. If two flashes have the same properties but one is produced further away from the photon detectors, one would expect the detected photons from the more distant flash to be more diffuse and so to produce fewer photoelectrons. A fit of this relationship allows the X position of a flash to be predicted given the number of photoelectrons collected. The time separation between the flash and the TPC hits gives a second, independent prediction of the X position. The difference between these two predicted X positions is used as the metric. \\ SHOW THE PE vs X PLOT!!! \\ Using these two metrics it is possible to attempt to match a flash with a track. This is done by selecting tracks which occur within one drift window of the flash, and minimising the sum of the two metrics in quadrature. It is possible to weight the quantities to reflect that the $\Delta$YZ metric is more trusted than the $\Delta$X quantity. It is also possible that the absolute light levels and efficiencies of the photon detectors are being incorrectly simulated, in which case the $\Delta$X quantity may not be used, at least initially. \\ SHOW THE TWO METRIC PLOT!!! \\ Simulations show that this method successfully matches tracks and flashes to within 1 ms a large proportion of the time. This is shown by comparing the Monte Carlo truth time of the particle which caused the track with the flash time. This comparison also shows the simulated electronics response of the photon detectors to be 0.5 ms, as all successful matches have this offset. \\ SHOW THE MCTRUTH vs FLASH TIME, WHOLE AND ZOOM!!! \section{Development of a proton identification method} Before proton identification can be performed using real data, it is vitally important that a thorough simulation is performed to assess both the viability of the study and its proposed method. This has been performed using the large sample of CRY events used to measure tracking efficiency and to match photon flashes and tracks. \\ A preliminary study was performed to assess the predicted number of protons which would be seen per hour of data taking in the 35 ton detector. This was done by generating a CRY sample of length 1 hour and counting the numbers of given particles which entered the active volume of the detector after the GEANT4 stage. This showed that roughly 40,000 protons would be produced per hour compared to almost 3 million muons; however, when only stopping particles were considered the numbers became much more similar, as almost all muons above 2 GeV are MIPs. \\ Subsequently, a more refined estimation of the reconstructed proton flux was performed. The reconstruction efficiency of the tracking algorithms for protons only was calculated, so as to evaluate the number of well reconstructed protons that can be expected per hour of data taking.
It should be noted that when correcting the X position of the track, and also when performing calorimetric corrections, the photon detector T0 is used. \\ SHOW THE PROTON LENGTH and THETA vs PHI EFFICIENCIES!!! \\ As PMTrack gives the best efficiencies, the further results shown are acquired using this reconstruction algorithm. ANY FURTHER DISCUSSION OF PLOTS? \\ Estimating the number of protons in the data is important; however, they are still significantly outnumbered by muons in cosmic ray showers. Therefore it is necessary to establish the mechanism by which a proton sample can be separated from a cosmic sample. One of the main mechanisms by which particle identification is performed in Liquid Argon (LAr) is the relationship between $\frac{dE}{dx}$ and residual range. This relationship is particularly powerful near the end of the track, as described in \cite{PIDA}, where a theoretical power-law dependence is used to identify protons in ArgoNeuT. Equation \ref{eq:PIDA_eq} shows the power law, and the predicted $\frac{dE}{dx}$ versus residual range for different particles is shown in Figure \ref{fig:PIDA_Baller_MC}. \begin{equation} \label{eq:PIDA_eq} \frac{dE}{dx}_{hyp} = A R^b \end{equation} \begin{figure}[h] \centering \includegraphics[width=9cm]{PIDA_Pred_Baller.png} \caption{Plot of the $\frac{dE}{dx}_{hyp}$ versus residual range for different particles \cite{PIDA}.} \label{fig:PIDA_Baller_MC} \end{figure} Figure \ref{fig:PIDA_Baller_MC} shows that there is a weak dependence on b, but a strong dependence on A. This means that, by setting b = -0.42 and calculating A for each spacepoint, it is possible to calculate an average A for a track. This average is referred to as PIDA, and its value for different particles is shown in Figure \ref{fig:PIDA_Baller_PIDA}. \\ \begin{figure}[h] \centering \includegraphics[width=9cm]{PIDA_Baller_PIDA.png} \caption{Plot of the calculated PIDA for different particles, calculated with Monte Carlo Truth information \cite{PIDA}.} \label{fig:PIDA_Baller_PIDA} \end{figure} PIDA can only be calculated for stopping particles, as residual range calculations require a particle to stop within the detector. There are two methods by which stopping particles can be selected: either by cheating (using Monte Carlo information) or by using reconstructed data. Only the former has been done thus far; work on using reconstructed information will continue in parallel with other work. \\ A filter is applied to reconstructed Monte Carlo data to select only particles which stop in the detector. This should remove the MIP peak from particles which do not stop in the detector, and leave only tracks which terminate where the particle stops. However, as alluded to where tracking efficiencies were calculated, one can imagine the case where a long muon stops in the detector but is reconstructed as two separate tracks. In this case one of the partial tracks would end whilst the muon is still a MIP, and so the $\frac{dE}{dx}$ versus residual range of this track would not increase as the track ends. This results in there still being a MIP peak in the cheated data sets, considered below. \\ \section{Incorporating muon generators} Two muon generators have been incorporated into LArSoft for use by both the DUNE collaboration and the wider community which uses LArSoft. These are a muon generator for surface fluxes developed by Joel Klinger, referred to as Gaisser's parameterisation module, and a muon generator for underground fluxes developed by Vitaly Kudryavtsev, called MUSUN.
The details of both are outlined below. \subsection{Gaisser's parameterisation module} \subsection{MUSUN} \section{Status of camera modules in the 35 ton prototype} Following work done in the previous year, the camera system was sent to Fermilab to await installation into the 35 ton cryostat. To comply with cleanliness standards, full cleaning of all components using an ultrasonic cleaner and alcohol was required. This was performed in November, along with a successful test installation of one camera onto one of the cryostat pipes, where it was identified that the method for securing the modules was not adequate. Technicians at Fermilab agreed to use another method to attach them, which they assured would be sufficient for the modules. \\ The cold and warm cabling was also constructed and labelled in preparation for its installation into the cryostat. Prior to installation it was necessary to build the rack which would house the components for the monitoring system and to check that all the connections were still functional. This required writing the documentation for unattended use at Fermilab, which was granted in April. \\ At the time of writing, all documentation has been produced and the camera system has been transferred to PC4, where the 35 ton resides, and is awaiting installation within the coming weeks. Once installed, the camera system will look at high voltage (HV) breakdown in the cryostat. \newpage \section{Thesis plan} \begin{itemize} \item Theory \subitem Neutrino oscillations \subitem Status of knowledge from experiments \subitem Liquid Argon detectors \item Outline of LArSoft \subitem Structure and vision \subitem Geometries \subitem Simulations and analyses \item The Deep Underground Neutrino Experiment (DUNE) detector \subitem Goals and status \subitem Modular design \item The 35 ton prototype \subitem Phase I \subitem Phase II \item Service work for Phase II of the 35 ton prototype \subitem Simulations \subitem Shifts \subitem Hardware \item The 35 ton camera system \item Proton identification in the 35 ton prototype \subitem Basis for identification \subitem Simulations \subitem Identification from data \item Cosmic background for the DUNE far detector \subitem Cosmic background for the LBNE surface detector \subitem Incorporation of generators into LArSoft \subitem Description of simulations and calculated backgrounds \item Other work - proton decay? \end{itemize} \section{Thesis timetable} Theory - Attendance at INSS will prove invaluable, as many of the topics which will be in this section were covered there. In addition, a reworking of the theory I covered during my literature review last year will provide an excellent base from which to start. I would imagine this will be the first chapter to write, though the current experimental best-fit data will need to be revisited upon completion.\\ Outline of LArSoft - An overview of how LArSoft is used and how it works, as all simulations and analyses shown in the thesis are done using LArSoft. It is distinct from the theory, but is still an introductory part of the thesis and so can be written in the early stages. \\ DUNE - Upon the completion of the CD1-refresh, a clear statement of the design, capabilities and future plans for the experiment is made, which can be used to write this chapter. It is envisioned that this will provide an excellent base.
\\ The 35 ton prototype - The workings of the prototype are known, and papers and reference documents exist detailing the finer structure and results. These will be used to write this chapter, along with any future documents detailing the phase II run. \\ Service work for the 35 ton - Lab book notes are being recorded for the work performed, presentations are being made, and formal notes are being written. The service tasks performed are well integrated into the wider study of proton identification, and so these notes can likely be included with little change. Shifting will be recorded and observations noted for inclusion. Hardware work will also be recorded. \\ Camera system - Information on development and operation needs to be collated; this chapter will also include any results and papers which are written from running. \\ Proton identification - Monte Carlo studies are progressing and look promising; notes will continue to be recorded. Identification from data will be done upon completion of data taking in early 2016. \\ Cosmic background and proton decay - Notes have been kept and written detailing the incorporation of the muon generation methods. No work has been done on calculating cosmic backgrounds for neutrino oscillations and no proton decay studies have been performed thus far, but this work will be recorded as it is carried out. \section{List of conferences attended} \begin{itemize} \item Neutrino School for Theory-Experiment Collaboration (NuSTEC), Fermilab, Oct 2014 \subitem An interesting school devoted to neutrino cross section physics. \item DUNE collaboration meeting, Fermilab, Jan 2015 \item DUNE collaboration meeting, Fermilab, Apr 2015 \item International Neutrino Summer School (INSS), Sao Paulo, Aug 2015 \subitem An interesting school covering a wide range of topics in neutrino physics, from its theoretical basis to current and future experimental capabilities. \item DUNE collaboration meeting, Fermilab, Aug 2015 \end{itemize} \section{Doctoral Development Programme} Participation in the Research Ethics and Integrity module was continued through a study group conducted remotely, whereby examples highlighting good and poor ethics were found and discussed with other students. Continued development of theoretical knowledge was also achieved through attendance at conferences and schools. Computing knowledge was developed through further exposure, and a high level of competency has been gained with LArSoft through its daily use. \begin{thebibliography}{56} \bibitem{Church_LArSoft} Eric Church, \emph{LArSoft: A Software Package for Liquid Argon Time Projection Drift Chambers}, \emph{arXiv:1311.6774v2}. \bibitem{Higgs} CMS Collaboration, \emph{Measurement of the properties of a Higgs boson in the four-lepton final state}, CMS-HIG-13-002, CERN-PH-EP-2013-220, \emph{arXiv:1312.5353v1}. \bibitem{ModBox} S. Amoruso et al., \emph{Study of electron recombination in liquid argon with the ICARUS TPC}, Nucl. Instr. Meth. A, 523, 2004, 275. \bibitem{PIDA} R. Acciarri et al., \emph{A study of electron recombination using highly ionizing particles in the ArgoNeuT Liquid Argon TPC}, FERMILAB-PUB-13-184-E, \emph{arXiv:1306.1712}. \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7979575258, "avg_line_length": 95.7444444444, "ext": "tex", "hexsha": "9787cb5b02fffc3061755c50f1b54a6de5c3ce11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9ee832a23e81adba94cd564316aa52b403a25f77", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tkarlwarburton/KarlWarburton_Thesis", "max_forks_repo_path": "35tonSimulation/Extensive_2ndYr.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9ee832a23e81adba94cd564316aa52b403a25f77", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tkarlwarburton/KarlWarburton_Thesis", "max_issues_repo_path": "35tonSimulation/Extensive_2ndYr.tex", "max_line_length": 1496, "max_stars_count": null, "max_stars_repo_head_hexsha": "9ee832a23e81adba94cd564316aa52b403a25f77", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tkarlwarburton/KarlWarburton_Thesis", "max_stars_repo_path": "35tonSimulation/Extensive_2ndYr.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7560, "size": 34468 }
\documentclass[main.tex]{subfiles} \begin{document} \marginpar{Wednesday\\ 2020-11-18, \\ compiled \\ \today} Recall that % \begin{align} \Sigma = \num{5.2} \alpha^{-4/5} \dot{M}_{16}^{7/10} M^{1/4} R^{-3/4} \SI{}{g / cm^2} \,. \end{align} Is it true that \(M_d = M _{\text{disk}} \ll M\), as we assumed? We can calculate it as % \begin{align} M_d &= \int_{R _{\text{in}}}^{R _{\text{max}}} 2 \pi R \Sigma \dd{R} \\ &\approx \num{e-8} \alpha^{-4/5} \dot{M}_{16}^{7/10} M^{1/2} M_{\odot} \,, \end{align} % assuming that \(R _{\text{max}} \approx 10 R _{\text{in}}\). This is indeed many orders of magnitude below the mass of the stellar source. \subsubsection{Regions of the disk} The \(\alpha \) parameter comes from the very rough assumption \(\nu _{\text{turb}} = \alpha c_s H\); it is a weak point of this model. Asking that \(\alpha = \const\) is just plain wrong. Further, we assumed that \(P = P _{\text{gas}}\), and that the Rosseland mean opacity \(\kappa _R\) is only given by free-free absorption. We know that for Thomson scattering the corresponding opacity is \(\kappa _R^{s} = \sigma _T / m_p = \SI{.4}{cm^2 / g}\). When is this smaller than the free-free opacity? In dimensionless terms, the equation for \(\kappa = \tau / \Sigma \) reads % \begin{align} \kappa_{R}^{\text{ff}} &> \kappa_R^{s} \\ \num{6.3} \dot{M}_{16}^{-1/2} M^{1/4} R_{10}^{-3/4} f^2 &> \num{.4} \\ R_{10} &> \num{.5e-2} \dot{M}_{16}^{2/3} M^{1/3} f^{8/3} \\ R &> \num{.5e8} \dot{M}_{16}^{2/3} M^{1/3} f^{8/3} \SI{}{cm} \,. \end{align} For a white dwarf, this is always the case; we can tell in general that for \(R \lesssim \SI{e8}{cm}\) electron scattering dominates. The temperature decreases with radius as \(T \propto R^{-3/2}\), so for high enough radii it can drop below \SI{e4}{K}, at which point recombination can occur: at that point free-free absorption cannot occur anymore, and we must account for free-bound and bound-bound transitions. If a NS or a BH is accreting, on the other hand, we can have a scattering-dominated inner region. So, in terms of the \textbf{main type of matter-radiation interaction} we will have three regions: going outwards, there is domination of electron scattering, free-free absorption, and bound-free/bound-bound absorption. Does \(P _{\text{gas}}\) dominate over \(P _{\text{rad}}\)? Their ratio is indeed % \begin{align} \frac{P _{\text{rad}}}{P _{\text{gas}}} = \frac{ \frac{1}{3} a T_c^{4}}{\frac{k T_c \Sigma}{\mu m_p H}} \approx \num{3e-3} \alpha^{1/10} \dot{M}_{16}^{7/10} R_{10}^{-3/8} f^{7/5} \ll 1 \,. \end{align} As \(R\) decreases, \(P _{\text{rad}}\) becomes ever more relevant. Is there an equality radius? It will definitely be smaller than \SI{3e8}{cm}. Doing the calculation, we find % \begin{align} R _{\text{equality}} \approx \num{24} \alpha^{2/21} \dot{M}_{16}^{16/21} f^{9/21} \SI{}{km} \,. \end{align} This may sometimes be attained in the innermost region of the disk, right before the ISCO. So, in terms of the \textbf{nature of most of the pressure}, we may have a radiation-dominated region in the innermost part of the disk, but mostly there will be gas pressure domination. \paragraph{The shape of the disk} We calculate the shape of the disk, \(H(R)\), in the innermost radiation-dominated region.
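As a brief reminder for the derivation that follows (this simply restates the vertical-equilibrium relation quoted below; \(\Omega_K\) denotes the Keplerian angular velocity, a notation assumed here rather than taken from earlier sections):
\begin{align}
H \approx \frac{c_s}{\Omega_K} = c_s \qty( \frac{R^3}{GM})^{1/2}
\,.
\end{align}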
The sound speed under radiation domination is % \begin{align} c_s^2 = \frac{P}{\rho } = \frac{1}{3} \frac{a T_c^{4}}{\rho} = \frac{1}{3} \frac{4 \sigma }{c} \frac{T_c^{4}}{\rho } \,, \end{align} % and we know that % \begin{align} \frac{4}{3} \frac{\sigma T_c^{4}}{\tau } = \frac{3 GM \dot{M}}{8 \pi R^3} f \,, \end{align} % which means that % \begin{align} c_s^2 = \frac{3 GM \dot{M} \tau f}{8 \pi R^3 \rho c} \,, \end{align} % and using the fact that, for scattering opacity domination, % \begin{align} \tau = \kappa _R^{s} \Sigma = \frac{\sigma _T}{m_p} \rho H \,, \end{align} % we find % \begin{align} c_s^2 = \frac{3 GM \dot{M} \sigma _T \rho H f}{8 \pi R^3 \rho c m_p} = \frac{3 GM \dot{M} \sigma _T H f}{8 \pi R^3 c m_p} \,, \end{align} % but we also know that \(H = c_s R (R / GM)^{1/2}\); using this fact we can calculate the sound speed \(c_s = (H/R)(GM/R)^{1/2} \). Using this (see equation \eqref{eq:speed-of-sound-disk-shape}), we get % \begin{align} \frac{H^2}{R^2} \frac{GM}{R} &= \frac{3 GM \dot{M} \sigma _T H f}{8 \pi R^3 c m_p} \\ H &= \frac{3 \sigma _T \dot{M} }{8 \pi c m_p}f \,, \end{align} % which is nearly independent of \(R\): the only dependence is inside the factor \(f\), which depends on \(R\) quite weakly. The shape of the disk is slab-like in the inner region, and concave in the outer part but still quite flat. \paragraph{The Eddington limit} The Eddington luminosity for electron scattering is % \begin{align} L _{\text{Edd}} = \frac{4 \pi G M m_p c}{\sigma _T} \,, \end{align} % and the corresponding accretion rate is \(\dot{M} _{\text{Edd}} = L _{\text{Edd}} / c^2\). The critical accretion rate is the one which produces an Eddington luminosity, after accounting for efficiency: % \begin{align} \eta \dot{M} _{\text{crit}} c^2 = L _{\text{Edd}} \,, \end{align} % so \(\dot{M} _{\text{crit}}\) is larger than \(\dot{M} _{\text{Edd}}\). Using this, the height of the disk is given by % \begin{align} H &= \frac{3}{2} \dot{M} f \frac{\sigma _T}{4 \pi c m_p } = \frac{3}{2} \dot{M} f \frac{GM}{L _{\text{Edd}}} \\ &= \frac{3}{2} \underbrace{\frac{GM}{R _{\text{in}}c^2}}_{\eta } R _{\text{in}} \frac{\dot{M}}{\dot{M} _{\text{Edd}}} f \\ &= \frac{3}{2} \eta R _{\text{in}} \frac{\dot{M}}{\dot{M} _{\text{Edd}}} f \,, \end{align} % so % \begin{align} \frac{H}{R _{\text{in}}} = \frac{3}{2} \frac{\dot{M}}{\dot{M} _{\text{crit}}} f \,, \end{align} % and we can see that \(H < R _{\text{in}}\) iff \(\dot{M} < \dot{M} _{\text{crit}}\). Then, we see that \textbf{the accretion rate must be subcritical as long as we want to keep the disk thin}. \subsection{The multicolor blackbody} A final point about disks: each layer of the disk emits roughly as a blackbody, which means that the total spectrum is a superposition of several blackbodies; this is called a multicolor blackbody. The spectrum emitted by each annulus, as usual, is described by a Planck function: % \begin{align} I_\nu = \frac{2h}{c^2} \frac{\nu^3}{\exp(\frac{h \nu }{k_B T}) - 1} \,, \end{align} % as long as there is no reprocessing of the radiation from stuff around the disk. This is an interesting process, but for simplicity we will not discuss it. What we measure is the flux: % \begin{align} F_\nu = \int_{4 \pi } I_\nu \cos \theta \dd{\Omega } \,, \end{align} % where \(\dd{\Omega } = 2 \pi R \dd{R}/D^2\), \(D\) being the distance from us, and \(\iota \) below denoting the inclination of the disk with respect to the line of sight. The integral to compute is % \begin{align} F_\nu = \frac{2 \pi }{D^2} \cos \iota \int_{R _{\text{in}}}^{R _{\text{out}}} R \dd{R} \frac{\nu^3}{\exp(\frac{h \nu }{k_B T(R)}) - 1} \,.
\end{align} This integral can be computed numerically, but we can already gather its main characteristics. As we have seen earlier \eqref{eq:temperature-accretion-disk}, the temperature looks like % \begin{align} T(R) = T _{\text{in}} \qty( \frac{R _{\text{in}}}{R})^{3/4} \,. \end{align} Let us now estimate the integral in three limits, comparing \(h \nu \) to \(k_B T(R _{\text{in}})\) and \(k_B T (R _{\text{out}})\) respectively. The first interesting limit is the low-energy one: \(h \nu \ll k_B T (R _{\text{out}}) < k_B T(R)\) for any \(R\). Then, the flux is proportional to % \begin{align} F_\nu \propto \int R \dd{R} \frac{\nu^3}{h \nu / k_B T(R)} \propto \nu^2 \int T(R) R \dd{R} \propto \nu^2 \,. \end{align} The opposite, high-energy limit \(h \nu \gg k_B T(R _{\text{in}})\) yields an exponential cutoff, since the integral is dominated by the hottest annulus, the one with \(T = T _{\text{in}}\), whose exponential suppression \(e^{-h \nu / k_B T(R)}\) is the mildest: % \begin{align} F_\nu \propto \nu^3 \exp(- \frac{h \nu }{k_B T _{\text{in}}}) \,. \end{align} In the intermediate region, \(k_B T _{\text{out}} < h \nu < k_B T _{\text{in}}\). Defining \(x = h \nu / k_B T(R)\), we find % \begin{align} F_\nu \propto \int \frac{\nu^3 }{e^{x} - 1} R \dd{R} \,, \end{align} % but \(x \propto \nu / T \propto \nu R^{3/4}\), therefore \(R \propto x^{4/3} \nu^{-4/3}\), which means that (since \(\nu \) is constant in the context of the integral) \(\dd{R} \propto x^{1/3} \nu^{-4/3} \dd{x}\), so % \begin{align} F_\nu \propto \int \frac{\nu^3 \nu^{-8/3}}{e^{x}-1} x^{5/3} \dd{x} \,, \end{align} % which means that % \begin{align} F_\nu \propto \nu^{1/3} \int \frac{x^{5/3}}{e^{x}-1} \dd{x} \,. \end{align} The remaining integral, evaluated from 0 to \(\infty \), is just a number of order unity. This \(F_\nu \propto \nu^{1/3}\) behaviour in the intermediate region is a characteristic signature of accretion disks. \end{document}
{ "alphanum_fraction": 0.6388220465, "avg_line_length": 35.7663934426, "ext": "tex", "hexsha": "bd6608f2174865a179caa20e1a11ea5c75168dd9", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jacopok/notes", "max_forks_repo_path": "ap_third_semester/compact_objects/nov18.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jacopok/notes", "max_issues_repo_path": "ap_third_semester/compact_objects/nov18.tex", "max_line_length": 281, "max_stars_count": 6, "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jacopok/notes", "max_stars_repo_path": "ap_third_semester/compact_objects/nov18.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "num_tokens": 3217, "size": 8727 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % "ModernCV" CV and Cover Letter % LaTeX Template % Version 1.11 (19/6/14) % % This template has been downloaded from: % http://www.LaTeXTemplates.com % % Original author: % Xavier Danaux ([email protected]) % % License: % CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) % % Important note: % This template requires the moderncv.cls and .sty files to be in the same % directory as this .tex file. These files provide the resume style and themes % used for structuring the document. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[10pt,letterpaper]{moderncv} % Font sizes: 10, 11, or 12; paper sizes: a4paper, letterpaper, a5paper, legalpaper, executivepaper or landscape; font families: sans or roman \moderncvstyle{classic} % CV theme - options include: 'casual' (default), 'classic', 'oldstyle' and 'banking' \moderncvcolor{orange} % CV color - options include: 'blue' (default), 'orange', 'green', 'red', 'purple', 'grey' and 'black' \usepackage[scale=0.8]{geometry} % Reduce document margins %\setlength{\hintscolumnwidth}{3cm} % Uncomment to change the width of the dates column \setlength{\makecvtitlenamewidth}{40cm} % For the 'classic' style, uncomment to adjust the width of the space allocated to your name \usepackage{hyperref} \hypersetup{ colorlinks = true, linkcolor = red } %---------------------------------------------------------------------------------------- % NAME AND CONTACT INFORMATION SECTION %---------------------------------------------------------------------------------------- \firstname{Maria [Masha]} % Your first name \familyname{Okounkova} % Your last name %\photo[4in][0.4pt]{me_lecture} % The first bracket is the picture height, the second is the thickness of the frame around the picture (0pt for no frame) % All information in this block is optional, comment out any lines you don't need %\title{Curriculum Vitae} \address{Flatiron Institute, 162 5th Ave}{New York, NY, 10010} \email{[email protected]} \homepage{https://mariaokounkova.github.io/} %---------------------------------------------------------------------------------------- \begin{document} \makecvtitle % Print the CV title I am a Flatiron Research Fellow at the Center for Computational Astrophysics at Simons Foundation Flatiron Institute in New York City. My research is in numerical relativity, and I am primarily interested in using numerical relativity to test general relativity through gravitational wave observations. I am a member of the \href{https://www.black-holes.org/}{Simulating Extreme Spacetimes (SXS)} collaboration and the \href{https://www.ligo.org/}{LIGO Scientific Collaboration (LSC)}. 
\section{Scientific Interests} % \cvitem{}{Numerical relativity, binary black holes, gravitational waves, theories of gravity beyond general relativity, testing general relativity with gravitational wave observations, black hole quasi-normal modes, binary black hole spacetime non-linearities, black hole shadows, code development for numerical relativity} %---------------------------------------------------------------------------------------- % POSITIONS %---------------------------------------------------------------------------------------- \section{Academic positions} \cventry{Aug 2019 - present}{Flatiron Institute Center for Computational Astrophysics (CCA)}{}{\textit{Flatiron Research Fellow}}{Member of Gravitational Waves and Compact Objects groups}{}{} %---------------------------------------------------------------------------------------- % EDUCATION %---------------------------------------------------------------------------------------- \section{Education} \cventry{2014 - 2019}{California Institute of Technology (Caltech)}{}{}{PhD in physics}{advised by Saul Teukolsky}{} \cventry{2010 - 2014}{Princeton University}{}{}{B.A. in physics, certificate in applications of computing}{\textit{magna cum laude}}{} %---------------------------------------------------------------------------------------- % Publications %---------------------------------------------------------------------------------------- \section{Selected Publications} \cvitem{[11]}{\textbf{Maria Okounkova}, Will Farr, Maximilliano Isi, Leo C. Stein. \textit{Constraining gravitational wave amplitude birefringence and Chern-Simons gravity with GWTC-2}. \href{https://arxiv.org/abs/2101.11153}{arXiv:2101.11153} Submitted to Phys. Rev. D., Jan 2021} \cvitem{[10]}{\textbf{Maria Okounkova}. \textit{Revisiting non-linearity in binary black hole mergers}. \href{https://arxiv.org/abs/2004.00671}{arXiv:2004.00671} Submitted to Phys. Rev. D., Apr 2020 } \cvitem{[9]}{\textbf{Maria Okounkova}. \textit{Numerical relativity simulation of GW150914 in Einstein dilaton Gauss-Bonnet gravity}. \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.084046}{Phys. Rev. D 102:084046}, Oct 2020} \cvitem{[8]}{\textbf{Maria Okounkova}, Leo C. Stein, Jordan Moxon, Mark A. Scheel, and Saul A. Teukolsky. \textit{Numerical relativity simulation of GW150914 beyond general relativity}. \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.101.104016}{Phys. Rev. D 101:104016}, May 2020} \cvitem{[7]}{\textbf{Maria Okounkova}. \textit{Stability of rotating black holes in Einstein dilaton Gauss-Bonnet gravity}. \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.100.124054}{Phys. Rev. D 100:124054}, Dec 2019} \cvitem{[6]}{\textbf{Maria Okounkova}, Leo C. Stein, Mark A. Scheel, and Saul A. Teukolsky. \textit{Numerical binary black hole collisions in dynamical Chern-Simons gravity}. \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.100.104026}{Phys. Rev. D 100:104026}, Nov 2019} \cvitem{[5]}{Michael Boyle et al. (inc \textbf{Maria Okounkova}), \textit{The SXS Collaboration catalog of binary black hole simulations} \href{https://iopscience.iop.org/article/10.1088/1361-6382/ab34e2}{Class. Quant. Grav.}, April 2019} \cvitem{[4]}{\textbf{Maria Okounkova}, Mark A. Scheel, and Saul A. Teukolsky. \textit{Evolving Metric Perturbations in dynamical Chern-Simons Gravity}. \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.99.044019}{Phys. Rev. D 99:044019}, Feb 2019} \cvitem{[3]}{\textbf{Maria Okounkova}, Mark A. 
Scheel, and Saul A. Teukolsky. \textit{Numerical black hole initial data and shadows in dynamical Chern-Simons gravity}. \href{http://iopscience.iop.org/article/10.1088/1361-6382/aafcdf}{Class. Quant. Grav.}, Feb 2019} \cvitem{[2]}{Swetha Bhagwat, \textbf{Maria Okounkova}, Stefan W. Ballmer, Duncan A. Brown, Matthew Giesler, Mark A. Scheel, and Saul A. Teukolsky. \textit{On choosing the start time of binary black hole ringdowns}. \href{https://link.aps.org/doi/10.1103/PhysRevD.97.104065}{Phys. Rev. D 97:104065}, May 2018.} \cvitem{[1]}{\textbf{Maria Okounkova}, Leo C. Stein, Mark A. Scheel, and Daniel A. Hemberger. \textit{Numerical binary black hole mergers in dynamical Chern-Simons gravity: Scalar field}. \href{https://link.aps.org/doi/10.1103/PhysRevD.96.044020}{Phys. Rev. D 96:044020}, Aug 2017.} \section{Upcoming Publications} \cvitem{[2]}{\textbf{Maria Okounkova}, Francois Hebert, Katerina Chatziioannou, Jordan Moxon, Leo Stein, Saul Teukolsky \textit{Connecting the strong field dynamics of binary black hole mergers to gravitational waveforms at infinity using ray-tracing}. In prep, to be submitted to Phys. Rev. D.} \cvitem{[1]}{\textbf{Maria Okounkova}, Maximilliano Isi, Katerina Chatziioannou, Will Farr. \textit{Searching for binary black hole mergers beyond general relativity}. In prep, to be submitted to Phys. Rev. D.} %---------------------------------------------------------------------------------------- % Invited Talks %---------------------------------------------------------------------------------------- \section{Invited Talks and Invited Workshops} \cventry{Jul 2021}{Sapienza University of Rome}{}{Gravity Theory Seminar}{}{}{} \cventry{Apr 2021}{Universitat de les Illes Balears}{}{Seminar}{}{}{} \cventry{Feb 2021}{Caltech}{}{Tapir Seminar}{}{}{} \cventry{Dec 2020}{SISSA Trieste}{}{Gravity Seminar}{}{}{} \cventry{Dec 2020}{TCNJ}{}{Physics Colloquium}{}{}{} \cventry{Nov 2020}{Columbia University}{}{Theory Group Seminar}{}{}{} \cventry{Oct 2020}{ICERM (Institute for Computational and Experimental Research in Mathematics), Brown University}{}{Mathematical and Computational Approaches for Solving the Source- Free Einstein Field Equations Workshop}{}{}{} \cventry{Sep 2020}{ICERM (Institute for Computational and Experimental Research in Mathematics), Brown University}{}{Advances and Challenges in Computational Relativity Workshop}{}{}{} \cventry{Aug - Sep 2020}{KITP (Kavli Institute of Theoretical Physics), UC Santa Barbara }{}{Probing Effective Theories of Gravity in Strong Fields and Cosmology Workshop}{}{}{} \cventry{Aug 2020}{University of Mississippi}{}{Special seminar}{}{}{} \cventry{June 2020}{Canadian Institute for Theoretical Astrophysics}{}{CITA seminar}{}{}{} \cventry{July 2020}{Centro de Ciencias de Benasque}{}{New frontiers in Strong Gravity workshop}{\textit{Cancelled due to Covid-19 pandemic}}{}{} \cventry{June 2020}{University of Rome}{}{Strong Gravity Beyond workshop}{\textit{Cancelled due to Covid-19 pandemic}}{}{} \cventry{Dec 2019}{NYU}{}{}{Guest lecture in general relativity course}{} \cventry{Nov 2019}{University of Amsterdam}{}{Gravitational Wave Probes of Fundamental Physics workshop}{}{}{} \cventry{Oct 2019}{NYU Center for Cosmology and Particle Physics}{}{Astro seminar}{}{}{} \cventry{Dec 2018}{Cornell University}{}{Gravity Lunch Seminar}{}{} \cventry{Nov 2018}{UT Austin}{}{Invited Seminar}{}{} \cventry{Nov 2018}{Princeton University}{}{Princeton Gravity Initiative Lunch Seminar}{}{} \cventry{Sep 2018}{Perimeter Institute}{}{Strong Gravity Seminar}{}{} 
\cventry{Aug 2018}{Cal State Fullerton}{}{GWPAC High Performance Computing Workshop}{}{} \cventry{July 2018}{Simons Summer Workshop}{}{Forefronts in Cosmology and Numerical General Relativity}{}{} \cventry{June 2018}{Centro de Ciencias de Benasque}{}{Numerical Relativity beyond General Relativity workshop}{}{} \cventry{April 2018}{Caltech}{}{Theoretical astrophysics seminar}{}{} \cventry{Jan 2018}{Keck Institute for Space Sciences}{}{The Architecture of LISA Science Analysis}{}{} \cventry{Dec 2017}{Caltech}{}{LIGO seminar}{}{} %---------------------------------------------------------------------------------------- % Honors and awards %---------------------------------------------------------------------------------------- \section{Honors} \cventry{June 2019}{Kip Thorne Prize}{for Excellence in Theoretical Physics}{Caltech}{}{} \cventry{June 2018}{John Stager Stemple Memorial Prize}{for best performance on oral candidacy exam and research progress}{Caltech}{}{}{} \cventry{Mar 2018}{American Physical Society DGRAV prize}{for best student talk at PCGM34}{Caltech}{}{}{} \cventry{Oct 2017}{Oculus Prize}{Maestro team}{Hack Music LA}{}{}{} \cventry{Oct 2017}{Amazon Prize}{Maestro team}{Hack Music LA}{}{}{} \cventry{2014-2016}{Dominic Orr Graduate Fellowship}{full funding for first two years of research}{Caltech}{}{}{} \cventry{July 2016}{Hartle Award}{for best talk in numerical relativity session}{GR21 conference}{}{}{} \cventry{Nov 2015}{Theoretical Astrophysics in Southern California prize}{for best student talk}{Cal State Fullerton}{}{}{} \cventry{June 2014}{Kusaka Memorial Prize in Physics}{for top graduating seniors in physics}{Princeton University}{}{}{} \cventry{June 2013}{Allen G. Shenstone Prize in Physics}{for top juniors in physics}{Princeton University}{}{}{} %---------------------------------------------------------------------------------------- % Leadership and service %---------------------------------------------------------------------------------------- \section{Service and Leadership} \cventry{2019 - present}{Executive committee member}{Simulating eXtreme Spacetimes collaboration}{}{}{} \cventry{2019 - present}{Student-Postdoc Advocate}{Simulating eXtreme Spacetimes collaboration}{}{}{} \cventry{2019 - present}{Journal Referee}{APS Physical Review D, APS Physical Review Letters, Classical and Quantum Gravity}{}{}{} \cventry{2017-2019}{Organizing committee member}{Caltech/JPL Association for Gravitational-Wave Research}{}{}{} \cventry{2018}{Conference organizer}{Pacific Coast Gravity Meeting (PCGM) 34}{Caltech}{}{} \cventry{2016-2017}{Graduate student organizer}{Theoretical astrophysics including relativity group}{Caltech}{}{} \cventry{2015-2016}{Numerical relativity group discussion leader}{}{Caltech}{}{} %---------------------------------------------------------------------------------------- % Teaching %---------------------------------------------------------------------------------------- \section{Teaching and mentorship} \cventry{Summer 2021 - present}{\href{https://www.simonsfoundation.org/2020/11/30/simons-nsbp-scholars-program-2021/}{Simons-NSBP [National Society of Black Physicists] mentor}}{}{Mentoring UC Berkeley undergraduate student Lawrence Edmond in general relativity projects}{CCA}{} \cventry{Summer 2020 - present}{\href{https://cunyastro.org/astrocom/}{AstroCom NYC mentor}}{}{Mentoring CUNY undergraduate students Destiny Howell and William Chakalis in projects in black hole and gravitational wave astrophysics. 
(Joint with Tom Callister)}{CCA / CUNY}{} \cventry{2016-2017}{Teaching Assistant}{}{computational physics sequence (Ph20: Introduction to the Tools of Scientific Computing, Ph21: Tools for Data Analysis, Ph 22: Tools for Numerical Methods)}{Caltech}{} \cventry{Summer 2016}{Caltech SURF mentor}{}{}{Caltech}{} \cventry{2012-2014}{Laboratory Teaching Assistant}{}{computer science sequence (COS 126: Introduction to Computer Science, COS 217: Introduction to Programming Systems, COS 226: Algorithms and Data Structure)}{Princeton University}{} \medskip \cventry{}{I also maintain a \href{https://docs.google.com/document/d/1h4IOkTaq6E2bejXP07PSwa4KIIUjDjq-mtQlPtnAVOU/edit?usp=sharing}{guide} for undergraduates applying to physics graduate school}{}{}{}{} %---------------------------------------------------------------------------------------- % Outreach %---------------------------------------------------------------------------------------- \section{Outreach} \cvitem{}{I regularly participate in community science nights at local schools, guest lectures in high school and college courses, and astronomy outreach events including Astronomy on Tap. For an example of my public outreach talks to a general audience, please see a \href{https://youtu.be/d0nHtoh6Mzk?t=224}{lecture on computational physics} I gave at Caltech. For an example of my outreach talks to K-12 students, please see one of the \href{https://www.dropbox.com/s/rdnketnevxjgmy9/2020-08-26\%20AaS\%20Okounkova.mov?dl=0}{Ask-a-Scientist} discussions I led at the Flatiron Institute. } %---------------------------------------------------------------------------------------- % References %---------------------------------------------------------------------------------------- \section{References} \setlength{\tabcolsep}{12pt} \cvitem{}{\noindent \begin{tabular}{l l} \href{http://astro.cornell.edu/members/saul-a-teukolsky.html}{\textbf{Prof. Saul Teukolsky}} & \href{http://pma.caltech.edu/content/mark-scheel}{\textbf{Research Prof. Mark Scheel}} \\ TAPIR, SXS Collaboration & TAPIR, SXS Collaboration \\ Caltech / Cornell & Caltech \\ \small{\href{mailto:[email protected]}{[email protected]}} & \small{\href{mailto:[email protected]}{[email protected]}} \\ & \\ \href{https://duetosymmetry.com/}{\textbf{Asst. Professor Leo Stein}} & \href{https://www.simonsfoundation.org/team/will-farr/}{\textbf{Prof. Will Farr}} \\ University of Mississippi & Flatiron CCA / Stony Brook University \\ \small{\href{mailto:[email protected]}{[email protected]}} & \small{\href{[email protected]}{[email protected]}} \\ \end{tabular}} %---------------------------------------------------------------------------------------- \end{document}
{ "alphanum_fraction": 0.6525639471, "avg_line_length": 54.9933333333, "ext": "tex", "hexsha": "c70db1f1aea8c9091173b5ddc98b64666cdb815d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d85089f2aba69d196913ac505c46ec31dadba6ec", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "mariaokounkova/mariaokounkova.github.io", "max_forks_repo_path": "template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d85089f2aba69d196913ac505c46ec31dadba6ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "mariaokounkova/mariaokounkova.github.io", "max_issues_repo_path": "template.tex", "max_line_length": 591, "max_stars_count": null, "max_stars_repo_head_hexsha": "d85089f2aba69d196913ac505c46ec31dadba6ec", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "mariaokounkova/mariaokounkova.github.io", "max_stars_repo_path": "template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4397, "size": 16498 }
\documentclass[a4paper,11pt]{article} \usepackage[english]{babel} \usepackage{xcolor} \usepackage{tabularx} \usepackage{lastpage} \usepackage{fancyhdr} \fancypagestyle{summaryen}{% \renewcommand{\headrulewidth}{0pt} \renewcommand{\headheight}{24pt} \fancyhead[C]{\small\color{gray} Dissertation summary \\ Krol, L.~R. (2020) \textit{Neuroadaptive Technology: Concepts, Tools, and Validations.}} \fancyfoot[C]{\small\color{gray} Page~\thepage~of~\pageref{LastPage}} } \pagestyle{summaryen} \begin{document} \section*{Summary} This dissertation presents conceptual, methodological, and experimental advances in the field of neuroadaptive technology. Neuroadaptive technology refers to the category of technology that uses implicit input obtained from brain activity using a passive brain-computer interface in order to adapt itself, e.g. to enable implicit control or implicit interaction. Implicit input refers to any input obtained by a receiver that was not intended as such by the sender. Neuroadaptive technology thus detects naturally-occurring brain activity that was not intended for communication or control, and uses it to enable novel human-computer interaction paradigms. Part~I provides conceptual frameworks to unify previous works and guide future research. Chapter~1 reviews existing applications of passive brain-computer interfacing and suggests their level of interactivity to be a key parameter, ranging from mental state assessment, through open- and closed-loop adaptation, to forms of automated or intelligent adaptation. Systems in this latter category necessarily possess some autonomy to guide the interaction according to their own goals. Chapter~2 explains how this autonomy can be used for cognitive probing: a method in which the technology deliberately elicits a brain response from the user in order to learn from it. This allows neuroadaptive technology to exploit the fact that human brains automatically respond to the events they perceive. The gathered information can be used to further optimise the interaction, but can also be used in adverse ways. Chapter~2 therefore discusses a number of technological and ethical issues surrounding this method. Part~II introduces two tools to help validate some core methods related to neuroadaptive technology. Chapter~3 describes SEREEGA (Simulating Event-Related EEG Activity), a free and open source toolbox to simulate event-related electroencephalographic (EEG) activity. Because simulated data has a known ground truth, it can be used to evaluate and validate analysis methods. SEREEGA covers and extends the vast majority of past and present-day EEG simulation approaches. Chapter~4 then uses such simulated data to validate a classifier visualisation method. This method allows a number of commonly-used classification algorithms to be visualised in a virtual brain, revealing which (cortical) areas the classifier focused on. This provides important insight to validate the classifier itself and neuroadaptive technology more broadly. It also provides a new classifier-based analysis method for neuroscientific research in general. Part~III presents two experimental studies illustrating the technology described in Part~I, using the methods from Part~II. Chapter~5 demonstrates how neuroadaptive technology can be used to enable implicit cursor control using cognitive probing. 
By repeatedly eliciting brain responses to initially random cursor movements, and classifying these responses as reflecting either positive or negative interpretations of each movement, a computer can gradually reinforce the cursor to move in the direction desired by the observer. Importantly, the observer need not be aware of this happening. Chapter~6 presents additional analyses of this paradigm, revealing that brain activity elicited by the cursor movements can indeed reflect internal, subjective interpretations. These experiments highlight both the potential benefits and the potential risks addressed in Part~I. \vfill \begin{tabularx}{\textwidth}{XXX} & & \hrule Klaus Gramann \\ \end{tabularx} \vspace{5cm} \end{document}
{ "alphanum_fraction": 0.820494186, "avg_line_length": 108.6315789474, "ext": "tex", "hexsha": "d73c3ae28e50947cb5f50bc02d7e272052e9d8f6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "548167344fada64384f95d23be67a48ee08f7449", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "lrkrol/dissertation", "max_forks_repo_path": "administrative/summary_en.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "548167344fada64384f95d23be67a48ee08f7449", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "lrkrol/dissertation", "max_issues_repo_path": "administrative/summary_en.tex", "max_line_length": 1003, "max_stars_count": null, "max_stars_repo_head_hexsha": "548167344fada64384f95d23be67a48ee08f7449", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "lrkrol/dissertation", "max_stars_repo_path": "administrative/summary_en.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 834, "size": 4128 }
\def\macrosUseBeamer{} \input{arthur} \input{macros} \usepackage{fancyvrb} \usepackage{graphicx} \usepackage{multicol} \newcommand\tab{$\hphantom{--}$} \usepackage{ebproof} \usepackage{tikz-cd} \begin{document} % No 'Figure' in captions \setbeamertemplate{caption}{\raggedright\insertcaption\par} %****************************************************************************** %****************************************************************************** %****************************************************************************** \title{Verification of Data Layout Transformations} \author[Ramon Fern\'{a}ndez Mir]{{\bf Ramon Fern\'{a}ndez Mir}\\ \vspace{1em} with Arthur Charguéraud } \institute[]{Inria} \date{24/09/2018} \frame{\titlepage} %****************************************************************************** %\framecontentdocument %****************************************************************************** %****************************************************************************** %****************************************************************************** %\section{Separation Logic: a first example} %\framecontentsection %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Software verification} Why do we care? Take for example GCC. \begin{itemize} \item Between 1999 and 2015, over 39.000 bugs were reported. \item Approximately 60\% of the files have some sort of bug. \item The life span of a bug is $\sim$200 days. \item The most buggy file (as of 2015) had 817 different bugs. \end{itemize} \bigskip \pause \textbf{Solution:} The CompCert verified compiler. \begin{figure}[H] \centering \includegraphics[width=7cm]{images/compcert} \end{figure} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Software verification - principles} \begin{minipage}{0.75\linewidth} Coq provides a formal language to write mathematical definitions and an environment to write machine-checked proofs. \end{minipage}% \begin{minipage}{0.2\linewidth} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/coq} \end{figure} \end{minipage} \bigskip Key ideas: \begin{itemize} \item Language semantics can be expressed with mathematical rules. \item Language properties can be written as theorems. \item We can prove them! \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example} \begin{figure}[H] \centering \begin{minipage}{0.275\linewidth} \centering \includegraphics[width=\textwidth, height=3.25cm]{images/ITER_tokamak} \caption{\footnotesize ITER tokamak} \label{fig:figure1} \end{minipage}% \hspace{0.5cm} \begin{minipage}{0.275\linewidth} \centering \includegraphics[width=\textwidth, height=3.25cm]{images/plasma_physics} \caption{\footnotesize Plasma physics} \label{fig:figure2} \end{minipage}% \hspace{0.5cm} \begin{minipage}{0.275\linewidth} \centering \includegraphics[width=\textwidth, height=3.25cm]{images/PIC_simulation} \caption{\footnotesize PIC simulation} \label{fig:figure2} \end{minipage} \end{figure} \bigskip Challenges: \begin{itemize} \item Exploit data-level parallelism. \item Use domain-specific knowledge of the code. \item Do it without introducing any bugs. 
\end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example - initial code} \begin{lstlisting}[style=Cstyle] typedef struct { // Position float x, y, z; // Other fields float vx, vy, vz, c, m, v; } particle; particle data[N]; for (int i = 0; i < N; i++) { // Some calculation involving data[i] } \end{lstlisting} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example - peeling} %Further suppose that the intial `particle' record is not used as part of a dynamic data structure. %Typically, cold fields are stored in a different array. Suppose that the calculation uses mainly the position. \begin{lstlisting}[style=Cstyle] typedef struct { float vx, vy, vz, c, m, v; } cold_fields; typedef struct { float x, y, z; } hot_fields; cold_fields other_data[N]; hot_fields pos_data[N]; \end{lstlisting} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example - AoS to SoA} Now, say that we want to take advantage of vector instructions. \bigskip \begin{lstlisting}[style=Cstyle] typedef struct { float x[N]; float y[N]; float z[N]; } hot_fields; hot_fields pos_data; \end{lstlisting} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example - AoS to AoSoA} But without reducing too much the locality between accesses to fields of the original struct. \bigskip \begin{lstlisting}[style=Cstyle] typedef struct { float x[B]; float y[B]; float z[B]; } hot_fields; hot_fields pos_data[ceil(N/B)]; \end{lstlisting} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Motivating example - summary} In short, the transformations we have seen are: \begin{itemize} \item Peeling. \item AoS to SoA. \item AoS to AoSoA. \end{itemize} \bigskip \pause E.g., when applying all these transformations, an access of the form: \begin{lstlisting}[style=Cstyle] data[i].x \end{lstlisting} becomes: \begin{lstlisting}[style=Cstyle] pos_data[i/B].x[i%B] \end{lstlisting} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Project goals} \begin{itemize} \setlength\itemsep{1.5em} \item Find the basic transformations that combined give rise to the ones we are interested in.\\ \pause \item Formalize a C-like language with arrays, structs and pointers. \begin{itemize} \item Equipped with a high-level semantics, to simplify the proofs. \item Equipped with a low-level semantics, to be closer to C. \end{itemize} \pause \item Define the transformations and prove their correctness. \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Basic transformations - grouping} \begin{center} \begin{minipage}{0.3\linewidth} \textbf{\small 1. 
Field grouping} \begin{lstlisting}[style=Cstyle, basicstyle=\scriptsize] // Before typedef struct { int a, b, c; } s; // After typedef struct { int b, c; } sg; typedef struct { int a; sg fg; } s'; \end{lstlisting} \end{minipage}% \begin{minipage}{0.5\linewidth} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{images/grouping} \end{figure} \end{minipage} \end{center} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Basic transformations - tiling} \begin{center} \begin{minipage}{0.3\linewidth} \textbf{\small 2. Array tiling} \begin{lstlisting}[style=Cstyle, basicstyle=\scriptsize] // Before typedef int a[N]; // After typedef int a'[N/B][B]; \end{lstlisting} \end{minipage}% \begin{minipage}{0.5\linewidth} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{images/tiling} \end{figure} \end{minipage} \end{center} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Basic transformations - AoS to SoA} \begin{center} \begin{minipage}{0.3\linewidth} \textbf{\small 3. AoS to SoA} \begin{lstlisting}[style=Cstyle, basicstyle=\scriptsize] // Before typedef struct { int a, b; } s; // After typedef struct { int a[N]; int b[N]; } s'; \end{lstlisting} \end{minipage}% \begin{minipage}{0.5\linewidth} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{images/soa} \end{figure} \end{minipage} \end{center} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Basic transformations - justification} \begin{itemize} \setlength\itemsep{1.5em} \item \textbf{Peeling:} Field grouping twice. %\pause \item \textbf{AoS to SoA:} AoS to SoA. %\pause \item \textbf{AoS to AoSoA:} Array tiling and then AoS to SoA on the tiles. \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Language overview} The language includes: \begin{itemize} \item Pointers, structs and arrays. \item All the necessary memory operations: \end{itemize} \begin{Verbatim}[fontsize=\scriptsize] get ptr => *ptr array_access ptr i => ptr + i set ptr v => *ptr = v struct_access ptr f => &(ptr->f) new T => malloc(sizeof(T)) struct_get s f => s.f \end{Verbatim} \bigskip In the big picture: \begin{figure}[H] \centering \includegraphics[width=7cm]{images/compcert_our_language} \end{figure} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Language overview - rules} For example, the semantics of get is: \begin{center} \begin{prooftree} \Hypo{\langle C, \: S, \: m_1, \: t \rangle \: \Downarrow \: \langle m_2, \: (l, \pi) \rangle} \Hypo{m_2 [ l ]..\pi \: = \: v_r} \Hypo{v_r \neq \varnothing} \Infer3{\langle C, \: S, \: m_1, \: get_T \: t \rangle \: \Downarrow \: \langle m_2, \: v_r \rangle} \end{prooftree} \end{center} In Coq, this looks like: \begin{coqs} Inductive red (C:typdefctx) : stack -> state -> trm -> state -> val -> Prop := | red_get : forall l p S T v1 m1 m2 vr, red C S m1 t m2 (val_abstract_ptr l p) -> read_state m2 l p vr -> ~ is_uninitialized vr -> red C S m1 (trm_app (prim_get T) (t::nil)) m2 vr. 
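(* Each constructor mirrors one of the rules shown above: every hypothesis of a rule becomes a
   premise of the corresponding constructor, and the rule's conclusion becomes the constructor's
   result. *)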
\end{coqs} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Field grouping - rules} Similarly, we define rules for our transformation: \[ \pi := \varnothing \: | \: [i]::\pi \: | \: .f::\pi \] \begin{align*} \llbracket \varnothing \rrbracket &= \varnothing & \\ \llbracket [i]::\pi \rrbracket &= [i]:: \llbracket \pi \rrbracket & \\ \llbracket \: .f::\pi \rrbracket &= .f:: \llbracket \pi \rrbracket & \text{ when } f \notin Fs \\ \llbracket \: .f::\pi \rrbracket &= .f_g::.f:: \llbracket \pi \rrbracket & \text{ when } f \in Fs \\ \end{align*} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Field grouping - Coq} %\begin{prooftree} % \Hypo{\pi \: \Rightarrow_{tr} \: \pi'} % \Hypo{f \in f_s} % \Infer2{[.f] ++ \pi \: \Rightarrow_{tr} \: [.fg\: , \: .f] ++ \pi'} %\end{prooftree} \bigskip In Coq, this looks like: \begin{coqs} Inductive tr_accesses (gt:group_tr) : accesses -> accesses -> Prop := | tr_accesses_nil : tr_accesses gt nil nil | tr_accesses_array : forall p p' T i, tr_accesses gt p p' -> tr_accesses gt (access_array T i::p) (access_array T i::p') | tr_accesses_field_other : forall T Tt Fs Tg fg p p' f, gt = make_group_tr Tt Fs Tg fg -> tr_accesses gt p p' -> T <> Tt \/ f \notin Fs -> tr_accesses gt (access_field T f::p) (access_field T f::p'). | tr_accesses_field_group : forall Tt Fs Tg fg p p' f, gt = make_group_tr Tt Fs Tg fg -> tr_accesses gt p p' -> f \in Fs -> tr_accesses gt (access_field Tt f::p) (access_field Tt fg::access_field Tg f::p') \end{coqs} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Field grouping - simulation} With a similar pattern we define: \\[0.5em] \begin{minipage}{0.45\linewidth} \begin{itemize} \item \texttt{tr\_typdefctx}, \item \texttt{tr\_state}, \item \texttt{tr\_stack}, \end{itemize} \end{minipage}% \begin{minipage}{0.45\linewidth} \begin{itemize} \item \texttt{tr\_val} and \item \texttt{tr\_trm}. \end{itemize} \end{minipage} \bigskip The property that we require from the transformation is: \begin{equation*} \begin{tikzcd}[row sep=huge, column sep=huge] t \arrow[r,"tr"] \arrow[d, Rightarrow] & \llbracket t \rrbracket \arrow[d, Rightarrow, dashed] \\ v \arrow[r,"tr",dashed] & \llbracket v \rrbracket \end{tikzcd} \end{equation*} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Field grouping - theorem} In the end the theorem that we prove for full executions is: \begin{coq} Theorem red_tr: forall gt C C' t t' v m, red C empty_stack empty_state t m v -> ~ is_error v -> group_tr_ok gt C -> tr_typdefctx gt C C' -> tr_trm gt t t' -> wf_typdefctx C -> wf_trm C t -> exists v' m', tr_val gt v v' /\ tr_state gt m m' /\ red C' empty_stack empty_state t' m' v'. \end{coq} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Field grouping - induction} To make the proof work we strengthen it as follows: \begin{coq} Theorem red_tr_ind: forall gt C C' t t' v S S' m1 m1' m2, red C S m1 t m2 v -> ~ is_error v -> group_tr_ok gt C -> tr_typdefctx gt C C' -> tr_trm gt t t' -> tr_stack gt S S' -> tr_state gt m1 m1' -> wf_typdefctx C -> wf_trm C t -> wf_stack C S -> wf_state C m1 -> exists v' m2', tr_val gt v v' /\ tr_state gt m2 m2' /\ red C' S' m1' t' m2' v'. 
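(* Note: compared with red_tr above, the stack and the initial state are no
   longer fixed to empty_stack and empty_state; they are arbitrary, related
   by tr_stack and tr_state and assumed well-formed, so the statement is
   preserved by the induction on the reduction derivation. *)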
\end{coq} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{} \Ce{\Large Demo} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Array tiling and AoS to SoA} \textbf{Array tiling} \begin{itemize} \item Takes as arguments: \begin{itemize} \item The name of the array being changed (\texttt{Ta}). \item The name of the tiles (\texttt{Tt}). \item The size of the tiles (\texttt{K}). \end{itemize} \item All the instances of \texttt{t[i]} where \texttt{t} has type \texttt{Ta} become \texttt{t[i/K][i\%K]}. \end{itemize} \bigskip \textbf{AoS to SoA} \begin{itemize} \item Takes as arguments: \begin{itemize} \item The name of the array being changed (\texttt{Ta}). \item The fields names and types of the struct being changed (\texttt{Tfs}). \item The size of the array (\texttt{K}). \end{itemize} \item All the instances of \texttt{t[i].f} where \texttt{t} has type \texttt{Ta} become \texttt{t.f[i]}. \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{High-level transformations - summary} So far we have presented: \vspace{1em} \begin{itemize} \setlength\itemsep{1.5em} \item Field grouping. \item Array tiling. \item AoS to SoA. \end{itemize} \bigskip \pause The correctness of these is proved! \\ %\pause (up to a couple axioms, e.g., results on the modulo operation) \bigskip \pause \textbf{Problem}: This might all be just a hack if we don't link it with a more concrete, CompCert-style semantics... \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{High-level to low-level transformation} The grammar is extended with: \vspace{1em} \begin{itemize} \setlength\itemsep{1.5em} \item Low-level pointers. \begin{Verbatim}[fontsize=\scriptsize] (l, p) => (l, offset(p)) \end{Verbatim} \item Low-level heap operations. \begin{Verbatim}[fontsize=\scriptsize] struct_access (l, p) f => struct_ll_access (l, offset(p)) field_offset(f) \end{Verbatim} \item A special kind of value that consists of a list of words. \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{High-level to low-level transformation - memory} \begin{center} \begin{figure} \includegraphics[scale=0.31]{images/high_level_memory} \caption{High-level memory.} \end{figure} \begin{figure} \includegraphics[scale=0.31]{images/low_level_memory} \caption{Low-level memory.} \end{figure} \end{center} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{High-level to low-level transformation - theorem} The goal is to prove: \begin{coq} Theorem red_tr_warmup : forall C LLC T m a v t' m' v', red C LLC empty_stack empty_state t m v -> typing C empty_gamma empty_phi t T -> ~ is_error v -> ll_typdefctx_ok C LLC -> tr_trm C LLC a t t' -> wf_typdefctx C -> wf_trm C t -> wf_typ C T -> exists v' m', tr_state C LLC a m m' /\ tr_val C LLC a v v' /\ red C LLC empty_stack empty_state t' m' v'. \end{coq} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Project extent} Accomplished goals: \\[0.7em] \begin{itemize} \setlength\itemsep{1.2em} \item Defined a high-level language convenient to argue about data-layout transformations. 
\pause \item Found a way to connect it to realistic low-level semantics. \pause \item Proved the correctness of: \begin{itemize} \item Field grouping. \item Array tiling. \item AoS to SoA. \end{itemize} \end{itemize} \bigskip \pause Some statistics: \begin{center} \begin{tabular}{ccc} lines of spec & lines of proof & lines of comments \\ 2721 & 3113 & 707 \end{tabular} \end{center} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{Future work} Next steps: \\[0.7em] \begin{itemize} \setlength\itemsep{1.5em} \item Realizations of the transformations as functions. %\pause \item Some arithmetic results in the tiling and low-level transformations. %\pause \item Work on loops and add loop transformations.%\pause \item Connect the low-level language with CompCert (at which level?) % C or Clight? \end{itemize} \end{frame} %------------------------------------------------------------------------------ \begin{frame}[fragile] \frametitle{} \Ce{\Large Thanks!} \end{frame} %------------------------------------------------------------------------------ %\frame{\titlepage} %****************************************************************************** %****************************************************************************** %****************************************************************************** \end{document}
{ "alphanum_fraction": 0.5766861475, "avg_line_length": 24.9251968504, "ext": "tex", "hexsha": "c71ace029a7abfd78cd0d37365c19871c0029d12", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2afbe24651f5af8e4fa545ff1afb454be3ce7f9e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ramonfmir/verified_transfo", "max_forks_repo_path": "talk/camus.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2afbe24651f5af8e4fa545ff1afb454be3ce7f9e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ramonfmir/verified_transfo", "max_issues_repo_path": "talk/camus.tex", "max_line_length": 117, "max_stars_count": null, "max_stars_repo_head_hexsha": "2afbe24651f5af8e4fa545ff1afb454be3ce7f9e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ramonfmir/verified_transfo", "max_stars_repo_path": "talk/camus.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5308, "size": 18993 }
\section{Problem in Neural Network}
\label{sec:Problem}
Though neural networks are able to achieve numerous back-breaking tasks, it is fairly challenging to train a neural network well. Among all the difficulties, gradient explosion and vanishing are the most famous ones. In this section, we will discuss a few problems in training a neural network.
\subsection{Explosion and Vanishing}
Intuitively, using computations like $ Z^{[l]} = W^{[l]}A^{[l-1]} $ and $ dA^{[L-1]} = W^{[L-1]T}dZ^{[L]} $ in FP and BP, if $ W $ is much larger than 1 or much smaller than 1, then after several multiplications in a deep neural network the final result $ Z^{[l]} = \prod\limits_{l=1}^LW^{[l]}A^{[0]} $ will gradually tend to infinity or zero. Here we further illustrate explosion and vanishing with a simple problem from \parencite{shamir2018exponential}:
\begin{exmp}
	Suppose there is only one parameter $ w $. For a data set $\{x,\ y\}$ and a special architecture, the cost function is:
	\begin{equation}
	J(w) = (w^7 + 1)^2
	\end{equation}
	The plot of $J(w)$ is illustrated in \autoref{fig:exmp1}. The global minimum is $ w^* = -1 $, and a basin exists in the region $ [-1.05,\ -0.95] $. But in the regions $ [-\infty,\ -1.05] $ and $ [1.05,\ \infty] $, $ J(w) $ is extremely steep, which corresponds to explosion, and in the region $ [-0.95,\ 1.05] $, $ J(w) $ is unusually flat, which corresponds to vanishing. If the initial value $ w_0 $ falls into the basin, GD will converge quickly. If not, say $ w_0 = 0.5 $, then it takes painfully long to traverse the plateau with a tiny update in each iteration.
\end{exmp}
\begin{figure}[H]
	\centering
	\includegraphics[width=10cm]{exmp1}
	\caption{\label{fig:exmp1}Plot of $ J(w) = (w^7 + 1)^2 $}
\end{figure}
\subsection{Saddle Points in Non-convex Optimization}
Though there seems to be no rigorous proof of the prevalence of saddle points in high-dimensional non-convex optimization, one line of evidence comes, astonishingly, from statistical physics. It suggests that among a myriad of critical points, most are likely to be saddle points. To understand the landscape of neural network optimization, we first introduce the Hessian matrix to pave the way for further discussion.
\subsubsection{Hessian Matrix}
\begin{defn}
	\label{def:Hessian}
	Suppose $ f: \mathbb{R}^n\rightarrow \mathbb{R} $ and all second partial derivatives of $ f $ exist and are continuous over the domain. The Hessian matrix $ \mathbf{H} \in \mathbb{R}^{n\times n} $ of $ f $ is defined and arranged as follows:
	\begin{equation}
	\mathbf{H} = \begin{pmatrix} \frac{\partial^2f}{\partial x_1^2} & \frac{\partial^2f}{\partial x_1\partial x_2} & \cdots & \frac{\partial^2f}{\partial x_1\partial x_n} \\ \frac{\partial^2f}{\partial x_2\partial x_1} & \frac{\partial^2f}{\partial x_2^2} & \cdots & \frac{\partial^2f}{\partial x_2\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2f}{\partial x_n\partial x_1} & \frac{\partial^2f}{\partial x_n\partial x_2} & \cdots & \frac{\partial^2f}{\partial x_n^2} \end{pmatrix}
	\end{equation}
	Since the second partial derivatives are continuous, the order of differentiation does not matter. Thus, $ \mathbf{H} $ is a symmetric matrix.
\end{defn}
\par Since the Hessian matrix is symmetric, its eigenvalues are real numbers, i.e. $ \lambda_i \in \mathbb{R} $. For a cost function $ J(\theta) $, where $ \theta \in \mathbb{R}^n $ represents the high-dimensional variable, the properties of critical points\footnote{Critical points are points $ \theta $ where $ \nabla J(\theta) = 0 $.
} can be described by the eigenvalues of the Hessian.
\begin{enumerate}
	\item If $ \forall \lambda_i \in \mathbb{R}^+ $, i.e. the Hessian is positive definite, then the critical point is a local minimum.
	\item If $ \forall \lambda_i \in \mathbb{R}^- $, i.e. the Hessian is negative definite, then the critical point is a local maximum.
	\item If $ \forall \lambda_i \neq 0 $, some are positive and others are negative, then the critical point is a (horse) saddle point with min-max structure.
	\item If the Hessian matrix is singular, i.e. $ |\mathbf{H}| = 0 $, then the critical point is called a degenerate critical point and it is a monkey saddle point.
\end{enumerate}
\begin{figure}[ht]
	\centering
	\subfigure[local minimum]{
		\begin{minipage}[t]{0.5\linewidth}
			\centering
			\includegraphics[width=6cm]{minimum.png}
		\end{minipage}%
	}%
	\subfigure[local maximum]{
		\begin{minipage}[t]{0.5\linewidth}
			\centering
			\includegraphics[width=6cm]{maximum.png}
		\end{minipage}%
	}%
	\subfigure[horse saddle point]{
		\begin{minipage}[t]{0.5\linewidth}
			\centering
			\includegraphics[width=6cm]{horse.png}
		\end{minipage}%
	}%
	\subfigure[monkey saddle point]{
		\begin{minipage}[t]{0.5\linewidth}
			\centering
			\includegraphics[width=6cm]{monkey.png}
		\end{minipage}%
	}%
	\centering
	\caption{\label{fig:activation}Different types of critical points}
\end{figure}
\subsubsection{The Prevalence of Saddle Points}
\label{sssec:Saddle}
In early versions of neural network algorithms, parameters are initialized with standard Gaussian noise. The article by Bray et al. \parencite{bray2007statistics} computes the average number of critical points of a Gaussian random field on a high-dimensional space. They derive the distribution of critical points through a relation between $ \alpha $ and $ \epsilon $, where $ \alpha $ is the fraction of negative eigenvalues of the Hessian at a critical point and $ \epsilon $ is the error at that critical point. Consider a Gaussian field $ \phi $ defined over a volume $ V $ of an $N$-dimensional Euclidean space. The normalized density of eigenvalues is defined as:
\begin{equation}
\rho(\lambda) = \frac{1}{N}\sum\limits_{i=1}^N\delta(\lambda-\lambda_i)
\end{equation}
The average of the eigenvalues is defined as:
\begin{equation}
\bar{\lambda}=\int \lambda\rho(\lambda) d\lambda
\end{equation}
And it can be computed via:
\begin{equation}
\bar{\lambda}(\epsilon)=2 \frac{f^{\prime}(0) \epsilon}{f^{\prime \prime}(0) P}
\end{equation}
where $ f $ denotes the correlation function of the Gaussian field $ \phi $ and $ P $ is defined as follows:
\begin{equation}
P=\frac{f^{\prime}(0)^{2}}{f^{\prime \prime}(0)^{2}}+\frac{f(0)}{f^{\prime \prime}(0)}\left(1-\frac{2}{N}\right) \approx \frac{f^{\prime}(0)^{2}}{f^{\prime \prime}(0)^{2}}+\frac{f(0)}{f^{\prime \prime}(0)}
\end{equation}
Finally, the relation between $ \alpha $ and $ \epsilon $ is given by:
\begin{equation}
\frac{2}{\pi} \int_{\frac{\bar{\lambda}}{2 \sqrt{f^{\prime \prime}(0)}}}^{1} (1-u^{2})^{\frac{1}{2}}du=\alpha
\end{equation}
where $ u $ is just a temporary variable in the integration.
\par In the $ \epsilon-\alpha $ graph, there is a global minimum at $ \alpha = 0,\ \epsilon = \epsilon_{min} $ and a global maximum at $ \alpha = 1,\ \epsilon = \epsilon_{max} $. Other critical points are located on a monotonically increasing curve as $ \alpha $ ranges from 0 to 1. Experiments validating this proposal were carried out by Dauphin et al. \parencite{dauphin2014identifying}; see \autoref{fig:aevalidation}. This implies that local minima, i.e.
critical points with $ \alpha\rightarrow 0 $, are closer in error to the global minimum, while the majority of critical points with high error are likely to have a large fraction of negative eigenvalues; that is, most critical points are saddle points.
\begin{figure}[H]
	\centering
	\includegraphics[width=14cm]{aevalidation}
	\caption{\label{fig:aevalidation}(a) and (c) show how critical points are distributed in the $ \epsilon-\alpha $ plane. (b) and (d) plot the distributions of eigenvalues of the Hessian at three different critical points.}
\end{figure}
\subsection{Learning Rate Selection}
The choice of stepsize has long been a puzzle to practitioners in neural networks. In fact, it is not peculiar to GD but a common problem in optimization. If the learning rate is too large, with high probability GD will diverge. If the learning rate is too small, the time GD takes to converge is unbearable. In practice, the most widely used scheme is setting the learning rate to a small constant like 0.1 or 0.01. The drawback of such a scheme is obvious: it does not guarantee the convergence of GD.
\par First we recall the general form of the update, where $ d_k $ denotes the gradient direction and $ \alpha_k $ the stepsize:
\begin{align*}
x_{k+1} = x_{k} - \alpha_k d_k
\end{align*}
We list other commonly used schemes in the following.
\subsubsection{Goldstein Rule}
The Goldstein rule is the first effective rule for stepsize selection in general optimization problems that does not depend on line minimization. Let $ \sigma \in (0,\ 0.5) $ and let $ \alpha_k $ satisfy:
\begin{equation}
\sigma \leq \frac{f\left(x^{k}\right)-f\left(x^{k}-\alpha^{k} d^{k}\right)}{\alpha^{k} ||d^{k}||^2} \leq 1-\sigma
\end{equation}
\subsubsection{Armijo Rule}
\label{sssec:Armijo}
It is natural to devise a scheme that successively reduces $ \alpha $ when $ f(x_{k+1}) < f(x_k) $ is not satisfied. The Armijo rule is based exactly on this idea. Here, let $ s > 0,\ \beta,\ \sigma \in (0,\ 1),\ \alpha_k=\beta^{m_k}s $, where $ m_k $ is the first nonnegative integer that satisfies:
\begin{equation}
f(x_{k}) - f(x_{k} - \beta^{m_k}sd_k) \geq \sigma\beta^{m_k}s ||d_k||^2
\end{equation}
\par Usually $ \sigma \in [10^{-5},\ 10^{-1}],\ \beta \in [0.1,\ 0.5],\ s=1 $.
\subsubsection{Diminishing Learning Rate}
During the process of optimization, we let $ \alpha_k\rightarrow 0 $. The primary downside of such a scheme is that after many iterations the stepsize may become too small to make any real progress. So an additional requirement is added:
\begin{equation}
\sum\limits_{k=0}^\infty\alpha_k = \infty
\end{equation}
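\par For instance, the commonly used diminishing stepsize $ \alpha_k = \frac{\alpha_0}{k+1} $ meets both requirements: it tends to zero while its partial sums grow like the harmonic series and therefore diverge.
\par To make the Armijo rule of \autoref{sssec:Armijo} concrete, the following minimal Python sketch implements the backtracking search for $ m_k $. It assumes the update $ x_{k+1} = x_k - \alpha_k d_k $ with $ d_k = \nabla J(x_k) $ as above; the routines \texttt{J} and \texttt{gradJ}, which evaluate the cost and its gradient, are placeholders to be supplied by the caller.
\begin{verbatim}
import numpy as np

def armijo_stepsize(J, gradJ, x, s=1.0, beta=0.5, sigma=1e-4, max_m=50):
    # Backtracking: return alpha = beta**m * s for the first m >= 0 with
    #   J(x) - J(x - alpha*d) >= sigma * alpha * ||d||^2,
    # where d = gradJ(x) is the gradient used in the update x - alpha*d.
    # The search is capped at max_m reductions as a practical safeguard.
    d = gradJ(x)
    dd = float(np.dot(d, d))
    alpha = s
    for _ in range(max_m):
        if J(x) - J(x - alpha * d) >= sigma * alpha * dd:
            break
        alpha *= beta
    return alpha
\end{verbatim}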
{ "alphanum_fraction": 0.6779177327, "avg_line_length": 48.3904761905, "ext": "tex", "hexsha": "b495fe75d80ac8d3bb82717e87834c7190884e06", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ae11b768aeffe09ddd71b082dfd27c15c02d9c2d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "xuebashuoge/Neural-Network-Overview", "max_forks_repo_path": "body/undergraduate/final/section/ProblemInNN.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ae11b768aeffe09ddd71b082dfd27c15c02d9c2d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "xuebashuoge/Neural-Network-Overview", "max_issues_repo_path": "body/undergraduate/final/section/ProblemInNN.tex", "max_line_length": 210, "max_stars_count": null, "max_stars_repo_head_hexsha": "ae11b768aeffe09ddd71b082dfd27c15c02d9c2d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xuebashuoge/Neural-Network-Overview", "max_stars_repo_path": "body/undergraduate/final/section/ProblemInNN.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2996, "size": 10162 }
\section{Financial Report}
The financial position of the SRCF remains reasonable, though a loss was made in the 2009-2010 academic year due to large hardware purchases.

However, we did not submit our accounts before the 31\textsuperscript{st} December deadline as this was not done during term\footnote{there were assurances that this was under control}. It was not possible to complete this process outside of term since our bank had failed to correctly set up internet banking and since the box containing the bank statements was in the UK while the Junior Treasurer was in Japan. This meant that it was not possible to sign off to say that we had a particular quantity of money as we could not be sure that our electronic records were correct.

At the beginning of January we owed various people in excess of £1000 (though we had the money to pay them) and had failed to submit our accounts. We were also missing 8 of our bank statements and had two unopened letters from the bank.

At the beginning of this term Daniel was given the treasurer box and a meeting of the committee voted to give the Chair emergency powers\footnote{These powers expire at the AGM.} to fix the accounts and (if necessary\footnote{we haven't been informed that we de-registered or that we re-registered but if we had de-registered we should now be re-registered}) get us re-registered as a University Society (Elliott was absent and Daniel abstained). The Chair was able to audit the accounts, balance the books and ensure that all the receipts were numbered such that there is an auditable paper trail for all income and expenditure. Fortunately, all our accounts were in order, if slightly disorganised. The Chair wrote cheques to all those who were owed money by the SRCF and the Treasurer signed them.

Visits to the bank revealed that the signatories on the account were incorrect, as some previous committee had incorrectly filled in the Society Mandate form, removing the Senior Treasurer, and a new form had not been filed after the EGM to add the new Chair as a signatory and remove the old Chair. Fortunately this should now all have been resolved and our documentation on this has been improved in an effort to prevent this occurring in future.

\subsection{Summary of accounts}
A summary of the accounts will be circulated on paper at the AGM. We made a loss in 2009-2010 and a smaller profit so far in 2010-2011.

\subsection{Future}
Since the SRCF continues to grow and now has more hardware than at any previous point, the SRCF will need to increase the size of its insurance fund. Since the SRCF relies on donations, donations would be appreciated (\url{https://www.srcf.ucam.org/donate}).

\subsection{Thanks}
Particular thanks are due to CAUV, CUHaH, EBC and Romance.ucam.org for donations as societies, to Third Light for their large donation and to Open Market (formerly MX Telecom) who sponsored our garden party. Annabel Banks, Chris Hinde and various other individuals who wish to remain anonymous have also made donations this year. Kristian Glass and Malcolm Scott have been owed large sums of money for many months by the SRCF over the last year, mainly due to our not getting around to writing the cheques, and the SRCF is very grateful to them for bearing with us.
{ "alphanum_fraction": 0.8052227343, "avg_line_length": 141.5217391304, "ext": "tex", "hexsha": "137f810c81d667af4de46b0a96c265ea1b317948", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-09-13T16:32:23.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-29T16:52:01.000Z", "max_forks_repo_head_hexsha": "1081e0bc0f0ac9519eb3bb1a3cf5c8f9778c8572", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "CHTJonas/srcf-web", "max_forks_repo_path": "minutes/agm2011-02-03/Treasurer.tex", "max_issues_count": 32, "max_issues_repo_head_hexsha": "1081e0bc0f0ac9519eb3bb1a3cf5c8f9778c8572", "max_issues_repo_issues_event_max_datetime": "2022-03-29T13:07:37.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-24T20:27:52.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "CHTJonas/srcf-web", "max_issues_repo_path": "minutes/agm2011-02-03/Treasurer.tex", "max_line_length": 796, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1081e0bc0f0ac9519eb3bb1a3cf5c8f9778c8572", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "CHTJonas/srcf-web", "max_stars_repo_path": "minutes/agm2011-02-03/Treasurer.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:42:11.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-24T19:42:11.000Z", "num_tokens": 695, "size": 3255 }
\section{Unsupervised Learning} \subsection{Density Estimation} \subsubsection{Mixture Models} \subsubsection{Expectation Maximisation (EM)} \subsubsection{Histogram} \subsubsection{Kernel Density Estimator} \subsubsection{K-Nearest Neighbours Estimator} \subsection{Clustering} \subsubsection{K-Means} \subsubsection{Hierarchical} \subsubsection{Mean Shift} \subsection{Dimensionality Reduction} \subsubsection{Subset Selection} \subsubsection{Principal Component Analysis} \subsubsection{Multidimensional Scaling} \subsubsection{Fisher's LDA} \subsubsection{Isomap} \subsubsection{t-distributed Stochastic Neighbour Embedding}
{ "alphanum_fraction": 0.8248062016, "avg_line_length": 18.4285714286, "ext": "tex", "hexsha": "bcd343899da4bd9d52b1a4cad82371f39b5f3e3f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mcoot/CourseNotes", "max_forks_repo_path": "COMP4702/unsupervised.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mcoot/CourseNotes", "max_issues_repo_path": "COMP4702/unsupervised.tex", "max_line_length": 60, "max_stars_count": null, "max_stars_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mcoot/CourseNotes", "max_stars_repo_path": "COMP4702/unsupervised.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 153, "size": 645 }
\documentclass[conc-doc]{subfiles}
\begin{document}
	\chapter[Delete]{Delete}

Concurnas provides a mechanism whereby a local variable can be removed from scope via the \lstinline{del} keyword. This has the additional side effect, for non-array Object types, of invoking the delete method on them if one is defined. This is incredibly useful for performing resource management and is used in supporting the Concurnas GPU parallel computing framework as well as the off-heap memory frameworks. Additionally, Concurnas offers first-class citizen support for the \lstinline{del} keyword when applied to maps and lists.

\section{Removing values from maps}
Values can be removed from a map as follows:

\begin{lstlisting}
mymap = {taste -> 10}
del mymap['taste']//value corresponding to 'taste' is deleted
\end{lstlisting}

This saves us from having to write: \lstinline{mymap.remove('taste')}

\section{Removing values from lists}
Values can be removed from a list as follows:

\begin{lstlisting}
mylist = [5, 34, 2, 5, 11]
del mylist[1]//the second value, '34', is removed from the list
\end{lstlisting}

This saves us from having to write: \lstinline{mylist.remove(1)}

\section{Deleting Objects}
Calling the del operator on a variable in Concurnas does two things:

\begin{enumerate}
	\item The variable is removed from scope
	\item The delete method is called on the object pointed to by the variable
\end{enumerate}

Here is an example of this in action:

\begin{lstlisting}
deleteCalled = false

class DeleteMe{
	override delete() void => deleteCalled = true
}

todel = DeleteMe()

del todel //todel is now out of scope and cannot be referenced

assert deleteCalled
\end{lstlisting}

Overriding the \lstinline{delete} method as above allows us to implement functionality to be invoked upon the \lstinline{del} operator being applied to the object (or the \lstinline{delete} method being called directly). For instance, closing or otherwise managing resources.

\section{@DeleteOnUnusedReturn Annotation}
The \lstinline{@com.concurnas.lang.DeleteOnUnusedReturn} annotation can be used to denote that the delete method should be invoked on the return value of a method or function if it is unused by the caller (i.e. popped off the stack). This affords a degree of flexibility in API design around objects which require resource management, as one does not need to worry about the caller correctly dealing with a return value which they may consider to be optional. This is used heavily in the Concurnas GPU parallel computing framework and is useful with off-heap memory management. For example:

\begin{lstlisting}
from com.concurnas.lang import DeleteOnUnusedReturn

@DeleteOnUnusedReturn
class ClassWithResource{
	def delete(){
		//...
	}
}

def doWorkAndGetClassWithResouce(){
	ret = ClassWithResource()
	//do some work here
	ret
}

doWorkAndGetClassWithResouce()//the delete method on the returned ClassWithResource object will be called
\end{lstlisting}

\begin{sloppypar}
	In the above example the delete method will be called on the returned \lstinline{ClassWithResource} object as it is unused by the caller. If it were used by the caller (e.g. assigned to a value, or used as part of a nested method or function invocation) then the delete method would not be called.
\end{sloppypar}

Notes:
\begin{enumerate}
	\item The delete method will not be called if the returned object is null.
	\item If a method is decorated with \lstinline{@DeleteOnUnusedReturn} then instances in all subclasses will also inherit this decoration.
\item The annotation is not applied to refs to methods or functions which have been decorated with it. \end{enumerate} Additionally, for refs which are returned from a function or method and are immediately extracted by the caller, these will also have the delete method called on the ref if the \lstinline{@DeleteOnUnusedReturn} annotation is used. For example: \begin{lstlisting} from com.concurnas.lang import DeleteOnUnusedReturn @DeleteOnUnusedReturn class ClassWithResource{ def delete(){ //... } } def doWorkAndGetClassWithResouce(){ ret = ClassWithResource() //do some work here ret://returned as a local ref } got = doWorkAndGetClassWithResouce()//the delete method on the returned Local ref holding the ClassWithResource object will be called (but not on the ClassWithResource itself) \end{lstlisting} \end{document}
{ "alphanum_fraction": 0.7881627057, "avg_line_length": 41.2830188679, "ext": "tex", "hexsha": "c0a8289fb869df5ba74b7d03aa313dfc2803c87e", "lang": "TeX", "max_forks_count": 21, "max_forks_repo_forks_event_max_datetime": "2022-03-17T06:35:22.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-25T19:17:24.000Z", "max_forks_repo_head_hexsha": "6229ccf610d5db3eca4ebcf85c04b37fe44fcd7d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "michaeldesu/Concurnas", "max_forks_repo_path": "book/delete.tex", "max_issues_count": 68, "max_issues_repo_head_hexsha": "6229ccf610d5db3eca4ebcf85c04b37fe44fcd7d", "max_issues_repo_issues_event_max_datetime": "2022-01-20T13:08:56.000Z", "max_issues_repo_issues_event_min_datetime": "2019-12-10T01:37:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "michaeldesu/Concurnas", "max_issues_repo_path": "book/delete.tex", "max_line_length": 592, "max_stars_count": 201, "max_stars_repo_head_hexsha": "6229ccf610d5db3eca4ebcf85c04b37fe44fcd7d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "michaeldesu/Concurnas", "max_stars_repo_path": "book/delete.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-20T12:17:23.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-04T23:19:26.000Z", "num_tokens": 1035, "size": 4376 }
% lecture header include \usepackage{lplfitch,amsmath} \usepackage{qtree,hyperref} \usepackage{pgf,amssymb} \author{Richard Zach} \institute{Department of Philosophy\\ University of Calgary\\ \href{http://ucalgary.ca/rzach/279}{ucalgary.ca/rzach/279}} \definecolor{uofcred}{RGB}{227,39,38} \definecolor{uofcyellow}{RGB}{255,210,0} \DeclareSymbolFont{symbolsC}{U}{txsyc}{m}{n} \DeclareMathSymbol{\strictif}{\mathrel}{symbolsC}{74} \DeclareMathSymbol{\boxright}{\mathrel}{symbolsC}{128} \let\IFF\Leftrightarrow \let\iff\leftrightarrow \let\impl\to \def\T{{\color{green}\begin{colormixin}{25!black}\text{T}\end{colormixin}}} \def\F{{\color{red}\begin{colormixin}{25!black}\text{F}\end{colormixin}}} \long\def\subsec#1#2{\subsection{#1}\frame{\frametitle{#1} #2}} \def\bit{\begin{itemize}[<1->]} \def\eit{\end{itemize}} \def\ben{\begin{enumerate}[<1->]} \def\een{\end{enumerate}} \makeatletter\let\@makefnmark\noindent\makeatother %\setbeamercolor{footnote}{fg=black!70} \def\foot#1{\footnotetext{\color{black!70}#1}} \def\deemph#1{{\color{black!70}#1}} \let\phi\varphi \setbeamertemplate{theorems}[numbered] %\useinnertheme{circles} \setbeamertemplate{itemize subitems}[triangle] \renewcommand{\beamertemplatetransparentcovereddynamic}{ \beamersetuncovermixins {\opaqueness<1>{50}\opaqueness<2>{30}\opaqueness<3>{15}\opaqueness<4->{5}}% {\opaqueness<1>{50}\opaqueness<2>{30}\opaqueness<3>{15}\opaqueness<4->{5}}} %\beamertemplatetransparentcovereddynamic \defbeamertemplate*{footline}{my theme} {% \leavevmode% \hbox{\begin{beamercolorbox}[wd=.5\paperwidth,ht=2.5ex,dp=1.125ex,leftskip=.3cm,rightskip=.3cm]{author in head/foot}% \insertframenumber/\inserttotalframenumber \hfil \usebeamerfont{author in head/foot}\insertshortauthor \end{beamercolorbox}% \begin{beamercolorbox}[wd=.5\paperwidth,ht=2.5ex,dp=1.125ex,leftskip=.3cm,rightskip=.3cm plus1fil]{title in head/foot}% \usebeamerfont{title in head/foot}Logic I F13---\insertshorttitle---\insertdate \end{beamercolorbox}}% \vskip0pt% } \begin{document} \setlength{\fitchargwidth}{7em} \setlength{\fitchprfwidth}{7em} \frame{\frametitle{\insertshorttitle\ (\insertdate)} \tableofcontents[hidesubsections] }
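% ----------------------------------------------------------------------
% Descriptive notes on a few of the custom macros defined in this header:
%   \T, \F                truth values T and F, typeset in muted green/red
%   \deemph{...}          prints its argument in a lighter grey (de-emphasis)
%   \foot{...}            greyed footnote text without a footnote mark
%   \impl, \iff, \IFF     shorthands for \to, \leftrightarrow, \Leftrightarrow
%   \bit/\eit, \ben/\een  begin/end itemize and enumerate with <1-> overlays
% ----------------------------------------------------------------------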
{ "alphanum_fraction": 0.7204019222, "avg_line_length": 29.3461538462, "ext": "tex", "hexsha": "95f3a99b36e45af9e2c7ee2dd1f0d82c25a732f3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "rzach/phil279", "max_forks_repo_path": "header.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "rzach/phil279", "max_issues_repo_path": "header.tex", "max_line_length": 121, "max_stars_count": 5, "max_stars_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "rzach/phil279", "max_stars_repo_path": "header.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-21T10:48:55.000Z", "max_stars_repo_stars_event_min_datetime": "2015-09-23T13:42:54.000Z", "num_tokens": 813, "size": 2289 }
% Template - https://sanskrit.uohyd.ac.in/18WSC/Style_files/CS_and_DH.tex
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\documentclass[11pt]{article}
\usepackage{scl}
\usepackage{times}
\usepackage{url}
\usepackage{latexsym}
\usepackage{lineno}
\usepackage{fontspec, xunicode, xltxtra}
\newfontfamily\skt[Script=Devanagari]{Sanskrit 2003}
\setmonofont{Sanskrit 2003}

\title{Jyotiṣa python library and event repositories}

\author{Karthik Raman \\
  IIT, Chennai \\
  {\tt kraman@iitm·ac·in} \\\And
  Vishvas Vasuki \\
  Dyugaṅgā, Beṅgaḷūru \\
  {\tt https://sanskrit.github.io/groups/dyuganga/} \\}

\date{}

\begin{document}
\maketitle
%\linenumbers

\begin{abstract}
In this paper, we introduce a Python library to facilitate Jyotiṣa computations, as well as associated but independent festival and event repositories. This library may be used to compute highly customized Hindu calendars, and has been successfully used to determine the dates of historical events.
\end{abstract}

\section{Introduction}
Jyotisha \cite{jyotisha_py} is an open-source Python package for performing panchāṅga (traditional Vedic astronomical / astrological) calculations and producing calendars in various formats. It is backed by large event databases, stored in the open-source ``adyatithi'' repository \cite{adyatithi}. These event databases may be used with other Jyotiṣa libraries as well.

\section{Basic computations}

\section{Calendar generation}

\section{Event databases}

% include your own bib file like this:
\bibliographystyle{acl}
\bibliography{jyotisha_py__fest_db}

\end{document}
{ "alphanum_fraction": 0.7823383085, "avg_line_length": 32.16, "ext": "tex", "hexsha": "fb891b492b51fafcc57bb9850bb0cf25e82f7ae7", "lang": "TeX", "max_forks_count": 23, "max_forks_repo_forks_event_max_datetime": "2020-11-14T19:41:58.000Z", "max_forks_repo_forks_event_min_datetime": "2017-08-27T11:54:41.000Z", "max_forks_repo_head_hexsha": "18ecf246dc76d54f6d4ed77fb6d656030887b046", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sanskrit-coders/jyotisha", "max_forks_repo_path": "hugo-source/static/articles/wsc2022/jyotisha_py__fest_db.tex", "max_issues_count": 71, "max_issues_repo_head_hexsha": "18ecf246dc76d54f6d4ed77fb6d656030887b046", "max_issues_repo_issues_event_max_datetime": "2020-12-11T01:16:47.000Z", "max_issues_repo_issues_event_min_datetime": "2017-08-27T13:54:06.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sanskrit-coders/jyotisha", "max_issues_repo_path": "hugo-source/static/articles/wsc2022/jyotisha_py__fest_db.tex", "max_line_length": 359, "max_stars_count": 40, "max_stars_repo_head_hexsha": "18ecf246dc76d54f6d4ed77fb6d656030887b046", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sanskrit-coders/jyotisha", "max_stars_repo_path": "hugo-source/static/articles/wsc2022/jyotisha_py__fest_db.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-30T03:47:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-01T04:22:35.000Z", "num_tokens": 468, "size": 1608 }
\documentclass[mla8]{mla} \title{Sample MLA Document} \author{John Doe} \professor{Dr.\ Suzie Que} \course{\LaTeX\ 101} \date{\mladate} % see docs for `\mladate' % The .bib file (explained later) must be included in the preamble \addbibresource{mla-example.bib} \begin{document} \begin{paper} This is an example document using ``mla.cls''. The header is automatically printed upon using the ``paper'' class, which is why there is no ``\textbackslash{}maketitle''. \section{Professors who prefer sections} Sometimes, research papers can become unmanageably lengthy. In that case, section headings can help divide up the ideas to make it more accessible to the reader. Though this paper is short, section headings are employed as an example of the ``mla'' class' capabilities. Some professors may explicitly require or denounce use of headings. Dr.\ Suzie Que of Anytown, PA requires they be used for anything longer than five pages: \begin{blockquote} John---so help me God---if you turn in another twenty-page research paper with no logical breaks I will hang you at the stake. Even though the MLA style guide doesn't say anything about section headings, they're not actually prohibited. So, if you turn in \emph{anything} longer than five pages to me and there isn't a \emph{single} break or section heading, I will dock your grade to an F. Capisce? \cite{que2019} \end{blockquote} Despite her language, she does have a point to say. \subsection{Subsections} Alongside regular top-level sections, one can use ``\textbackslash{}subsection'' commands too\endnote{Section commands in ``mla.cls'' work identical to those of the ``article'' class.}. \section{Lists} Vertical lists are a rarity in MLA format, but if one so pleases, they can be used. The ``itemize'', ``enumerate'' and ``description'' lists work just as expected, even with sublists. \begin{itemize} \item A bogus item \item Lorem ipsum dolor sit amet. This item has a bunch of text just so it covers more than one line in the paper and shows proper indentation. \item Last item! \begin{enumerate} \item Just kidding; there's a subitem. And it's a number! \end{enumerate} \item Okay, now it's the last item. \end{itemize} \section{Figures} On rare occasions, you might have to use figures or tables in your paper. Good news is the ``figure'' and ``table'' environments work exactly as expected! Just make sure to use ``\textbackslash{}begin\{figure\}[H]'' if you want the image to stay exactly where you put it. \begin{figure}[H] \includegraphics[width=0.5\linewidth]{mla-example-image} \caption{A scene from atop Spruce Knob, West Virginia} \end{figure} And yes, I shamelessly used my own image. \section{Using endnotes} As one may notice, the above subsection used an endnote. These can simply be cited with ``Yada yada text\textbackslash{}endnote\{more info\ldots\}.'' Endnotes can be easily printed in correct format by calling ``\textbackslash{}printendnotes'' within the ``notes'' environment. \section{Using bibliographies} Dr.\ Suzie Que was cited in the above blockquote. The ins-and-outs of ``biblatex'' will not be explained in this document, so please refer to online documentation such as the ``BibLaTeX Cheat Sheet''. Just as with the endnotes, the bibliography can be easily printed in correct format by calling ``\textbackslash{}printbibliography[heading=none]'' within the ``workscited'' environment. (The ``heading=none'' part is important; the ``workscited'' environment already prints one.) 
\end{paper} \begin{notes} \printendnotes \end{notes} \begin{workscited} \printbibliography[heading=none] \end{workscited} \end{document}
{ "alphanum_fraction": 0.761394838, "avg_line_length": 30.35, "ext": "tex", "hexsha": "3d8cc8631fccca33fbd3a63cb78bca0d425423ee", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "09b2facfc15ebfe36040ab73cf8f0ae55fed58e4", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "ssterling/mla.cls", "max_forks_repo_path": "mla-example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "09b2facfc15ebfe36040ab73cf8f0ae55fed58e4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "ssterling/mla.cls", "max_issues_repo_path": "mla-example.tex", "max_line_length": 71, "max_stars_count": 1, "max_stars_repo_head_hexsha": "09b2facfc15ebfe36040ab73cf8f0ae55fed58e4", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "ssterling/mla.cls", "max_stars_repo_path": "mla-example.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-28T04:52:19.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-28T04:52:19.000Z", "num_tokens": 950, "size": 3642 }
\section{The \searchn algorithm}
\Label{sec:searchn}

The \searchn algorithm in the \cxx Standard Library \cite[\S 28.5.13]{cxx-17-draft} finds the first place where a given value starts to occur a given number of times in a given sequence. For our purposes we have modified the generic implementation to that of an array of type \valuetype. The signature now reads:

\begin{lstlisting}[style = acsl-block]
size_type search_n(const value_type* a, size_type n, size_type p, value_type v);
\end{lstlisting}

Note the similarity to the signature of \search (\S\ref{sec:search}). The only difference is that \inl{v} is now a single value rather than an array.

\begin{figure}[hbt]
  \centering
  \includegraphics[width=0.59\textwidth]{Figures/search_n.pdf}
  \caption{\Label{fig:searchn} Searching for the first occurrence of a given constant sequence in \inl{a[0..n-1]}}
\end{figure}
\FloatBarrier

The function \searchn returns the first index \inl{s} of the array \inl{a} where the condition \inl{a[s+k] == v} holds for each index~\inl{k} with \inl{0 <= k < p} (see Figure~\ref{fig:searchn}). If no such index exists, then \searchn returns the length \inl{n} of the array \inl{a}.

\subsection{The predicate \HasConstantSubRange}

Our specification of \searchn starts by introducing the predicate \logicref{HasConstantSubRange}.

\input{Listings/HasConstantSubRange.acsl.tex}

This predicate formalizes that the sequence~\inl{a} of length \inl{n} contains a subsequence of \inl{p} times the value~\inl{v}. It thereby reuses the predicate \logicref{AllEqual}. Similar to the predicate \logicref{HasSubRange}, in order to contain \inl{p} repetitions, the size of the array \inl{a[0..n-1]} must be at least that large; this is what lemma \logicref{HasConstantSubRangeSizes} says.

\subsection{Formal specification of \searchn}

Like for \specref{search}, our specification of \specref{searchn} is very similar to that of \specref{findii}.

\input{Listings/search_n.h.tex}

We again use two behaviors to capture the essential aspects of \searchn.

\begin{itemize}
\item
  The behavior \inl{has_match} applies if the sequence \inl{a} contains a \inl{p}-fold repetition of \inl{v}. We express this condition with \inl{assumes} by using the predicate \logicref{HasConstantSubRange}.

  The \inl{result} ensures clause of behavior \inl{has_match} indicates that the return value must be in the range~\inl{[0..n-p]}. The \inl{match} ensures clause expresses that the return value of \searchn actually points to an index where \inl{v} can be found \inl{p} or more times in \inl{a}. The \inl{first} ensures clause expresses that the minimal index with this property is returned.

\item
  The behavior \inl{no_match} covers the case that there is no matching subsequence in sequence \inl{a}. In this case, \searchn must return the length \inl{n} of the range \inl{a}.
\end{itemize}

\input{Listings/search_n.c.tex}

\subsection{Implementation of \searchn}

Although the specification of \specref{searchn} strongly resembles that of \specref{search}, their implementations differ significantly. The implementation of \implref{searchn} has a time complexity of $\mathcal{O}(n)$, whereas the implementation of \implref{search} employs an easy, but non-optimal, algorithm needing $\mathcal{O}(n \cdot p)$ time.

Our implementation maintains in the variable \inl{start} the beginning of the most recent consecutive range of values~\inl{v}.
The loop invariant \inl{not_found} states that we have not found a \inl{p}-fold repetition of \inl{v} so far; if we find one, we terminate the loop, returning \inl{start}.
%
We handle the boundary cases \inl{n < p} and \inl{p == 0} in explicit else branches. We found this easier when trying to ensure verification by automatic provers.
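For orientation, the following simplified sketch shows the loop structure just described, without the ACSL annotations; it is only a sketch consistent with the description above and may differ in minor details from the verified listing.

\begin{lstlisting}[style = acsl-block]
size_type search_n(const value_type* a, size_type n, size_type p, value_type v)
{
  if ((0u < p) && (p <= n)) {
    size_type start = 0u;
    for (size_type i = 0u; i < n; ++i) {
      if (a[i] != v) {
        start = i + 1u;           /* restart after a mismatch          */
      }
      else if (i + 1u - start == p) {
        return start;             /* found p consecutive copies of v   */
      }
    }
    return n;                     /* no p-fold repetition was found    */
  }
  else if (p == 0u) {
    return 0u;                    /* an empty repetition matches first */
  }
  else {
    return n;                     /* p > n: a match is impossible      */
  }
}
\end{lstlisting}

\clearpage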
{ "alphanum_fraction": 0.7628510864, "avg_line_length": 33.6964285714, "ext": "tex", "hexsha": "b7a08618a7fc1c2e744d10ce3bf116f6cc26880e", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_path": "Informal/nonmutating/search_n.tex", "max_issues_count": 22, "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_path": "Informal/nonmutating/search_n.tex", "max_line_length": 105, "max_stars_count": 90, "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_path": "Informal/nonmutating/search_n.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "num_tokens": 1087, "size": 3774 }
\section{Acknowledgements} \mdseries The BrainGrid project has been designed, coded and refactored by multiple individuals over the years. The project has only moved forward thanks to these individuals, who have participated in this team as a labor of love. Those involved include, but are not limited to, the following list: \begin{itemize} \item Michael Stiber \item Fumitaka Kawasaki \item Paul Bunn \item Chris Burgess \item Derek McLean \item Hugo Ribeiro \end{itemize}
{ "alphanum_fraction": 0.7962577963, "avg_line_length": 43.7272727273, "ext": "tex", "hexsha": "2f62ae027bd5271fef2c083dc66633c983ea0638", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "39ad44dfa42d30f140a5a334208caf9a32932b2c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "shastrihm/BrainGrid", "max_forks_repo_path": "old/doc/tex/acknowledgements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "39ad44dfa42d30f140a5a334208caf9a32932b2c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "shastrihm/BrainGrid", "max_issues_repo_path": "old/doc/tex/acknowledgements.tex", "max_line_length": 298, "max_stars_count": null, "max_stars_repo_head_hexsha": "39ad44dfa42d30f140a5a334208caf9a32932b2c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "shastrihm/BrainGrid", "max_stars_repo_path": "old/doc/tex/acknowledgements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 123, "size": 481 }
\documentclass[a4paper]{article} \def\npart{IV} \def\ntitle{Geometric Aspects of p-adic Hodge Theory} \def\nlecturer{T.\ Csige} \def\nterm{Michaelmas} \def\nyear{2020} \input{header} \newcommand{\tilt}{\flat} % tilting \newcommand{\perf}{\mathrm{perf}} %\DeclareMathOperator{\perf}{perf} % perfection \renewcommand{\c}[1]{\mathbf{#1}} \newcommand{\Mod}{{\c{Mod}}} \DeclareMathOperator{\Tor}{Tor} % torsion \DeclareMathOperator{\Ext}{Ext} % extension \newcommand{\sh}[1]{\mathcal{#1}} % sheaf \DeclareMathOperator{\Spa}{Spa} \renewcommand*{\O}{\mathcal{O}} \newtheorem*{construction}{Construction} \iffalse \renewcommand*{\P}{\mathbb{P}} \newcommand{\sh}[1]{\mathcal{#1}} % sheaf \renewcommand*{\O}{\mathcal{O}} \let\Sp\Relax \DeclareMathOperator{\Sp}{Sp} % maximum spectrum \DeclareMathOperator{\Max}{Max} \DeclareMathOperator{\Spf}{Spf} \DeclareMathOperator{\Spa}{Spa} \DeclareMathOperator{\supp}{supp} % support of a valuation \fi \begin{document} \input{titlepage} \tableofcontents \section{Introduction} Course structure: \begin{enumerate} \item introduction \item Hodge-Tate decomposition for abelian varieties with good reduction \item Hodge-Tate decomposition in generale (pro-étale cohomology) \item integral aspects \item some additional topics (Hodge-Tate decomposition theorem for rigid analytic varieties) \end{enumerate} \subsection{Hodge decomposition over \(\C\)} Let \(X\) be a smooth projective variety over \(\C\). The \emph{Hodge decomposition} is a direct sum decomposition for all \(n \geq 0\) \[ H_{\mathrm{sing}}^n(X^{\mathrm{an}}, \C) = \bigoplus_{p + q = n} H^{p, q} \] where LHS is the singular cohomology of the \(\C\)-analytic manifold \(X^{\mathrm{an}}\) (the complex analytification) and on RHS \[ H^{p, q} = H^q(X^{\mathrm{an}}, \Omega_{X^{\mathrm{an}}}^p) \] with \(\Omega_{X^{\mathrm{an}}}^p\) denoting the sheaf of holomorphic \(p\)-forms. Moreover, complex conjugation acts on \[ H^n_{\mathrm{sing}}(X^{\mathrm{an}}, \C) \cong H_{\mathrm{sing}}^n(X^{\mathrm{an}}, \Q) \otimes \C \] via its action on \(\C\) and \(H^{p, q} = \overline{H^{q, p}}\). This is called a \emph{pure structure of weight \(n\)}. These are proven via identifying \(H^{p, q}\) with Dolbeault cohomology and using the (very deep) theory of harmonic forms. However, part of the theory can be understood purely algebraically. It is known that \(H^n_{\mathrm{sing}}(X^{\mathrm{an}}, \C)\) gives the cohomology of the constant sheaf \(\C\) on \(X^{\mathrm{an}}\). On the other hand, consider the de Rham complex \[ \Omega_{X^{\mathrm{an}}}^\bullet = \O_{X^{\mathrm{an}}} \xrightarrow{\d} \Omega_{X^{\mathrm{an}}}^1 \xrightarrow{\d} \Omega_{X^{\mathrm{an}}}^2 \to \cdots \] Here \(\d\) is the usual derivation and the higher \(\d\)'s are given by \[ \d (\omega_1 \wedge \omega_2) = \d \omega_1 \wedge\omega_2 + (-1)^p \omega_1 \wedge\d \omega_2 \] for \(\omega_1 \in \Omega_{X^{\mathrm{an}}}^p, \omega_2 \in \Omega_{X^{\mathrm{an}}}^q\). Taking hypercohomology \[ H^n_{\mathrm{dR}}(X^{\mathrm{an}}) := \H(X^{\mathrm{an}}, \Omega_{X^{\mathrm{an}}}^\bullet) \] we get the so-called de Rham cohomology group. Embedding the constant sheaf \(\C\) into \(\O_{X^{\mathrm{an}}}\) induces a map \(\C \to \Omega_{X^{\mathrm{an}}}^\bullet\) of complexes of sheaves. The (holomorphic) Poincaré lemma states that this map is a quasi-isomorphism of sheaves. 
More precisely, one can cover \(X^{\mathrm{an}}\) by open balls and for any open ball \(U \subseteq X^{\mathrm{an}}\), the complex \[ 0 \to \C \to \O_{X^{\mathrm{an}}}(U) \xrightarrow{\d} \Omega^1_{X^{\mathrm{an}}} \to \cdots \] is exact: any closed differential form can be integrated on an open ball. Thus \[ H^n_{\mathrm{sing}}(X^{\mathrm{an}}, \C) \cong H^n_{\mathrm{dR}}(X^{\mathrm{an}}). \] This is the comparison theorem between singular and de Rham cohomology. Now the complex \(\Omega_{X^{\mathrm{an}}}^\bullet\) has a decreasing filtration of subcomplexes \[ \Omega_{X^{\mathrm{an}}}^{\geq p} := 0 \to \cdots \to 0 \to \Omega^p_{X^{\mathrm{an}}} \xrightarrow{\d} \Omega_{X^{\mathrm{an}}}^{p + 1} \xrightarrow{\d} \cdots \] We have that \(\operatorname{gr}^p \Omega^\bullet_{X^{\mathrm{an}}} \cong \Omega^p_{X^{\mathrm{an}}}\). It is well-known that there is a convergent spectral sequence associated to \(\Omega_{X^{\mathrm{an}}}^\bullet\) with the filtration above, called the \emph{Hodge to de Rham spectral sequence} \[ E_1^{pq} = H^q(X^{\mathrm{an}}, \Omega^p_{X^{\mathrm{an}}}) \Rightarrow H^{p + q}_{\mathrm{dR}}(X^{\mathrm{an}}). \] The filtration on \(H^n_{\mathrm{dR}}(X^{\mathrm{an}})\) given by the spectral sequence is called the \emph{Hodge filtration}. Fact: the Hodge to de Rham spectral sequence degenerates at \(E_1\). This together with the comparison theorem gives the Hodge decomposition \[ H^n_{\mathrm{sing}}(X^{\mathrm{an}}, \C) = \bigoplus_{p + q = n} H^q(X^{\mathrm{an}}, \Omega_{X^{\mathrm{an}}}^p) \] for all \(n\). \subsection{Algebraisation} On a complex variety \(X\) we may consider the algebraic de Rham complex \[ \Omega_X^\bullet := \O_X \xrightarrow{\d} \Omega_X^1 \xrightarrow{\d} \cdots \] For \(X\) smooth these are locally free sheaves. The same way as above, we get the algebraic Hodge to de Rham spectral sequence \[ E_1^{pq} = H^q(X, \Omega_X^p) \Rightarrow H^{p + q}_{\mathrm{dR}}(X). \] Here we use the Zariski topology. There are two natural maps \begin{align*} H^q(X, \Omega_X^p) &\to H^q(X^{\mathrm{an}}, \Omega_{X^{\mathrm{an}}}^p) \\ H^{p +q}_{\mathrm{dR}}(X) &\to H^{p +q}_{\mathrm{dR}}(X^{\mathrm{an}}) \end{align*} all compatible with the maps in the above spectral sequences. By GAGA the first one is an isomorphism, and by a theorem of Grothendieck the second is also an isomorphism. Hence degeneration of the analytic Hodge to de Rham is equivalent to the degeneration of the algebraic counterpart. However, there is no algebraic Poincaré lemma, the algebraic de Rham complex is not a resolution of \(\C\) and anyway the sheaf cohomology of \(\C\) is trivial in the Zariski topology. \subsection{The case of a \(p\)-adic base field} Let \(p\) be a prime. Recall \(\C_p\) is the completion of the algebraic closure \(\overline Q_p\) of \(\Q_p\). The Galois group \(\gal(\overline \Q_p/\Q_p)\) acts on \(\C_p\) by continuity. Let \(K\) be a finite extension of \(\Q_p\). \printindex \end{document}
%!TEX root = ../main.tex
\section{Matrices}
\paragraph{Matrix}
A \textbf{matrix} (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. The individual items in a matrix are called its elements or entries.
\paragraph{Size, Entries}
A matrix's size is described by the number of rows by the number of columns. If a matrix is given a name, an entry may be referred to by a subscript of row and column on that letter. For example, on matrix \textbf{[A]}, one might refer to the entry in the second row and third column as \textbf{A}$_{2,3}$.
\begin{example}
\exProblem Given that $A$ is the $1 \times 4$ matrix $(1\ 2\ 3\ 4)$, what is $A_{1,4}$?
\exSolution 4
\end{example}
\subsection{Addition and Scalars}
Matrices may be added if and only if they are exactly the same size. A matrix may be multiplied by a number (called a \gls{scalar}); the scalar is simply multiplied against every entry in the matrix. Two matrices are added by adding the corresponding entries, i.e. $A_{i,j}+B_{i,j}$ produces the new entry at $i,j$.
\begin{example}
\exProblem If $A$ is $(1\ 2\ 3\ 4)$, what is $2A$?
\exSolution $(2\ 4\ 6\ 8)$
\end{example}
\begin{example}
\exProblem If $A$ is $(1\ 2\ 3\ 4)$ and $B$ is $(0\ 1\ 0\ 2)$, what is $A+B$?
\exSolution $(1\ 3\ 3\ 6)$
\end{example}
\subsection{Matrix Multiplication}
The product of two matrices is formed by combining rows of the first with columns of the second. For example, to compute the top left entry in the product of two matrices, one multiplies each entry in the first row of the first matrix against the corresponding entry in the first column of the second matrix and then sums the resulting products. (See below for a helpful visual.) Naturally, this means that the number of columns of the first matrix must equal the number of rows of the second.
\subsection{Square Matrices}
identity, inverses, determinants
\subsection{Gaussian Elimination}
augmented matrices (see the worked example below)
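As a first illustration of what augmented matrices are for, consider the small system below; the numbers are chosen purely for illustration.
\begin{example}
\exProblem Solve the system $x+2y=5$, $3x+4y=11$ using an augmented matrix.
\exSolution
\[
\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 11 \end{array}\right]
\rightarrow
\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & -2 & -4 \end{array}\right]
\rightarrow
\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & 1 & 2 \end{array}\right]
\rightarrow
\left[\begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 1 & 2 \end{array}\right]
\]
Subtract $3$ times row 1 from row 2, scale row 2 by $-\frac{1}{2}$, then subtract $2$ times row 2 from row 1. The solution is $x=1$, $y=2$.
\end{example}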
\chapter{Performance of Compiled Code} The performance of compiled code is affected by the various optimizations performed by the compiler. This chapter demonstrates the effects of these optimizations on the execution speed of Icon expressions. It also presents speed improvements and memory usage for compiled code versus interpreted code for a set of complete Icon programs. All timing results used in this chapter were obtained on a Sun 4/490 and are the average of the results from three runs. \section{Expression Optimizations} The effects of four categories of optimization are demonstrated. These are assignment optimizations, invocation optimizations, control flow optimizations, and optimizations using information from type inferencing. Expression timings for the first three categories were made using techniques described in the August 1990 issue of \textit{The Icon Analyst} [.ianl1.]. The following program skeleton is used to construct the programs to perform these timings. \goodbreak \begin{iconcode} \>procedure main()\\ \>\>local x, start, overhead, iters\\ \>\>iters := 1000000\\ \>\>start := \&time\\ \>\>every 1 to iters do \{\\ \>\>\>\}\\ \>\>overhead := \&time - start\\ \>\>x := 0\\ \>\>start := \&time\\ \>\>every 1 to iters do \{\\ \>\>\>\textit{expression to be timed (may use x)}\\ \>\>\>\}\\ \>\>write(\&time - start - overhead)\\ \>end\\ \end{iconcode} The timings are performed both with and without the desired optimizations, and the results are compared by computing the ratio of the time without optimization to the time with optimization. The assignment optimizations are described in Chapter 22. The effect of the assignment optimizations on the expression \iconline{ \>x := \&null } \noindent is measured using the program outlined above. The analysis that produces the assignment optimization is disabled by enabling debugging features in the generated code. The only other effect this has on the assignment expression is to insert code to update the line number of the expression being executed. In this test, the line number code is removed before the C code is compiled, insuring that the assignment optimization is the only thing measured. The timing results for this test produce \tablefirsthead{} \tablehead{} \tabletail{} \tablelasttail{} \begin{noIndex} \begin{center} \begin{tabular}{@{}r@{\hspace{0.6in}}r@{\hspace{0.2in}}r@{}} \multicolumn{3}{c}{Assignment Test}\\ \multicolumn{3}{c}{Time in Milliseconds Averaged over Three Runs}\\ Unoptimized & Optimized & Ratio\\ 1122 & 478 & 2.3 \\ \end{tabular} \end{center} \end{noIndex} The tests were performed with type inferencing enabled. Therefore, even the {\textquotedbl}unoptimized{\textquotedbl} version of the assignment has the standard operation optimizations applied to it. This test demonstrates the importance of performing the special-case assignment optimizations. The next category of optimization measured is invocation optimization. This results in the direct invocation of the C functions implementing operations, or in some cases results in the operations being generated in line. The execution time for the expression \iconline{ \>\>\ tab(0) } \noindent is measured with and without invocation optimizations. As with the assignment optimizations, these optimizations are disabled by enabling debugging features. Once again the line number code is removed before the C code is compiled. These optimizations interact with the optimizations that use information from type inferencing. The measurements are made with type inferencing disabled. 
Therefore, no type checking simplifications are performed. Without the invocation optimizations, the generated code consists of an indirect invocation through the global variable \texttt{tab}. With the invocation optimizations, the generated code consists of type checking/conversion code for the argument to \texttt{tab} and a call to the function implementing the body statement of \texttt{tab}. The timing results for \texttt{tab(0)} produce \begin{center} \begin{tabular}{@{}r@{\hspace{0.6in}}r@{\hspace{0.2in}}r@{}} \multicolumn{3}{c}{Invocation Test}\\ \multicolumn{3}{c}{Time in Milliseconds Averaged over Three Runs}\\ Unoptimized & Optimized & Ratio\\ 8394 & 4321 & 1.9\\ \end{tabular} \end{center} The third category of optimization is control flow optimization. As explained in Chapter 21, these optimizations only perform improvements that a C compiler will not perform when the code contains trivial call chains. One situation that produces trivial call chains is nested alternation. The execution time for the expression \iconline{ \>\>every x := ixor(x, 1 | 2 | 3 | 4 | 5) } \noindent is measured with and without control flow optimizations. The timing results for this every loop produce \begin{center} \begin{tabular}{@{}r@{\hspace{0.6in}}r@{\hspace{0.2in}}r@{}} \multicolumn{3}{c}{Control Flow Test}\\ \multicolumn{3}{c}{Time in Milliseconds Averaged over Three Runs}\\ Unoptimized & Optimized & Ratio\\ 6384 & 4184 & 1.5 \\ \end{tabular} \end{center} The final category of optimization results from type inferencing. The speed improvements result from generating operations in line, eliminating type checking, and generating success continuations in line. Use of the to operation is a good example of where these optimizations can be applied. This is demonstrated by measuring the speed of an every loop using the to operation. The program that performs the measurement is \goodbreak \begin{iconcode} \>procedure main()\\ \>\>local x, start\\ \>\>start := \&time\\ \>\>every x := 1 to 5000000\\ \>\>write(\&time - start)\\ \>end\\ \end{iconcode} The timing results for this program produce \begin{center} \begin{tabular}{@{}r@{\hspace{0.6in}}r@{\hspace{0.2in}}r@{}} \multicolumn{3}{c}{Type Inference Test}\\ \multicolumn{3}{c}{Time in Milliseconds Averaged over Three Runs}\\ Unoptimized & Optimized & Ratio\\ 9233 & 2721 & 3.3 \\ \end{tabular} \end{center} Another approach to determining the effectiveness of type inferencing is to measure how small a set it deduces for the possible types of operands to operations. This indicates whether future work should concentrate on improving type inferencing itself or simply concentrate on using type information more effectively in code generation. A simple measure is used here: the percentage of operands for which type inferencing deduces a unique Icon type. Measurements are made for operands of all operators, except optimized assignment, and for operands of all built-in functions appearing in optimized invocations. For the most part, these are the operations where the code generator can use type information. Measurements were made for a set of 14 programs (described below). 
Unique operand types within each program range from 63 percent to 100 percent of all operands, with an overall figure for the tests suite of 80 percent (this is a straight unweighted figure obtained by considering all operands in the test suite without regard to what program they belong to); even a perfect type inferencing system will not deduce unique types for 100 percent of all operands, because not all operands have unique types. This suggests that an improved type inferencing system may benefit some programs, but will have only a small overall impact. Future work should give priority to making better use of the type information rather than to increasing the accuracy of type inferencing. \section{Program Execution Speed} It has been demonstrated that the compiler optimizations are effective at improving the kinds of expressions they are directed toward. The question remains: How fast is compiled code (with and without optimizations) for complete programs as compared to interpreted code for the same programs? For some expressions, optimizations may interact to create significant cumulative speed improvements. For example, the fully optimized code for the every loop in the previous example is 30 times faster than the interpreted code; the improvement of 3.3 from type inferencing contributes one factor in the total improvement. Other expressions may spend so much time in the run-time system (which is unaffected by compiler optimizations) that no measurable improvements are produced. A set of 14 programs was selected mostly from contributions to the Icon program library [.tr90-7.] for testing the performance of the compiler. These programs were selected to represent a variety of applications and programming styles (an additional requirement is that they run long enough to obtain good timing results). The following table shows the speed improvements for the compiled code as compared to interpreted code. The compiler and interpreter used for the measurements both implement Version 8 of Icon. The execution time used to compute the speed improvements is the cpu time measured using the Bourne shell's time command. The first column in the table shows the execution time under the interpreter. The second column is for compiled code with debugging features enabled and optimizations disabled. This code is still better than what would be obtained by just removing the interpreter loop, because intelligent code generation is performed, especially for bounded expressions, and keywords are generated in line. The third column is for code with debugging features disabled and full optimization enabled. 
\eject \begin{center} \tablefirsthead{} \tablehead{} \tabletail{} \tablelasttail{} \begin{xtabular}{@{}r@{\hspace{0.4in}}r@{\hspace{0.4in}}r@{\hspace{0.4in}}r@{}} \multicolumn{4}{c}{Execution Time in Seconds Averaged over Three Runs}\\ & & Compiler & Compiler\\ Program & Interpreter & Unoptimised & Optimized\\ cksol & 49.9 & 33.5 (1.48) & 22.5 (2.21) \\ concord & 31.1 & 18.5 (1.68) & 9.8 (3.17) \\ iidecode & 60.3 & 34.0 (1.77) & 12.9 (4.67) \\ iiencode & 50.4 & 34.4 (1.46) & 10.5 (4.80) \\ impress & 44.6 & 24.8 (1.79) & 14.0 (3.18) \\ list & 43.1 & 24.5 (1.75) & 13.6 (3.16) \\ memfiltr & 60.8 & 34.3 (1.77) & 15.3 (3.97) \\ mf & 30.1 & 18.7 (1.60) & 14.7 (2.04) \\ pssplit & 64.0 & 39.0 (1.64) & 26.6 (2.40) \\ roffcmds & 32.9 & 18.1 (1.81) & 12.0 (2.74) \\ sentence & 34.3 & 23.9 (1.43) & 16.2 (2.11) \\ spandex & 36.8 & 23.3 (1.57) & 14.7 (2.50) \\ textcnt & 36.2 & 18.4 (1.96) & 9.9 (3.65) \\ wrapper & 27.3 & 15.9 (1.71) & 9.4 (2.90) \\ \end{xtabular} \end{center} The numbers in parentheses are speed-up factors obtained by dividing the interpreter execution time by the execution time of compiled code. \section{Code Size} One advantage the compiler has over the interpreter is that, unless a program is compiled with full string invocation enabled, the executable code for a program need not include the full run-time system. For systems with limited memory, this can be a significant advantage. The sizes of executable code presented here are obtained from file sizes. All executable files have had debugging information stripped from them. The size of the executable code in the interpreter system is taken to be the size of the interpreter (278,528 bytes) plus the size of the icode for the program being measured (under Unix systems, the size of the executable header, 12,800 bytes for the Sun 4, is subtracted from the size of the icode file, because it is not present during interpretation). Measurements for the 14 test programs are: \begin{center} \tablefirsthead{} \tablehead{} \tabletail{} \tablelasttail{} \begin{xtabular}{@{}r@{\hspace{0.4in}}r@{\hspace{0.4in}}r@{\hspace{0.4in}}r@{}} \multicolumn{4}{c}{Program Sizes in Bytes}\\ Program & Interpreter & Compiler & Ratio\\ cksol & 282,153 & 81,920 & 0.29 \\ concord & 284,416 & 90,112 & 0.31 \\ iidecode & 285,525 & 98,304 & 0.34 \\ iiencode & 283,567 & 81,920 & 0.28 \\ impress & 295,656 & 114,688 & 0.38 \\ list & 287,376 & 98,304 & 0.34 \\ memfiltr & 296,082 & 114,688 & 0.38 \\ mf & 282,739 & 81,920 & 0.28 \\ pssplit & 279,709 & 73,728 & 0.26 \\ roffcmds & 280,797 & 81,920 & 0.29 \\ sentence & 283,249 & 81,920 & 0.28 \\ spandex & 281,843 & 81,920 & 0.29 \\ textcnt & 280,397 & 73,728 & 0.26 \\ wrapper & 279,780 & 73,728 & 0.26 \\ \end{xtabular} \end{center} Other factors create differences in memory usage between the interpreter and compiled code. For example, the interpreter allocates a stack for expression evaluation. On the Sun 4, this stack is 40,000 bytes. The compiler, on the other hand, allocates work areas on a per-procedure basis and only allocates the maximum needed at any execution point within the procedure.
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{ntohl}
\label{subr:ntohl}
\subsection*{ntohl: convert long (32-bit) value from network to host byte order}
\subsubsection*{Calling convention}
\begin{description}
\item[\registerop{rd}] value in host byte order
\end{description}
\subsubsection*{Description}
The \subroutine{ntohl} routine takes a 32-bit value in network byte order as its only argument and returns that value converted to host byte order.
\subsubsection*{Pseudocode}
\begin{verbatim}
\end{verbatim}
\subsubsection*{Constraints}
\subsubsection*{Failure modes}
This subroutine has no run-time failure modes beyond its constraints.
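\subsubsection*{Example (informative)}

The following fragment is an informative sketch only and is not part of this specification; it illustrates the intended conversion in Java, where \texttt{ByteBuffer} reads multi-byte values in network (big-endian) order by default.

\begin{verbatim}
import java.nio.ByteBuffer;

public class NtohlExample {
    // Interpret four bytes received from the network as a host-order int.
    static int ntohl(byte[] networkBytes) {
        return ByteBuffer.wrap(networkBytes).getInt(); // big-endian read
    }

    public static void main(String[] args) {
        byte[] wire = {0x0A, 0x0B, 0x0C, 0x0D};  // bytes as received
        System.out.printf("host value: 0x%08X%n", ntohl(wire)); // 0x0A0B0C0D
    }
}
\end{verbatim}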
%!TEX root = ../main.tex \chapter{Organic Yellow}\margintoc \section{Jelly Cycle} \lipsum[1-6] \section{Double Zebra} \lipsum[1-3] \section{West of the Equator} \lipsum[1-4]
\chapter{DT Fourier Transform}
Recall the complex exponential $z^{n}$ for $z\in\mathbb{C}$ is an Eigenfunction of DT LTI systems. If we can decompose an input into a (possibly infinite) sum of such signals, we can easily determine the output using the superposition principle. In this section we consider the decomposition when the input is aperiodic, called the DT \emph{Fourier Transform} (DTFT). In contrast to the DT Fourier series, in this case the complex exponent of the Eigenfunction becomes $z = e^{j\omega}$, a continuous variable, and the decomposition is an uncountably infinite sum (integral). This gives the input-output relationship for a stable DT LTI system as
\[
x[n] = \frac{1}{2\pi}\int\limits_{2\pi} X\left(e^{j\omega}\right) \, e^{j \omega n}\; d\omega \;\longrightarrow\; y[n] = \frac{1}{2\pi}\int\limits_{2\pi} H\left(e^{j\omega}\right) X\left(e^{j\omega}\right) \, e^{j \omega n}\; d\omega
\]
where $H\left(e^{j \omega}\right)$ are the Eigenvalues, again called the \emph{frequency response}. We now turn to determining under what circumstances the decomposition exists and how to find the function $X\left(e^{j\omega}\right)$.

\textbf{Note:} The notation $X\left(e^{j\omega}\right)$ can be confusing. It just emphasizes that $z \rightarrow e^{j\omega}$. The expressions are functions of the independent variable $\omega$.

\section{Analysis and Synthesis Equations}
Consider the Fourier series of $\tilde{x}[n]$, the periodic extension of a finite-length DT signal $x[n]$, e.g.
\begin{center}
\includegraphics[scale=0.8]{graphics/dt-derivation.pdf}
\end{center}
where $x[n]$ is zero outside the range $[-N_1,N_2]$. Since $x[n] = \tilde{x}[n]$ over the interval $-N_1$ to $N_2$
\[
a_k = \frac{1}{N}\sum\limits_{n = -N_1}^{N_2} \tilde{x}[n] e^{-j\frac{2\pi}{N}kn} = \frac{1}{N}\sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\frac{2\pi}{N}kn}
\]
Define the function $X\left(e^{j\omega}\right) = \sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\omega n}$, then
\[
a_k = \frac{1}{N} X\left(e^{jk\omega_0}\right)
\]
are samples of $X\left(e^{j\omega}\right)$ at locations that are multiples of $\omega_0 = \frac{2\pi}{N}$. Substituting back into the synthesis equation
\[
\tilde{x}[n] = \sum\limits_{k = -N_1}^{N_2} a_k e^{j\frac{2\pi}{N}kn} = \sum\limits_{k = -N_1}^{N_2} \frac{1}{N} X\left(e^{jk\omega_0}\right) e^{jk\omega_0 n}
\]
Now note that $N = \frac{2\pi}{\omega_0}$ so that
\[
\tilde{x}[n] = \frac{1}{2\pi} \sum\limits_{k = -N_1}^{N_2} X\left(e^{jk\omega_0}\right) e^{jk\omega_0 n} \; \omega_0
\]
Now let $N \rightarrow \infty$.
\begin{align*}
\lim_{N\rightarrow \infty} \tilde{x}[n] &= \lim_{N\rightarrow \infty} \frac{1}{2\pi} \sum\limits_{k = -N_1}^{N_2} X\left(e^{jk\omega_0}\right) e^{jk\omega_0 n} \; \omega_0\\
x[n] &= \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\omega}\right) e^{j\omega n} \; d\omega
\end{align*}
This is shown graphically in the figure below. As $N$ approaches infinity the sampling of the unit circle becomes arbitrarily fine, and the summation approaches an integral.
\begin{center}
\includegraphics[scale=0.9]{graphics/dt-fourier-limit.pdf}
\end{center}
This gives the \emph{DT Fourier Transform Pair}. The Analysis Equation or Forward Transform is:
\[
X\left(e^{j\omega}\right) = \sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\omega n}
\]
Note $X\left(e^{j\omega}\right)$ must be a periodic function with period $2\pi$. The Synthesis Equation or Inverse Transform is:
\[
x[n] = \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\omega}\right) e^{j\omega n} \; d\omega
\]
where the integral is over any $2\pi$ period of $X$.
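As a quick check that the pair is consistent, substituting the analysis equation into the synthesis equation and interchanging the sum and integral (justified when $x[n]$ is absolutely summable) recovers the signal:
\[
\frac{1}{2\pi} \int_{2\pi} \left( \sum\limits_{m = -\infty}^{\infty} x[m] e^{-j\omega m} \right) e^{j\omega n} \; d\omega = \sum\limits_{m = -\infty}^{\infty} x[m] \, \frac{1}{2\pi} \int_{2\pi} e^{j\omega (n-m)} \; d\omega = x[n]
\]
since $\frac{1}{2\pi} \int_{2\pi} e^{j\omega (n-m)} \, d\omega$ equals $1$ when $m = n$ and $0$ for every other integer $m$.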
\begin{example} Let $x[n] = \delta[n]$ \begin{align*} X\left(e^{j\omega}\right) &= \sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\omega n}\\ &= \sum\limits_{n = -\infty}^{\infty} \delta[n] e^{-j\omega n}\\ &= e^{-j\omega (0)}\\ &= 1 \end{align*} $\blacksquare$ \end{example} \begin{example} Let $x[n] = \left( \gamma \right)^n\; u[n]$ \begin{align*} X\left(e^{j\omega}\right) &= \sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\omega n}\\ &= \sum\limits_{n = 0}^{\infty} \left( \gamma \right)^n \; e^{-j\omega n}\\ &= \sum\limits_{n = 0}^{\infty} \left( \gamma e^{-j\omega} \right)^n \end{align*} Using the geometric series $ \sum\limits_{n = 0}^{\infty} z^n = \frac{1}{1-z}$ for $|z| < 1$ gives: \[ X\left(e^{j\omega}\right) = \sum\limits_{n = 0}^{\infty} \left( \gamma e^{-j\omega} \right)^n = \frac{1}{1-\gamma e^{-j\omega}} = \frac{e^{j\omega}}{e^{j\omega} - \gamma} \] If $\mid\gamma e^{-j\omega}\mid < 1$ or equivalently $\mid \gamma \mid < 1$. \[ \left( \gamma \right)^n\; u[n] \; \stackrel{\mathcal{F}}{\longrightarrow} \; \frac{1}{1-\gamma e^{-j\omega}} \] Below is a plot of the original signal and the magnitude and phase spectrum when $\gamma = \tfrac{1}{2}$. \begin{center} \includegraphics[scale=0.5]{graphics/dtft-example1-x.pdf} \end{center} \begin{center} \includegraphics[scale=0.5]{graphics/dtft-example1-Xmag.pdf} \includegraphics[scale=0.5]{graphics/dtft-example1-Xarg.pdf} \end{center} $\blacksquare$ \end{example} \begin{example} Let \[ X\left(e^{j\omega}\right) = \left\{ \begin{array}{lc} 1 & |\omega -2\pi k| < \omega_c\\ 0 & \text{else} \end{array} \right. \; \text{for}\; k\in\mathbb{Z} \;\text{and}\; \omega_c < \pi \] \begin{align*} x[n] &= \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\omega}\right) e^{j\omega n} \; d\omega\\ &= \frac{1}{2\pi} \int\limits_{-\omega_c}^{\omega_c} e^{j\omega n} \; d\omega\\ &= \frac{1}{2\pi} \frac{1}{jn} e^{j\omega n} \Bigg|_{-\omega_c}^{\omega_c}\\ &= \frac{1}{\pi n} \left( \frac{1}{2j} e^{j\omega_c n} - \frac{1}{2j} e^{-j\omega_c n} \right)\\ &= \frac{1}{\pi n} \sin(\omega_c n) \end{align*} $\blacksquare$ \end{example} \begin{example} Let \[ X\left(e^{j\omega}\right) = \sum\limits_{k = -\infty}^{\infty} \delta(\omega-\omega_0 -2\pi k) \] for $-\pi < \omega_0 < \pi$ \begin{center} \includegraphics[scale=1]{graphics/dtft-periodic-ex.pdf} \end{center} \begin{align*} x[n] &= \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\omega}\right) e^{j\omega n} \; d\omega\\ &= \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \delta(\omega-\omega_0)e^{j\omega n} \; d\omega\\ &= \frac{1}{2\pi} e^{j\omega_0 n} \end{align*} $\blacksquare$ \end{example} \section{Existence of the DT Fourier Transform} The example of the exponential $x[n] = \left(\gamma\right)^n\,u[n]$ above showed that for the DT Fourier transform to exist, the Fourier (analysis) sum must exist. Similar to the CT Fourier transform, a mild conditions is a sufficient prerequisite for the Fourier transform of a signal $x[n]$ to exist: it must be absolutely summable \[ \sum\limits_{n = -\infty}^{\infty} |x[n]| < \infty \] This conditions is not necessary however, and we can extend the Fourier transform to a broader class of signals, if we allow delta functions in the transform, as in the sinusoidal examples above. 
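To see the sufficient condition in action, take the exponential $x[n] = \left(\gamma\right)^n\, u[n]$ from the earlier example:
\[
\sum\limits_{n = -\infty}^{\infty} |x[n]| = \sum\limits_{n = 0}^{\infty} |\gamma|^n = \frac{1}{1-|\gamma|} < \infty \quad \text{when } |\gamma| < 1
\]
which is exactly the condition under which the geometric series for $X\left(e^{j\omega}\right)$ converged, so that transform exists.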
\section{Properties of the DT Fourier Transform} There are several useful properties of the DT Fourier Transform that, when combined with a table of transforms (see Table 5.2, page 392 of OW), allow us to take the Fourier transform of wide array of signals, and one, the convolution property, that allows us to determine the output of a system in the frequency domain easily. We state these here without proof in rough order of usefulness. See the course text for detailed derivations. We use the following notation \[ \mathcal{F}\left\{ x[n] \right\} = X\left(e^{j\omega}\right) = \sum\limits_{n = -\infty}^{\infty} x[n] e^{-j\omega n} \] \[ \mathcal{F}^{-1}\left\{ X\left(e^{j\omega}\right) \right\} = x[n] = \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\omega}\right) e^{j\omega n} \; d\omega \] \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] Important: $X\left(e^{j\omega}\right)$ is periodic in $2\pi$ such that \[ X\left(e^{j(\omega + 2\pi k)}\right) = X\left(e^{j\omega}\right) \;\text{for}\; k \in \mathbb{Z} \] \begin{itemize} \item Linearity Property. Let $x_1[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X_1\left(e^{j\omega}\right)$ and $x_2[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X_2\left(e^{j\omega}\right)$ then for $a,b\in\mathbb{C}$ \[ a x_1[n] + b x_2[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; a X_1\left(e^{j\omega}\right) + b X_2\left(e^{j\omega}\right) \] Example: \begin{align*} \mathcal{F}\left\{ 2\left( \frac{1}{2}\right)^nu[n] -5 \left( -\frac{1}{4}\right)^nu[n] \right\} &= \frac{2}{1-\frac{1}{2}e^{-j\omega}} - \frac{5}{1+\frac{1}{4}e^{-j\omega}} \end{align*} \item Time-shift Property. Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ x[n-n_0] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; e^{-j\omega n_0} X\left(e^{j\omega}\right) \] Example: \[ \mathcal{F}\left\{ \delta[n-5] \right\} = e^{-j5\omega} \] \item Frequency Shift Property. Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ e^{j\omega_0 n} x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j(\omega-\omega_0)}\right) \] Example: \[ \mathcal{F}^{-1}\left\{ \frac{1}{1-\frac{1}{2}e^{-j\omega}e^{j\frac{\pi}{20}}} \right\} = e^{j\frac{\pi}{20} n} \left( \frac{1}{2}\right)^n u[n] \] \item Conjugation Property. Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ x^*[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X*\left(e^{-j\omega}\right) \] Thus, if $x[n]$ is real $X\left(e^{j\omega}\right)$ has conjugate symmetry \[ X\left(e^{-j\omega}\right) = X^*\left(e^{j\omega}\right) \] and the magnitude spectrum is an even function and the phase spectrum is an odd function. \item Differencing and Accumulation Property. Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ x[n] - x[n-1] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) - e^{-j\omega} X\left(e^{j\omega}\right) = \left(1-e^{-j\omega}\right)X\left(e^{j\omega}\right) \] and \[ \sum\limits_{m = -\infty}^{n} x[m] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; \frac{1}{1-e^{-j\omega}}X\left(e^{j\omega}\right) + \pi X\left(e^{j0}\right)\sum\limits_{k = -\infty}^{\infty} \delta(\omega - 2\pi k) \] \item Time Expansion Property. 
Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ x_{(k)}[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{jk\omega}\right) \] where \[ x_{(k)}[n] = \left\{ \begin{array}{lc} x[n/k] & \text{if}\; n = \; \text{multiple of}\; k\\ 0 & \text{if}\; n \neq \; \text{multiple of}\; k\\ \end{array} \right. \] \begin{center} \includegraphics[scale=0.8]{graphics/dt=transform-property-expand.pdf} \end{center} \item Frequency Differentiation Property Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ n\, x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; j \frac{d}{d\omega} X\left(e^{j\omega}\right) \] Example: \begin{align*} \mathcal{F}\left\{ n\left( \frac{1}{8}\right)^n u[n] \right\} &= j \frac{d}{d\omega} \left\{\frac{1}{1-\frac{1}{8}e^{-j\omega}} \right\}\\ &= j \frac{-\left(-\frac{1}{8} (-j)e^{-j\omega}\right)}{\left( 1-\frac{1}{8}e^{-j\omega} \right)^2}\\ &= \frac{\frac{1}{8}e^{-j\omega}}{\left( 1-\frac{1}{8}e^{-j\omega} \right)^2} \end{align*} \item Parseval's Relation. Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] then \[ \sum\limits_{n = -\infty}^{\infty} |x[n]|^2 \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; \frac{1}{2\pi} \int_{2\pi} \left| X\left(e^{j\omega}\right) \right|^2 \; d\omega \] The energy is also the integral over one period of the DTFT magnitude squared. \item Convolution Property. Recall for a DT LTI system with impulse response $h[n]$ the output is \[ y[n] = h[n]*x[n] \] In the frequency domain this is equivalent to \[ Y\left(e^{j\omega}\right) = H\left(e^{j\omega}\right) \, X\left(e^{j\omega}\right) \] As in CT systems, convolution in the discrete-time domain is equivalent to multiplication in the frequency domain. Example: suppose a DT system has impulse response \[ h[n] = \left( \gamma_1^n + \gamma_2^n \right)\, u[n] \] and the input is $x[n] = n\, \gamma_3^n\, u[n]$ where $|\gamma_1| < 1$, $|\gamma_2| < 1$, $|\gamma_3| < 1$. The output in the frequency domain is \begin{align*} Y\left(e^{j\omega}\right) &= H\left(e^{j\omega}\right) \, X\left(e^{j\omega}\right)\\ &= \left[ \frac{1}{1-\gamma_1 e^{-j\omega}} + \frac{1}{1-\gamma_2 e^{-j\omega}} \right] \frac{\gamma_3 e^{-j\omega}}{\left( 1-\gamma_3 e^{-j\omega} \right)^2}\\ &= \frac{\gamma_3 e^{-j\omega}}{\left(1-\gamma_1 e^{-j\omega}\right)\left( 1-\gamma_3 e^{-j\omega} \right)^2} + \frac{\gamma_3 e^{-j\omega}}{\left(1-\gamma_2 e^{-j\omega}\right)\left( 1-\gamma_3 e^{-j\omega} \right)^2} \end{align*} \item Multiplication (modulation) Property. 
Let \[ x[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; X\left(e^{j\omega}\right) \] and \[ y[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; Y\left(e^{j\omega}\right) \] then \[ x[n]\,y[n] \; \stackrel{\mathcal{F}}{\longleftrightarrow} \; \frac{1}{2\pi} \int_{2\pi} X\left(e^{j\theta}\right)\, Y\left(e^{j(\omega-\theta)}\right)\; d\theta \] \end{itemize} \section{DT Fourier Transform of a Periodic Signal} The DTFS allows us to write any periodic function with period $N$ as \[ x[n] = \sum\limits_{k = N_0}^{N_0 + N -1} a_k e^{j\frac{2\pi}{N}kn} \] taking the DT Fourier Transform \[ X\left(e^{j\omega}\right) = \sum\limits_{k = N_0}^{N_0 + N -1} a_k \mathcal{F}\left\{e^{j\frac{2\pi}{N}kn}\right\} \] Using the previously derived transform shows, similar to CT, the DT Fourier Transform of a periodic signal is \[ X\left(e^{j\omega}\right) = \sum\limits_{k = -\infty}^{\infty} 2\pi a_k \delta\left(\omega - \frac{2\pi k}{N}\right) \] Example \[ x[n] = \cos\left(\frac{2\pi}{10} n\right) = \frac{1}{2}e^{j\frac{2\pi}{10} n} + \frac{1}{2}e^{-j\frac{2\pi}{10} n} \] Using the previous transform \[ X\left(e^{j\omega}\right) = \sum\limits_{k = -\infty}^{\infty} \pi \delta\left(\omega - \frac{2\pi}{10} -2\pi k\right) + \pi \delta\left(\omega + \frac{2\pi}{10} -2\pi k\right) \] Which looks like \begin{center} \includegraphics[scale=1]{graphics/dtft-periodic-ex2.pdf} \end{center}
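As one more worked example in the same spirit, the transform of a sine follows by identical steps. Writing
\[
x[n] = \sin\left(\frac{2\pi}{10} n\right) = \frac{1}{2j}e^{j\frac{2\pi}{10} n} - \frac{1}{2j}e^{-j\frac{2\pi}{10} n}
\]
and applying the previous transform to each term gives
\[
X\left(e^{j\omega}\right) = \sum\limits_{k = -\infty}^{\infty} \frac{\pi}{j} \delta\left(\omega - \frac{2\pi}{10} -2\pi k\right) - \frac{\pi}{j} \delta\left(\omega + \frac{2\pi}{10} -2\pi k\right)
\]
the same pair of impulse trains as for the cosine, now with purely imaginary weights.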
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % CS637: Database-Backed Websites % Copyright 2015 Pejman Ghorbanzade <[email protected]> % Creative Commons Attribution-ShareAlike 4.0 International License % More info: https://github.com/ghorbanzade/beacon %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Question 4} \begin{enumerate}[label=(\alph*)] \item Consider the \texttt{index.html} available at \href{http://topcat.cs.umb.edu/book\_apps/ch01\_product\_discount}{here}, where it is displayed using the book's \texttt{main.css}. A different \texttt{main.css} is used to display it as shown in \ref{fig2}. Note the following: different colors, different font styles, different font sizes, wider overall (600px), more space around contents, wider border. Write a CSS file that shows at least approximately this display and show it in your paper. \begin{figure}\centering \includegraphics{\pngDirectory/hw02/hw02q04f02.png} \caption{Front-End of Product Discount Calculator as Given}\label{fig2} \end{figure} \item If you have a Windows system, describe how the above-mentioned \href{http://topcat.cs.umb.edu/book_apps/ch01_product_discount/}{page} shows in your version of Internet Explorer. \end{enumerate} \subsection*{Solution} \begin{enumerate}[label=(\alph*)] \item The web-page has been redesigned to appear just as identical as possible to the front-end given in Figure \ref{fig2}. The redesigned form is shown in \ref{fig3}. \begin{figure}\centering \includegraphics{\pngDirectory/hw02/hw02q04f03.png} \caption{Redesigned version of Product Discount Calculator} \label{fig3} \end{figure} And the \textit{CSS} code to convert the front-end given by the book application to what it is now is shown below. \begin{lstlisting} %\begin{minted}[fontsize=\small,tabsize=8,linenos, firstnumber=1,frame=lines,framerule=1pt]{css} body { font-family: Arial, Helvetica, sans-serif; } main { width: 90%; margin: 15px auto; padding: 2em; background: rgb(190, 230, 231); border: 5px solid navy; } h1 { margin-top: 0; color: red; } label { width: 10em; float: left; padding-right: 1em; padding-bottom: .5em; line-height: 25px; font-family: initial; font-size: 13px; } #data input { float: left; width: 15em; margin-bottom: .5em; } #data span { padding-left: .25em; } #buttons input { float: left; margin-bottom: .5em; } br { clear: left; } %\end{minted} \end{lstlisting} \item Although all browsers load the same source page, they interpret CSS differently and thus give possibly different presentations of a certain page. Internet Explorer, for instance, presented the web page with following differences. The web page looks as depicted in Figure \ref{fig4} in Internet Explorer. \begin{enumerate}[label=\arabic*.] \item The \texttt{margin: auto;} property is not recognized and thus the form is not aligned at the center. \item The border property of the \texttt{<main>} element has been disregarded. \item hovering over input elements of the form would change default border color of the elements to light blue. \item hovering over \texttt{input[type=submit]} element changes its background color to light blue. \item focusing on \texttt{input[type=text]} elements does not change their \texttt{outline}. \end{enumerate} \begin{figure}[H]\centering \includegraphics{\pngDirectory/hw02/hw02q04f04.png} \caption{Front-end view of Product Discount Calculator using Internet Explorer v10.0.9200.16635}\label{fig4} \end{figure} \end{enumerate}
\documentclass[twoside]{article} \usepackage{graphicx} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, } \usepackage{lipsum} % Package to generate dummy text throughout this template \usepackage[ backend=biber, style=alphabetic, sorting=ynt ]{biblatex} \bibliography{ref} \usepackage{listings} \lstset{ basicstyle=\small\ttfamily, columns=flexible, breaklines=true } \usepackage[sc]{mathpazo} % Use the Palatino font \usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs \linespread{1.05} % Line spacing - Palatino needs more space between lines \usepackage{microtype} % Slightly tweak font spacing for aesthetics \usepackage[hmarginratio=1:1,top=32mm,columnsep=20pt]{geometry} % Document margins \usepackage{multicol} % Used for the two-column layout of the document \usepackage[hang, small,labelfont=bf,up,textfont=it,up]{caption} % Custom captions under/above floats in tables or figures \usepackage{booktabs} % Horizontal rules in tables \usepackage{float} % Required for tables and figures in the multi-column environment - they need to be placed in specific locations with the [H] (e.g. \begin{table}[H]) \usepackage{hyperref} % For hyperlinks in the PDF \usepackage{lettrine} % The lettrine is the first enlarged letter at the beginning of the text \usepackage{paralist} % Used for the compactitem environment which makes bullet points with less space between them \usepackage{abstract} % Allows abstract customization \renewcommand{\abstractnamefont}{\normalfont\bfseries} % Set the "Abstract" text to bold \renewcommand{\abstracttextfont}{\normalfont\small\itshape} % Set the abstract itself to small italic text \usepackage{titlesec} % Allows customization of titles \renewcommand\thesection{\Roman{section}} % Roman numerals for the sections \renewcommand\thesubsection{\Roman{subsection}} % Roman numerals for subsections \titleformat{\section}[block]{\large\scshape\centering}{\thesection.}{1em}{} % Change the look of the section titles \titleformat{\subsection}[block]{\large}{\thesubsection.}{1em}{} % Change the look of the section titles \usepackage{fancyhdr} % Headers and footers \pagestyle{fancy} % All pages have headers and footers \fancyhead{} % Blank out the default header \fancyfoot{} % Blank out the default footer \fancyhead[C]{Distributed Systems $\bullet$ December 2016} % Custom header text \fancyfoot[RO,LE]{\thepage} % Custom footer text %---------------------------------------------------------------------------------------- % TITLE SECTION %---------------------------------------------------------------------------------------- \title{\vspace{-15mm}\fontsize{24pt}{10pt}\selectfont\textbf{SpeedReader: Read-Optimized Distributed Key Value Store}} % Article title \author{ \large \textsc{Alex Dao, Gautam Hathi, Joy Patel}\\[2mm] % Your name \normalsize Duke University \\ % Your institution \vspace{-5mm} } \date{14 December 2016} %---------------------------------------------------------------------------------------- \begin{document} \maketitle % Insert title \thispagestyle{fancy} % All pages have headers and footers %---------------------------------------------------------------------------------------- % ABSTRACT %---------------------------------------------------------------------------------------- \begin{abstract} \noindent SpeedReader is a read-optimized distributed key-value store. It is built on DDDFS, which load balances files based on the number of accesses to files and servers, rather than the storage capacity of servers. 
This allows for higher throughput for reads to keys that have more read demand. SpeedReader adds the capability to write to the system in a read-optimized way using versioned data and asynchronous, store-to-store writes. This allows the system to handle a read-heavy workload in a scalable manner while still accommodating writes without interruption to reads. In this paper, we describe the SpeedReader system and detail an implementation of SpeedReader.
\end{abstract}

%----------------------------------------------------------------------------------------
%	ARTICLE CONTENTS
%----------------------------------------------------------------------------------------

\begin{multicols}{2} % Two-column layout throughout the main article text

\section{Introduction}

Read-optimized storage is useful to have in situations with a high read workload but a low write workload. An example of such a workload is one that a CDN might face when serving a website that requires occasional content updates that can be propagated slowly through the system but which has large numbers of users accessing the website. SpeedReader provides such a service using a multi-tiered system that makes information available for reads while delaying writes to minimize read interference. This is accomplished through two main mechanisms: load-balancing and asynchronous write propagation.\\\indent Load balancing allows for read-optimization by distributing read workload across a number of stores to optimize resource use. Many classic load-balancing systems balance load in a familiar way: a central server sends heartbeats to its nodes, which in turn return statistics such as their storage capacity. Using this information, the central server balances replicas such that the storage utilization of its nodes is as balanced as possible. The methods of achieving balancing without a large overhead involve many tactics, including in-memory metadata to make operations occurring in the central server fast. We will refer to this well-explored form of load balancing as "storage load balancing".\\\indent Storage load balancing works well if there is the assumption that all keys are accessed with similar frequency. In this case, the number of accesses to a server is proportional to the number of keys it stores, so the accesses to the servers will be distributed evenly. However, there are many cases in which certain important keys are accessed more frequently than others. In this case, the servers that store these keys will have an increased share of accesses and may become a bottleneck. Thus, to further optimize our system for a read-heavy workload, we use a "performance load balancing" system developed in DDDFS, which aims to balance the location and replication of keys based on performance metrics, such as number of accesses or response times, rather than storage metrics. This will prevent bottlenecks from occurring due to many accesses to a single key or server. \\\indent However, with replication comes the issue of latency and consistency with writes. SpeedReader is designed to ensure that writes to a given key do not interfere with reads to that key, especially for keys that are frequently accessed. To account for this, we use versioned writes and implement an algorithm which propagates writes throughout all replicas that hold a given key.
This preserves availability and low latency for reads while allowing write propagation throughout the system.\\\indent Using Redis, an in-memory data structure store, we have implemented a simulation of a SpeedReader system on top of DDDFS.

%------------------------------------------------

\section{Implementation}

\subsection*{Infrastructure}
SpeedReader is a file system based on HDFS that uses performance load balancing in a write-occasionally, read-often environment. Thus, the architecture is focused on read performance. The ideal system will balance both performance load and storage load in order to truly optimize the use of resources. However, once it is shown that performance load and storage load can each be balanced on its own, combining them naturally becomes a question of algorithmic feasibility, which is left to future research.\\\indent Like HDFS, all metadata will be stored in memory on the central server, while the application data is separately stored on follower servers. \includegraphics[width=6.5cm]{res/server_diagram.jpg}\\\indent The above figure shows the design infrastructure. Clients send file system requests through the master server, which stores all metadata associated with the key-value pairs and followers. Requests are accepted through HTTP by a Java Spark web API. On initial write, the master randomly chooses among the known followers to store the file. This server is designated as the "original" server, and the file can never be removed from this server via rebalancing procedures. The original server is kept as a simple guarantee that a file will always exist somewhere in the file system. In addition to a mapping of each file to its original server, the master server also maintains a mapping of each file to all duplicates, a mapping of each server to all files stored on that server, timestamps per file, and a buffer of the most recent files and servers accessed. Our implementation sets the buffer size at 40 for testing, but in production this can be much higher. SpeedReader was primarily built with speed (reads) in mind and does not plan for frequent updates. Thus, older unread file metadata can be un-cached from Redis memory and eventually garbage-collected (or backed up to disk).
\subsection*{Storage Tier}
We have designed our storage tier to handle versioned reads and writes issued by the master. All writes to stores are versioned. When writing a value, clients submit a version number along with the key and value to the master, which then sends the key, value, and version to a randomly-chosen Follower. The master also gives the chosen Follower a list of stores which the write should be propagated to. If the version sent by the client is greater than the latest version seen by the chosen Follower, the Follower increments the version number for that key and the changes issued by the client overwrite previous versions. Otherwise, the Follower adds the value written by the client to the version list for that key. The Follower then returns a list of currently stored values for the written key to the master. Once a key/value is written to an individual Follower, that Follower then begins the propagation of the write to all other Followers which have that key. The chosen Follower issues an asynchronous write command to some number of the other "second-level" Followers designated by the master for propagation of the write. The chosen Follower also splits the remaining stores designated by the master for propagation among the second-level Followers.
These second-level Followers then repeat the process until the write is propagated to all designated replicas. Reads are issued to any store which holds a desired key. Each read returns an object containing all versions of the value for that key as well as the latest version number for that key. Each store may contain a different set of values for a key, depending on which versions the store has seen. The storage tier also presents an interface for key/value replication and value reconciliation. During key/value replication, each Follower accepts a key with all versions of the value written for that key. If the key is already present on the Follower, it rejects the replication. Value reconciliation works in much the same way as a write.
\subsection*{Master Tier}
SpeedReader uses a single master node that stores metadata about each follower and its data. Redis, an open-source, networked server, was chosen for use on the master server for two main reasons:
\begin{enumerate}
\item In-memory: Many of the rebalancing procedures and metadata maintenance require accesses to a large portion of metadata. By keeping this in memory, the overhead for maintaining the master is minimized. In-memory procedures done in the master should be overshadowed by the actual I/O to followers, which is known to take much longer to finish.
\item Data structure store: Redis supports storage of much more than just key-value pairs. This simplified our development process, as we could use its built-in set, map, and list operations.
\end{enumerate}
The master node is the bridge between the client tier and the server tier (which consists of $n$ followers). Clients send GET and POST requests to the master node, which has the appearance of a monolithic server. We use Java Spark as the framework for creating and controlling the endpoints. The master node then completes these requests by querying the appropriate followers in the server tier. In our implementation, we have done this by calling functions on references to \emph{FollowerService} objects, since we are testing locally. In real-world scenarios, this can easily be extended to remote procedure calls with minimal modification. Communication between the master and the server tier is handled via a simple, internal API that allows for reads, writes, and replication. Since the master node controls the location of all key-value pairs, it is able to redistribute data across followers without client intervention. An important application of this feature is load balancing, which spreads the burden of handling read requests across multiple followers. In general terms, this works by duplicating heavily-requested data to more followers via read rebalancing. We also move data from busy servers to less busy servers via server rebalancing. Details of the algorithm will be described further in this paper. It is important to note that each individual follower in the server tier has no knowledge of the status of other followers, which allows SpeedReader to be extremely flexible in adding new followers.
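To make the versioned-write rule from the Storage Tier subsection concrete, the following is a minimal Java sketch of a follower's write path. The class and method names here are illustrative only and do not correspond to the actual classes in our source code.
\begin{lstlisting}
import java.util.*;

// Sketch of a follower's versioned write (illustrative names).
public class FollowerSketch {
  private final Map<String, Integer> latestVersion = new HashMap<>();
  private final Map<String, List<String>> values = new HashMap<>();

  // Apply a client write; return every value currently stored for the key.
  public synchronized List<String> write(String key, String value, int clientVersion) {
    int current = latestVersion.getOrDefault(key, -1);
    if (clientVersion > current) {
      // Newer version: it replaces everything seen so far.
      latestVersion.put(key, clientVersion);
      values.put(key, new ArrayList<>(Collections.singletonList(value)));
    } else {
      // Same or older version: keep the value for client-side reconciliation.
      values.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }
    return values.get(key);
  }
}
\end{lstlisting}
In the running system, a list like this one is also what a read returns; whenever it contains more than one value, the client performs the reconciliation described in the Client Tier subsection below.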
If concurrent reconciliations occur, the client continues to reconcile until there is only one value. On a given write, the client must pass in the version of the file being written (usually the latest plus one) along with the new value to be written. This makes the assumption that the client has cached the file with a read before the write. \subsection*{Load Balancing Design} We have updated the two algorithms for load balancing from DDDFS for SpeedReader: file balancing and server balancing. For file balancing, the basic idea is to predict when a file will be read by many clients and to duplicate that file across many servers prior to that event. By doing so, clients can read in parallel, eliminating constraints imposed by network throughput or server performance. Server balancing focuses on a related idea when files on a specific server will be read by many clients. In this case, files are moved from the busy server to less busy servers. This should lead to the same performance improvements as file balancing. \\\indent It is necessary for SpeedReader to keep track of recent reads by clients in its metadata in order to perform these two operations. The file balancing operation determines the number of desired replicas for each file using the recent read metadata, and duplicates or deletes files accordingly. Servers designated to store duplicates are chosen at random among the cluster. In the simplest algorithm, a file's desired number of replicas is proportional to the number of recent reads. In our implementation, file balancing occurs every 10 seconds and server balancing occurs every 22 seconds. These intervals can be modified to fit the specific needs and read requests of each application. Thus, SpeedReader has only eventual consistency, depending on both the balancing operations and when each follower schedules its actual file transactions. Such a model is fitting for a system with a large number of reads of unlinked data. By implementing these two operations with low performance and space overhead, we hope to show that performance load balancing should be a consideration for future distributed file systems. Like HDFS, all metadata will be stored in memory on the central server, while the application data is separately stored on child nodes. Additionally, we will model our file movements and replications after HDFS not only for simplicity, but also as proof that such movements and replications can be done quickly (as shown by HDFS). By showing the viability of the low overhead "performance load balancer" in DDDFS, we aim to elucidate the possibility of using such a load balancer in existing file systems such as HDFS. \section{Difficulties} A difficulty encountered while using Redis was to find a memory and time-efficient algorithm for calculating how many duplicates were desired for a file. A naive algorithm would be to just keep access counts for every file from the creation of the file but that potentially uses memory exceeding the system (practical limit of around 100 GB). %------------------------------------------------ \section{Results} Our implementation's source code can be found at \href{https://github.com/alexdao/SpeedReader}{https://github.com/alexdao/SpeedReader}. To demonstrate the functionality of SpeedReader, a test suite is provided to simulate write, read, and update requests for a number of transactions. In the following test cases, files correspond to keys. 
\section{Difficulties}
A difficulty encountered while using Redis was finding a memory- and time-efficient algorithm for calculating how many duplicates of a file are desired. A naive algorithm would simply keep access counts for every file since its creation, but that could require more memory than the system has available (a practical limit of around 100 GB).

%------------------------------------------------

\section{Results}
Our implementation's source code can be found at \href{https://github.com/alexdao/SpeedReader}{https://github.com/alexdao/SpeedReader}. To demonstrate the functionality of SpeedReader, a test suite is provided that simulates write, read, and update requests over a number of transactions. In the following test cases, files correspond to keys.

\subsection{Simple test case}
Two clients perform writes to two different files, file1 and file2, at version 0.
\begin{lstlisting}
Client id 1 : Write file1 {version: 0, values: [2]}
Client id 1 : Read file1 {version: 0, values: [2]}
Client id 2 : Write file2 {version: 0, values: [4]}
Client id 2 : Read file1 {version: 0, values: [2]}
\end{lstlisting}
To simulate a concurrent write to file2, we perform another write with the same version number. We use version numbers as a proxy for time in our application to handle concurrency. If a client writes with a version number that is equal to or less than the current version number in the datastore, then we know that the client is writing without up-to-date knowledge. For this reason, we store all values of the current version number in a list and resolve the inconsistency when a client reads the value later. As shown below, file2 is then mapped to both values 4 and 5, which is eventually reconciled by the client.
\begin{lstlisting}
Client id 1 : Read file2 {version: 0, values: [4]}
Client id 1 : Write file2 {version: 0, values: [4, 5]}
Client id 1 : Reconciling file2 {version: 0, values: [4, 5]}
\end{lstlisting}
To demonstrate performance load balancing, we perform 10 reads in a row on file2. After the balancing operation, file2 is duplicated to 6 more servers (out of 10) because it is such a heavily requested file.
\begin{lstlisting}
File locations with (serverNum, value) pairs:
file2: (0,[10]) (1,[10]) (2,[10]) (4,[10]) (6,[10]) (7,[10]) (9,[10])
file1: (4,[2]) (7,[2])
\end{lstlisting}

\subsection{Testing write propagation and concurrent writes}
To demonstrate concurrent writes, we first have 10 clients each read file1 10 times (100 reads in total). Then each client performs a write. Because these writes all carry the same version number, all of their values are stored.
\begin{lstlisting}
File locations with (serverNum, value) pairs:
file1: (0,[0]) (1,[0]) (2,[0]) (3,[0]) (4,[0]) (5,[0]) (6,[0]) (7,[0]) (8,[0]) (9,[0])
Client id 1 : Write file1 {version: 0, values: [0]}
Client id 2 : Write file1 {version: 0, values: [0, 1]}
Client id 3 : Write file1 {version: 0, values: [0, 1, 2]}
Client id 4 : Write file1 {version: 0, values: [0, 1, 2, 3]}
Client id 5 : Write file1 {version: 0, values: [0, 1, 2, 3, 4]}
Client id 6 : Write file1 {version: 0, values: [0, 1, 2, 3, 4, 5]}
Client id 7 : Write file1 {version: 0, values: [0, 1, 2, 3, 4, 5, 6]}
Client id 8 : Write file1 {version: 0, values: [0, 1, 2, 3, 4, 5, 6, 7]}
Client id 9 : Write file1 {version: 0, values: [0, 1, 2, 3, 4, 5, 6, 7, 8]}
Client id 10 : Write file1 {version: 0, values: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
\end{lstlisting}
Finally, on subsequent reads, concurrent reconciliations occur as the multiple values are reduced to a single value.
\begin{lstlisting}
Client id 1 : Reconciling file1 {version: 0, values: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
Client id 1 : Read file1 {version: 1, values: [0]}
Client id 2 : Write file1 {version: 1, values: [0, 1]}
Client id 3 : Reconciling file1 {version: 1, values: [0, 1]}
Client id 3 : Read file1 {version: 2, values: [1]}
File locations with (serverNum, value) pairs:
file1: (0,[1]) (1,[1]) (2,[1]) (3,[1]) (4,[1]) (5,[1]) (6,[1]) (7,[1]) (8,[1]) (9,[1])
\end{lstlisting}
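For clarity, the version rule that produces the listings above can be summarized in a short sketch. It is written in Python for brevity (the actual implementation is in Java), and the in-memory \texttt{store} dictionary is a hypothetical stand-in for a follower's datastore.

\begin{lstlisting}
# Hypothetical sketch of how a store accepts writes under the version rule.
def accept_write(store, key, version, value):
    current = store.get(key, {"version": -1, "values": []})
    if version > current["version"]:
        # Newer version: replace the list of candidate values entirely.
        store[key] = {"version": version, "values": [value]}
    elif version == current["version"]:
        # Concurrent write at the same version: keep all candidate values
        # and let a later read trigger client-side reconciliation.
        current["values"].append(value)
    # version < current["version"]: stale write, the newer data is kept as is.
    return store[key]
\end{lstlisting}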
%------------------------------------------------

\section{Future Work}
SpeedReader leaves open a number of avenues for further improving systems for read-optimized workloads. The DDDFS paper itself notes that the performance read-balancing algorithms implemented in DDDFS and reused here are quite simple and can be improved. Work can also be done to benchmark the performance gains and compare them with existing systems. Furthermore, SpeedReader does not yet provide full eventual consistency for writes, and our current implementation is a simulation rather than an actual distributed deployment.

\subsection*{Workload Analysis}
The GFS paper claims that disproportionate access to a single file occurred only in rare cases and was not a concern \cite{GFS}. By benchmarking existing distributed file systems and comparing performance between a ``regular'' workload (one with evenly distributed file accesses) and a workload with concentrated file accesses, we can measure how much of a bottleneck concentrated file accesses actually cause.

\subsection*{Rebalancing Algorithm}
The rebalancing algorithms implemented for DDDFS are extremely simple, as the DDDFS paper points out. It is possible to use different algorithms, or to improve the existing ones so that inefficient sequences of changes do not occur. For example, moving a key $x$ from server 1 to server 2 and then replicating key $x$ back onto server 1 would be inefficient; more work should be spent on avoiding such cases. A completely different approach would be to use heuristics to predict future accesses and the desired number of replicas. Currently, in server balancing, we move a random key from the busiest server to the least busy one. Instead of a randomly chosen key, we could move the most heavily accessed files (which can be found in near-$O(1)$ time). In addition, file balancing and server balancing are currently both performed at fixed intervals. The balancing should ideally be performed during lulls in activity. As the actual movement of data across follower servers is expensive (due to I/O), balance detection and calculation can be performed often, but the rebalancing itself should only be executed when deemed necessary.

\subsection*{Consistency}
SpeedReader currently handles failures by writing to the live servers and relying on subsequent writes. This does not provide full eventual consistency, since a Follower that misses a write (either because of a failure or a network issue) will never receive that update. We believe that the current system will work for most scenarios, since writes are written to most replicas and new writes after a failure are written to all replicas. However, a future effort could implement full eventual consistency for SpeedReader.

\subsection*{Implementation}
We have currently implemented SpeedReader as a simulation on a single machine with limited asynchronous components. The system could be implemented on an actual distributed system in the future.

%----------------------------------------------------------------------------------------
%	REFERENCE LIST
%----------------------------------------------------------------------------------------

\begin{thebibliography}{99} % Bibliography - this is intentionally simple in this template

\bibitem{DDDFS} Alex Dao, Jiawei Zhang, and Danny Oh.
\newblock Detailed Diagnostic Distributed File System.
\newblock \textit{CS510: Graduate Operating Systems}.
\newblock Duke University, 2016.
\bibitem{Dynamo} Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels.
\newblock Dynamo: Amazon's Highly Available Key-value Store.
\newblock \textit{SOSP 2007, October 14--17, 2007}.
\newblock Amazon, 2007.

\bibitem{Dynamic Load Balancing} Valeria Cardellini, Michele Colajanni, and Philip S. Yu.
\newblock Dynamic Load Balancing on Web-server Systems.
\newblock \textit{IEEE Internet Computing}, vol. 3, no. 3, pp. 28--39, May--June 1999.
\newblock IEEE, 1999.

\bibitem{GFS} Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung.
\newblock The Google File System.
\newblock \textit{ACM Symposium on Operating Systems Principles}.
\newblock Google, 19 October 2003.

\bibitem{Spark} Java Spark documentation.
\newblock http://sparkjava.com/documentation.html

\bibitem{HDFS} Jeffrey J. Hanson.
\newblock An Introduction to the Hadoop Distributed File System.
\newblock \textit{developerWorks}.
\newblock IBM, 1 February 2011.

\bibitem{Hadoop} Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler.
\newblock The Hadoop Distributed File System.
\newblock \textit{MSST '10: Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies}.
\newblock Yahoo!, 2010.

\bibitem{Redis} Redis.
\newblock http://redis.io/

\end{thebibliography}

%----------------------------------------------------------------------------------------

\end{multicols}
\end{document}
{ "alphanum_fraction": 0.7592029325, "avg_line_length": 97.8778625954, "ext": "tex", "hexsha": "a8722e8c0af72e09dbd45650a607a0aa56268d32", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "022a51b850869c1b443a2698c98889faa88d67d7", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "alexdao/LBDS", "max_forks_repo_path": "doc/project_final_report.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "022a51b850869c1b443a2698c98889faa88d67d7", "max_issues_repo_issues_event_max_datetime": "2016-04-20T20:39:54.000Z", "max_issues_repo_issues_event_min_datetime": "2016-04-20T20:39:54.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "alexdao/LBDS", "max_issues_repo_path": "doc/project_final_report.tex", "max_line_length": 1115, "max_stars_count": null, "max_stars_repo_head_hexsha": "022a51b850869c1b443a2698c98889faa88d67d7", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "alexdao/LBDS", "max_stars_repo_path": "doc/project_final_report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5734, "size": 25644 }
% !TEX root = ../zeth-protocol-specification.tex

\chapter{Fuzzy message detection}\label{appendix:fmd}

As explained in~\ref{zeth-protocol:zeth-receive} and~\ref{client-security:syncing}, in order to receive $\zethnotes$, a $\zeth$ user must listen on a broadcast channel and try to decrypt all encrypted events emitted by the $\mixer$ contract. While this provides the best potential for indistinguishability (all users scan the chain data and exhibit the same behavior), such a routine is particularly expensive to carry out, especially for computationally restricted users (i.e.~users with computationally limited devices).

As a way to trade off users' anonymity against the cost of the message-detection routine in privacy-preserving protocols, Beck et al.~\cite{DBLP:journals/iacr/BeckLMG21} introduced the notion of \emph{fuzzy message detection schemes}. These protocols allow message detection to be delegated to untrusted servers without revealing precisely which messages belong to the receiver, by letting receivers enforce a false-positive detection rate. Such schemes provide a promising avenue for reconciling recipient anonymity (via \emph{key ambiguity} and message \emph{detection ambiguity}) with the performance of the $\zethnotes$ receiving algorithm, which currently needs to run on a machine belonging to (or trusted by) the recipient.

Nevertheless, selecting the fuzzy-detection parameters for $\zeth$ is a challenge, especially the false-positive rate. Under the scheme presented in \cite{DBLP:journals/iacr/BeckLMG21}, not only is this parameter public (an additional ``leakage'' of information\footnote{limited to one server (in the best case), or to the whole network (in the worst case --- if the adversary broadcasts all its known information)}, including to potentially adversarial nodes), but it is also likely to be set to different values by different users, based on the number of payments they receive through $\zeth$. This, coupled with the existing gas-related leakages, would increase the set of information leakages in the protocol, the consequences of which are hard to estimate properly.

Furthermore, letting users set such parameters raises other challenges for wallet developers, user experience (UX) engineers and documentation engineers. In fact, any degree of freedom given to the user increases the potential for ``deviation'' from the ``expected/indistinguishable'' behavior. Hence, UX, documentation and wallet engineers must suggest sensible default values for such parameters, document their purpose extensively, and educate end-users to maximize the chances of adequate parameter selection. While feasible, such tasks rely largely on modeling efforts\footnote{See e.g.~\url{https://git.openprivacy.ca/openprivacy/fuzzytags-sim}}, which simplify real-world systems and can only be used to simulate a limited set of situations. Moreover, not being able to update the false-positive rate easily (i.e.~without distributing new keys) is problematic in the context of $\zeth$, as it prevents users from adapting their false-positive probability to account for potential spikes in the number of payments they receive (e.g.~a merchant during sales). On the other hand, and as mentioned above, being able to use \emph{fuzzy message detection schemes} in $\zeth$ would also widen the user base of the protocol, which would in turn widen the anonymity set.
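To make the trade-off concrete, consider a (purely illustrative) user who is the recipient of $m$ of the $N$ encrypted events emitted on chain, and who instructs a detection server to use a false-positive rate $p$. In expectation, the server flags approximately
\[
m + p \cdot (N - m)
\]
events as candidates for that user. For instance, with $N = 10\,000$, $m = 50$ and $p = 2^{-4}$, the user downloads and trial-decrypts roughly $672$ candidates instead of all $10\,000$ events, while the server only learns that each flagged event \emph{may} belong to the user. These numbers are given for illustration only and are not part of the $\zeth$ specification.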
{ "alphanum_fraction": 0.8056265985, "avg_line_length": 234.6, "ext": "tex", "hexsha": "709a263c6249e866557cfbd1c2458e69434b4d78", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-07-26T04:51:29.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-26T04:51:29.000Z", "max_forks_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "clearmatics/zeth-specifications", "max_forks_repo_path": "appendices/appendix05-fmd.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_issues_repo_issues_event_max_datetime": "2021-04-16T10:57:05.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-27T10:41:50.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "clearmatics/zeth-specifications", "max_issues_repo_path": "appendices/appendix05-fmd.tex", "max_line_length": 1463, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ba29c67587395f5c7b26b52ee7ab9cba12f1cc6b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "clearmatics/zeth-specifications", "max_stars_repo_path": "appendices/appendix05-fmd.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-29T18:22:00.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-29T18:22:00.000Z", "num_tokens": 737, "size": 3519 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{url}
\usepackage[margin=0.75in]{geometry}

\setlength{\parskip}{0.7em}
\setlength{\parindent}{0em}

\begin{document}

\begin{center}
\LARGE{\textbf{CSE 6730, Group Proposal}} \\
\vspace{1em}
\Large{Project 2: Complex Simulation} \\
\end{center}

\begin{normalsize}

\section{Project Title}
Simulation of Predator-Prey Population Dynamics

\section{Team Members}
\begin{enumerate}
    \item D. Aaron Hillegass (GTID 901988533)
    \item Siawpeng Er (GTID 903413430)
    \item Xiaotong Mu (GTID 903529807)
\end{enumerate}

\section{Problem Description and Purpose}

The predator-prey relationship is an important part of many ecological systems. The populations of predators and prey rise and fall over time as they interact and impact one another, and these interactions are the prime movers of energy through food chains. In the simplest interaction, predators depend on the prey as their food source; however, over-exploitation of that food source decreases the prey population and subsequently decreases the number of predators due to lack of food. Because of this interaction, the populations of predators and prey may oscillate, with the two populations varying roughly inversely to one another. \\

The predator-prey relationship is also important for understanding its impact on the ecological system of a given area, and such relationships are always complicated. Without predators, prey (normally herbivores) can have a detrimental impact on the plants in that area; however, overkill by predators may also upset the balance of nature. In addition, human intervention affects the relationship (e.g., hunting and destruction of habitat). Furthermore, predator-prey models can describe many fundamental characteristics of ecological systems and can even be extended to other domains such as military response \cite{derrik}.\\

One of the mathematical models that describes predator and prey interactions is the Lotka-Volterra model, proposed by Alfred Lotka and Vito Volterra. Lotka helped develop the logistic equation to explain autocatalytic chemical reactions, and Volterra extended these ideas to two separate populations in competition in order to explain predator-prey relationships. We hope to use this intuitive model in our complex-system simulation, so that we can gain a better understanding of the relationship, as well as of the impact of human activities on it.

\section{Data Source}
For this project, we plan to obtain some data from the National Park. It is also possible that, through a literature review, we can obtain some of the data sources used in prior simulations and use them as our data source.

\section{Methodology}
Our simulation will first model predators and prey entering and exiting a predefined area; through their interactions, the two populations then affect each other. Traditionally, the nonlinear Lotka-Volterra model (LVM) is used to describe the predator-prey dynamic system \cite{inproceedings, 1102729}; the standard form of the model and a minimal simulation sketch are given at the end of this section. The LVM is a simplified model and is suitable for detailed stability analysis; however, it is also very limited and lacks the flexibility to capture complex interactions. Hence, we also hope to incorporate an agent-based model \cite{Hodzic} in this project to increase the completeness of our analysis. Some of the ideas that we wish to investigate include:

\begin{enumerate}
    \item Long-term population interaction between predators and prey.
    \item Introduction of uncertainties such as disease.
    \item Introduction of third-party interactions: human activity, natural disasters, etc.
\end{enumerate}
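For reference, the standard textbook form of the Lotka-Volterra model couples the prey population $x(t)$ and the predator population $y(t)$ through
\[
\frac{dx}{dt} = \alpha x - \beta x y,
\qquad
\frac{dy}{dt} = \delta x y - \gamma y,
\]
where $\alpha$ is the prey growth rate, $\beta$ the predation rate, $\delta$ the predator reproduction rate per prey consumed, and $\gamma$ the predator death rate.

As a preview of the kind of code the Jupyter notebook (see the Development Platform section below) will contain, the following is a minimal Python sketch of the model using simple Euler integration; the parameter values are illustrative placeholders and are not calibrated to any data set.

\begin{verbatim}
# Minimal Euler-integration sketch of the Lotka-Volterra model.
# All parameter values below are illustrative placeholders.
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4
x, y, dt = 10.0, 10.0, 0.01      # initial prey, initial predators, time step

history = []
for step in range(50000):        # simulate 500 time units
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = max(x + dx, 0.0), max(y + dy, 0.0)
    history.append((step * dt, x, y))
\end{verbatim}

The agent-based model, by contrast, would track individual predator and prey agents rather than these aggregate population totals, which makes it easier to add the uncertainties and third-party interventions listed above.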
\section{Development Platform}
The programming language is Python 3. We will provide a Jupyter notebook for user interaction, in which the user can change some of the probabilities and simulation parameters and observe how the results of the simulation change.

\section{Division of Labor}
As we move forward on our project, we plan to work on these tasks concurrently. The timeline is as follows:

\begin{center}
\begin{tabular}{ |c|c| }
\hline
Task & Duration \\
\hline
Data collection & 2 weeks \\
Model design and implementation & 4 weeks \\
Model revision & 4 weeks \\
\hline
\end{tabular}
\end{center}

\bibliographystyle{plain}
\bibliography{reference}

\end{normalsize}
\end{document}
{ "alphanum_fraction": 0.7744541485, "avg_line_length": 57.25, "ext": "tex", "hexsha": "ad958ab856b1f615f396a19eaebd8752a483d4a6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hillegass/complex-sim", "max_forks_repo_path": "documentation/proposal/proposal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hillegass/complex-sim", "max_issues_repo_path": "documentation/proposal/proposal.tex", "max_line_length": 660, "max_stars_count": 2, "max_stars_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hillegass/complex-sim", "max_stars_repo_path": "documentation/proposal/proposal.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-17T00:45:54.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-05T20:57:14.000Z", "num_tokens": 1081, "size": 4580 }
\subsection{Qualitative} \label{subsec:experiments-qualitative} \def\BSDSCroppedScale{0.25} \def\SBDCroppedScale{0.3} \def\FashCroppedScale{0.195} \begin{figure*} \centering \vspace{-0.5cm} % W %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaai}\W} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \BSDS \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/bsds500/w/cropped/w_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sbd/w/cropped/w_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \Fash \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/fash/w/cropped/w_010_contours} \end{subfigure} % EAMS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{ai}\EAMS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \BSDS \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/bsds500/eams/cropped/eams_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sbd/eams/cropped/eams_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \Fash \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/fash/eams/cropped/eams_010_contours} \end{subfigure}\\ % NC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\NC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/nc/cropped/nc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/nc/cropped/nc_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/nc/cropped/nc_010_contours} \end{subfigure} % FH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\FH} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/fh/cropped/fh_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/fh/cropped/fh_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/fh/cropped/fh_010_contours} \end{subfigure}\\ % RW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\RW} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/rw/cropped/rw_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/rw/cropped/rw_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/rw/cropped/rw_010_contours} \end{subfigure} % QS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\QS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} 
\includegraphics[height=1.65cm]{pictures/bsds500/qs/cropped/qs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/qs/cropped/qs_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/qs/cropped/qs_010_contours} \end{subfigure}\\ % PF %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PF} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/pf/cropped/pf_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/pf/cropped/pf_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/pf/cropped/pf_010_contours} \end{subfigure} % TP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\TP} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/tp/cropped/tp_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/tp/cropped/tp_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/tp/cropped/tp_010_contours} \end{subfigure}\\ % CIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CIS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/cis/cropped/cis_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/cis/cropped/cis_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/cis/cropped/cis_010_contours} \end{subfigure} % SLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\SLIC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/slic/cropped/slic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/slic/cropped/slic_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/slic/cropped/slic_010_contours} \end{subfigure}\\ % CRS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CRS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/crs/cropped/crs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/crs/cropped/crs_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/crs/cropped/crs_010_contours} \end{subfigure} % ERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\ERS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ers/cropped/ers_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} 
\includegraphics[height=1.65cm]{pictures/sbd/ers/cropped/ers_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/ers/cropped/ers_010_contours} \end{subfigure}\\ % PB %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PB} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/pb/cropped/pb_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/pb/cropped/pb_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/pb/cropped/pb_010_contours} \end{subfigure} % SEEDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\SEEDS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/seeds/cropped/seeds_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/seeds/cropped/seeds_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/seeds/cropped/seeds_010_contours} \end{subfigure}\\ % TPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\TPS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/tps/cropped/tps_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/tps/cropped/tps_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/tps/cropped/tps_010_contours} \end{subfigure} % VC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\VC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/vc/cropped/vc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/vc/cropped/vc_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/vc/cropped/vc_010_contours} \end{subfigure}\\ % CCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CCS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ccs/cropped/ccs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/ccs/cropped/ccs_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/ccs/cropped/ccs_010_contours} \end{subfigure} % CW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\CW} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/cw/cropped/cw_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/cw/cropped/cw_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} 
\includegraphics[height=1.65cm]{pictures/fash/cw/cropped/cw_010_contours} \end{subfigure}\\ % ERGC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ERGC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ergc/cropped/ergc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/ergc/cropped/ergc_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/ergc/cropped/ergc_010_contours} \end{subfigure} % MSS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\MSS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/mss/cropped/mss_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/mss/cropped/mss_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/mss/cropped/mss_010_contours} \end{subfigure}\\ % preSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\preSLIC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/preslic/cropped/preslic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/preslic/cropped/preslic_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/preslic/cropped/preslic_010_contours} \end{subfigure} % WP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\WP} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/wp/cropped/wp_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/wp/cropped/wp_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/wp/cropped/wp_010_contours} \end{subfigure}\\ % ETPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ETPS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/etps/cropped/etps_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/etps/cropped/etps_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/etps/cropped/etps_010_contours} \end{subfigure} % LSC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\LSC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/lsc/cropped/lsc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/lsc/cropped/lsc_0004774_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/lsc/cropped/lsc_010_contours} \end{subfigure}\\ % POISE 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
\rotatebox{90}{\small\hphantom{a}\POISE}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[height=1.65cm]{pictures/bsds500/poise/cropped/poise_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
\includegraphics[height=1.65cm]{pictures/sbd/poise/cropped/poise_0004774_contours}
\end{subfigure}
\begin{subfigure}[b]{0.10\textwidth}
\includegraphics[height=1.65cm]{pictures/fash/poise/cropped/poise_010_contours}
\end{subfigure}
% SEAW
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
\rotatebox{90}{\small\hphantom{ai}\SEAW}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[height=1.65cm]{pictures/bsds500/seaw/cropped/seaw_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
\includegraphics[height=1.65cm]{pictures/sbd/seaw/cropped/seaw_0004774_contours}
\end{subfigure}
\begin{subfigure}[b]{0.10\textwidth}
\includegraphics[height=1.65cm]{pictures/fash/seaw/cropped/seaw_010_contours}
\end{subfigure}
\caption{Qualitative results on the \BSDS, \SBD and \Fash datasets. Excerpts from the images in Figure \ref{fig:datasets} are shown for $K \approx 400$ in the upper left corner and $K \approx 1200$ in the lower right corner. Superpixel boundaries are depicted in black; best viewed in color. We judge visual quality on the basis of boundary adherence, compactness, smoothness and regularity. Boundary adherence can be judged both on the caterpillar image and on the woman image -- the caterpillar's boundaries are hard to detect and the woman's face exhibits small details. In contrast, compactness, regularity and smoothness can be evaluated considering the background in the caterpillar and sea images.
\textbf{Best viewed in color.}} \label{fig:experiments-qualitative-bsds500-sbd-fash} \end{figure*} \def\NYUCroppedScale{0.18} \def\SUNRGBDCroppedScale{0.14} \begin{figure*} \centering % NC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\NC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/nc/cropped/nc_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/nc/cropped/nc_00000285_contours_scaled} \end{subfigure} % RW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\RW} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/rw/cropped/rw_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/rw/cropped/rw_00000285_contours_scaled} \end{subfigure} % SEAW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{ai}\SEAW} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/seaw/cropped/seaw_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/seaw/cropped/seaw_00000285_contours_scaled} \end{subfigure}\\[4px] % W %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaai}\W} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/w/cropped/w_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/w/cropped/w_00004732_contours} \end{subfigure} % EAMS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{ai}\EAMS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/eams/cropped/eams_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/eams/cropped/eams_00004732_contours} \end{subfigure} % FH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\FH} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/fh/cropped/fh_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/fh/cropped/fh_00004732_contours} \end{subfigure}\\ % QS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} 
\rotatebox{90}{\small\hphantom{aaa}\QS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/qs/cropped/qs_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/qs/cropped/qs_00004732_contours} \end{subfigure} % PF %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PF} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/pf/cropped/pf_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/pf/cropped/pf_00004732_contours} \end{subfigure} % TP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\TP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/tp/cropped/tp_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/tp/cropped/tp_00004732_contours} \end{subfigure}\\ % CIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CIS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/cis/cropped/cis_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/cis/cropped/cis_00004732_contours} \end{subfigure} % SLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\SLIC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/slic/cropped/slic_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/slic/cropped/slic_00004732_contours} \end{subfigure} % CRS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CRS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/crs/cropped/crs_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/crs/cropped/crs_00004732_contours} \end{subfigure}\\ % ERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\ERS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ers/cropped/ers_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ers/cropped/ers_00004732_contours} \end{subfigure} % PB %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PB} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/pb/cropped/pb_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/pb/cropped/pb_00004732_contours} \end{subfigure} % DASP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
\begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\DASP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/dasp/cropped/dasp_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/dasp/cropped/dasp_00004732_contours} \end{subfigure}\\ % SEEDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\SEEDS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/seeds/cropped/seeds_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/seeds/cropped/seeds_00004732_contours} \end{subfigure} % TPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\TPS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/tps/cropped/tps_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/tps/cropped/tps_00004732_contours} \end{subfigure} % VC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaai}\VC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/vc/cropped/vc_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/vc/cropped/vc_00004732_contours} \end{subfigure}\\ % CCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CCS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ccs/cropped/ccs_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ccs/cropped/ccs_00004732_contours} \end{subfigure} % VCCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\VCCS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/vccs/cropped/vccs_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/vccs/cropped/vccs_00004732_contours} \end{subfigure} % CW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\CW} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/cw/cropped/cw_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/cw/cropped/cw_00004732_contours} \end{subfigure}\\ % ERGC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ERGC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ergc/cropped/ergc_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ergc/cropped/ergc_00004732_contours} \end{subfigure} % MSS 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\MSS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/mss/cropped/mss_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/mss/cropped/mss_00004732_contours} \end{subfigure} % preSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\preSLIC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/preslic/cropped/preslic_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/preslic/cropped/preslic_00004732_contours} \end{subfigure}\\ % WP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\WP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/wp/cropped/wp_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/wp/cropped/wp_00004732_contours} \end{subfigure} % LRW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{subfigure}[b]{0.02\textwidth} % \rotatebox{90}{\small\LRW} %\end{subfigure} %\begin{subfigure}[b]{0.1375\textwidth} % \includegraphics[height=1.65cm]{pictures/nyuv2/lrw/cropped/lrw_00000561_contours} %\end{subfigure} %\begin{subfigure}[b]{0.129\textwidth} % \includegraphics[height=1.65cm]{pictures/nyuv2/lrw/cropped/lrw_00000561_contours} %\end{subfigure}\\ % ETPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ETPS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/etps/cropped/etps_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/etps/cropped/etps_00004732_contours} \end{subfigure} % LSC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\LSC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/lsc/cropped/lsc_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/lsc/cropped/lsc_00004732_contours} \end{subfigure}\\ % POISE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\POISE} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/poise/cropped/poise_00000561_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/poise/cropped/poise_00004732_contours} \end{subfigure} % Puffer \begin{subfigure}[b]{0.02\textwidth} \hphantom{aa} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaii} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaa} \end{subfigure} % Puffer \begin{subfigure}[b]{0.02\textwidth} \hphantom{aa} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} 
\hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaii}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
\hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaa}
\end{subfigure}
\caption{Qualitative results on the \NYU and \SUNRGBD datasets. Excerpts from the images in Figure \ref{fig:datasets} are shown for $K \approx 400$ in the upper left corner and $K \approx 1200$ in the lower right corner. Superpixel boundaries are depicted in black; best viewed in color. \NC, \RW and \SEAW could not be evaluated on the \SUNRGBD dataset due to the excessive memory usage of the corresponding MATLAB implementations; therefore, results for the \NYU dataset are shown instead. Visual quality is judged with regard to boundary adherence, compactness, smoothness and regularity. We also find that depth information, as used in \DASP and \VCCS, may help superpixels resemble the underlying 3D-structure.
\textbf{Best viewed in color.}}
\label{fig:experiments-qualitative-nyuv2-sunrgbd}
\end{figure*}
\begin{figure*}
\centering
% SLIC
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
\rotatebox{90}{\small\hphantom{aa}\SLIC}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/1/cropped/slic_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/10/cropped/slic_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/80/cropped/slic_35028_contours}
\end{subfigure}
% CRS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
\rotatebox{90}{\small\hphantom{aai}\CRS}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.001/cropped/crs_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.01/cropped/crs_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.1/cropped/crs_35028_contours}
\end{subfigure}
\caption{The influence of a low (left) and a high (right) compactness parameter, demonstrated on the caterpillar image from the \BSDS dataset using \SLIC and \CRS for $\K \approx 400$. Superpixel boundaries are depicted in black; best viewed in color. Superpixel algorithms providing a compactness parameter allow boundary adherence to be traded for compactness.
\textbf{Best viewed in color.}}
\label{fig:experiments-qualitative-bsds500-compactness}
\end{figure*}

Visual quality is best determined by considering compactness, regularity and smoothness on the one hand and boundary adherence on the other. Here, compactness refers to the area covered by individual superpixels (as captured in Equation \eqref{eq:co}); regularity corresponds to both the superpixels' sizes and their arrangement; and smoothness refers to the superpixels' boundaries. Figures \ref{fig:experiments-qualitative-bsds500-sbd-fash} and \ref{fig:experiments-qualitative-nyuv2-sunrgbd} show results on all datasets. We begin by discussing boundary adherence, in particular with regard to the difference between superpixel and oversegmentation algorithms, before considering compactness, smoothness and regularity.
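For reference, a commonly used compactness measure of this kind is the size-weighted isoperimetric quotient
\[
CO = \sum_{j} \frac{|S_j|}{N} \cdot \frac{4 \pi A(S_j)}{P(S_j)^2},
\]
where $A(S_j)$ and $P(S_j)$ denote the area and perimeter of superpixel $S_j$ and $N$ is the total number of pixels; Equation \eqref{eq:co} is assumed to be of this general form, which is recalled here only for convenience.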
The majority of algorithms provide solid adherence to important image boundaries, especially for large \K.
%Except for a handful of algorithms, boundary adherence is mostly good, especially
%for larger \K.
We consider the woman image -- in particular, the background -- and the caterpillar image in Figure~\ref{fig:experiments-qualitative-bsds500-sbd-fash}. Algorithms with inferior boundary adherence are easily identified as those not capturing the pattern in the background or the silhouette of the caterpillar: \FH, \QS, \CIS, \PF, \PB, \TPS, \TP and \SEAW. The remaining algorithms do not necessarily capture all image details, such as the woman's face, but important image boundaries are consistently captured. We note that of the three evaluated oversegmentation algorithms, \ie \EAMS, \FH and \QS, only \EAMS demonstrates adequate boundary adherence. Furthermore, we observe that increasing \K results in more details being captured by all algorithms. Notable algorithms regarding boundary adherence include \CRS, \ERS, \SEEDS, \ERGC and \ETPS. These algorithms are able to capture even smaller details such as the coloring of the caterpillar or elements of the woman's face.
%Overall, most algorithms are capable of capturing important
%image boundaries.

%After discussing boundary adherence, we are going to focus more closely on compactness, smoothness and regularity.
%While these properties are inherently linked, we discuss compactness separately
%as many superpixel algorithms provide a compactness parameter.
Compactness varies strongly across algorithms, and a compactness parameter is beneficial for controlling the degree of compactness, as it allows boundary adherence to be traded gradually for compactness. We consider the caterpillar image in Figure~\ref{fig:experiments-qualitative-bsds500-sbd-fash}. \TP, \RW, \W, and \PF are examples of algorithms not providing a compactness parameter. While \TP generates very compact superpixels and \RW tends to resemble grid-like superpixels, \W and \PF generate highly non-compact superpixels. In this regard, compactness depends on algorithm and implementation details (\eg grid-like initialization) and varies across algorithms. For algorithms providing control over the compactness of the generated superpixels, we find that parameter optimization has a strong impact on compactness. Examples are \CRS, \LSC, \ETPS and \ERGC, which show highly irregular superpixels, while \SLIC, \CCS, \VC and \WP generate more compact superpixels. For \DASP and \VCCS, which require depth information, similar observations can be made on the kitchen image in Figure \ref{fig:experiments-qualitative-nyuv2-sunrgbd}. In spite of the influence of parameter optimization, we find that a compactness parameter is beneficial. This can best be observed in Figure~\ref{fig:experiments-qualitative-bsds500-compactness}, showing superpixels generated by \SLIC and \CRS for different degrees of compactness. We observe that compactness can be increased while only gradually sacrificing boundary adherence.
%Altogether, we find that a compactness parameter is important for controlling the
%degree of compactness as not all algorithms necessarily generate compact superpixels.

We find that compactness does not necessarily induce regularity and smoothness; some algorithms, however, are able to unite compactness, regularity and smoothness. Considering the sea image in Figure~\ref{fig:experiments-qualitative-bsds500-sbd-fash} for \CIS and \TP, we observe that compact superpixels are not necessarily arranged regularly.
Similarly, compact superpixels do not need to exhibit smooth boundaries, as can be seen for \PB. On the other hand, compact superpixels are often generated in a regular fashion, as can be seen for many algorithms providing a compactness parameter such as \SLIC, \VC and \CCS. In such cases, compactness also induces smoother and more regular superpixels. We also observe that many algorithms exhibiting excellent boundary adherence, such as \CRS, \SEEDS or \ETPS, generate highly irregular and non-smooth superpixels. These observations also justify the separate consideration of compactness, regularity and smoothness to judge visual quality. While the importance of compactness, regularity and smoothness may depend on the application at hand, these properties represent the trade-off between abstraction from and sensitivity to low-level image content, which is inherent to all superpixel algorithms.

In conclusion, we find that the evaluated path-based and density-based algorithms as well as the oversegmentation algorithms show inferior visual quality. On the other hand, clustering-based, contour evolution and iterative energy optimization algorithms mostly demonstrate good boundary adherence, and some provide a compactness parameter, \eg \SLIC, \ERGC and \ETPS. Graph-based algorithms show mixed results -- algorithms such as \FH, \CIS and \PB show inferior boundary adherence, while \ERS, \RW, \NC and \POISE exhibit better boundary adherence. However, good boundary adherence, especially regarding details in the image, often comes at the price of lower compactness, regularity and/or smoothness, as can be seen for \ETPS and \SEEDS. Furthermore, compactness, smoothness and regularity are not necessarily linked and should be discussed separately.

\subsubsection{Compactness}
\label{subsubsec:experiments-qualitative-compactness}

\begin{figure}[t]
\centering
\input{plots/qualitative-compactness}
\caption{\CO on the \BSDS and \NYU datasets. Considering Figures \ref{fig:experiments-qualitative-bsds500-sbd-fash} and \ref{fig:experiments-qualitative-nyuv2-sunrgbd}, \CO appropriately reflects compactness. However, it does not take into account other aspects of visual quality such as regularity and smoothness. Therefore, we find that \CO is of limited use in a quantitative assessment of visual quality. \textbf{Best viewed in color.}}
\label{fig:experiments-qualitative-compactness}
\vskip 12px
\input{legends/full-half}
\end{figure}

\CO measures compactness but does not reflect regularity or smoothness; therefore, \CO is not sufficient to objectively judge visual quality. We consider Figure \ref{fig:experiments-qualitative-compactness}, showing \CO on the \BSDS and \NYU datasets, and we observe that \CO correctly measures compactness. For example, \WPr, \TPr and \CISr, exhibiting high \CO, also present very compact superpixels in Figures \ref{fig:experiments-qualitative-bsds500-sbd-fash} and \ref{fig:experiments-qualitative-nyuv2-sunrgbd}. However, these superpixels are not necessarily visually appealing, \ie they may lack regularity and/or smoothness. This can be seen, for example, for \TPS, which exhibits high compactness but poor regularity, and for \PB, which shows high compactness but inferior smoothness. Overall, we find that \CO should not be considered in isolation from a qualitative evaluation.

\subsubsection{Depth}
\label{subsubsec:experiments-qualitative-depth}

Depth information helps superpixels resemble the 3D-structure within the image.
Considering Figure \ref{fig:experiments-qualitative-nyuv2-sunrgbd}, in particular both images for \DASP and \VCCS, we deduce that depth information may be beneficial for superpixels to resemble the 3D-structure of a scene. For example, when considering planar surfaces (\eg the table) in both images from Figure \ref{fig:experiments-qualitative-nyuv2-sunrgbd} for \DASP, we clearly see that the superpixels align with the surface in a way that is perceived as 3-dimensional. For \VCCS, this effect is less observable, which may be due to the compactness parameter.
{ "alphanum_fraction": 0.6906861589, "avg_line_length": 46.2486368593, "ext": "tex", "hexsha": "0b3906c90fadd0122d6af1f99d740cb144418a7e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "davidstutz/cviu2018-superpixels", "max_forks_repo_path": "paper/experiments/qualitative.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "davidstutz/cviu2018-superpixels", "max_issues_repo_path": "paper/experiments/qualitative.tex", "max_line_length": 126, "max_stars_count": null, "max_stars_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "davidstutz/cviu2018-superpixels", "max_stars_repo_path": "paper/experiments/qualitative.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13790, "size": 42410 }
% DIVAS image processing workshop
% OpenCV images slides
%
% Mark M. Meysenburg
% 5/14/2017

\documentclass{beamer}

\mode<presentation> {
% set theme and color
\usetheme{CambridgeUS}
\usecolortheme{crane}
}

\usepackage{graphicx} % Allows including images
\usepackage{listings} % Allow sourcecode
\usepackage{hyperref} % Allow clickable hyperlinks

%----------------------------------------------------------------------------------------
% TITLE PAGE AND FRONT MATTER
%----------------------------------------------------------------------------------------

\title[Image Basics]{IP Workshop: OpenCV Images} % The short title appears at the bottom of every slide, the full title is only on the title page

\author{Mark M. Meysenburg} % Your name
\institute[Doane DIVAS] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space
{
Doane University \\ % Your institution for the title page
\medskip
\textit{[email protected]} % Your email address
}
\date{\today} % Date, can be changed to a custom date

\begin{document}

\lstset{basicstyle=\footnotesize,language=Python}

\begin{frame}
\titlepage % Print the title page as the first slide
\end{frame}

\begin{frame}
\frametitle{Overview}
\tableofcontents
\end{frame}

%----------------------------------------------------------------------------------------
% PRESENTATION SLIDES
%----------------------------------------------------------------------------------------

\section{Images / Arrays}

\begin{frame}
\frametitle{OpenCV images are NumPy arrays}
\begin{itemize}
\item OpenCV images are stored in a manner consistent with our raster graphics model
\begin{itemize}
\item An image is a rectangular array of pixels
\item Each pixel has three color channel values (RGB)
\end{itemize}
\item OpenCV images = 3D NumPy arrays
\item Remember the coordinate system, and think BGR vs. RGB
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{OpenCV images are NumPy arrays}
\begin{center}
\includegraphics[width=0.75\textwidth]{../../fig/02-chair-orig.jpg}
\end{center}
\end{frame}

\begin{frame}
\frametitle{OpenCV images are NumPy arrays}
\begin{center}
\includegraphics[width=0.75\textwidth]{../../fig/02-chair-layers.png}
\end{center}
\end{frame}

\section{Image I/O}

\begin{frame}
\frametitle{Reading, displaying, saving}
\begin{itemize}
\item OpenCV has methods for reading, displaying, and saving images
\item All popular formats supported
\item In Python programs, we gain access to the OpenCV library via the \lstinline!import cv2! statement
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Reading, displaying, saving}
See the {\tt Workshops/image-processing/02-opencv-images/Open.py} program for examples of
reading, displaying, and saving images. Key methods:
\begin{itemize}
\item \lstinline!cv2.imread()!
\item \lstinline!cv2.namedWindow()!
\item \lstinline!cv2.imshow()!
\item \lstinline!cv2.waitKey()!
\item \lstinline!cv2.imwrite()!
\end{itemize}
\end{frame}
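\begin{frame}[fragile]
\frametitle{Reading, displaying, saving: minimal sketch}
A minimal illustrative sketch of these calls (not taken from {\tt Open.py};
the file names below are placeholders):
\begin{lstlisting}
import cv2

# read an image from disk into a 3D NumPy array (BGR channel order)
img = cv2.imread("chair.jpg")

# show the image in a resizable window until a key is pressed
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
cv2.imshow("image", img)
cv2.waitKey(0)

# write the image back to disk; the extension selects the format
cv2.imwrite("chair-copy.png", img)
\end{lstlisting}
\end{frame}

\end{document}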
{ "alphanum_fraction": 0.6518350114, "avg_line_length": 22.3115942029, "ext": "tex", "hexsha": "52057911c3e54534ac994b4c52f83dd40942a153", "lang": "TeX", "max_forks_count": 71, "max_forks_repo_forks_event_max_datetime": "2022-03-22T09:30:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-29T00:11:28.000Z", "max_forks_repo_head_hexsha": "595e702e337729844625cd6d5d8252fcc9b63a6a", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "rahulisaac/image-processing", "max_forks_repo_path": "files/02-opencv-images/02-opencv-images.tex", "max_issues_count": 166, "max_issues_repo_head_hexsha": "595e702e337729844625cd6d5d8252fcc9b63a6a", "max_issues_repo_issues_event_max_datetime": "2022-03-30T09:15:25.000Z", "max_issues_repo_issues_event_min_datetime": "2019-05-28T21:09:42.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "rahulisaac/image-processing", "max_issues_repo_path": "files/02-opencv-images/02-opencv-images.tex", "max_line_length": 145, "max_stars_count": 49, "max_stars_repo_head_hexsha": "595e702e337729844625cd6d5d8252fcc9b63a6a", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "rahulisaac/image-processing", "max_stars_repo_path": "files/02-opencv-images/02-opencv-images.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T04:34:03.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-27T07:01:04.000Z", "num_tokens": 826, "size": 3079 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % GEANT manual in LaTeX form % % % % Version 1.00 % % % % Last Mod. 9 June 1993 19:30 MG % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \Authors{S.Ravndal} \Origin{GEANT} \Version{Geant 3.16}\Routid{ZZZZ010} \Submitted{01.10.84} \Revised{08.11.93} \Makehead{List of COMMON Blocks} \section{Introduction} Here, the main features of the common blocks used in {\tt GEANT} are summarised, with special mention of the variables initialised in \Rind{GINIT} and of the possibility of overriding them through data records {\tt [BASE040]} or interactive commands {\tt [XINT]}. In most of the cases there is a correspondence between a given data structure and a given common block where the current contents of the banks are stored. The labelled common blocks are accessible through Patchy/CMZ sequences identified by the name of the {\tt COMMON}. They are defined in the Patch \Rind {GCDES}. {\bf Note:} Unless otherwise specified, the long range variables are initialised in \Rind{GINIT}. When non-zero, default values are quoted between brackets. If the value may be modified the keyword for the data record and for the interactive command is also given in bold characters between brackets. \subsection{Dynamic memory} The {\tt GEANT} data structures are stored in the common \FCind{/GCBANK/} accessible through the following Patchy sequence: The \FCind{/GCLINK/} variables are pointers to the {\tt GEANT} data structures in the \FCind{/GCBANK/} common. They belong to a permanent area declared in \Rind{GZINIT}. \FComm{GCBANK}{Dynamic core for the GEANT data structures} \begin{verbatim} PARAMETER (KWBANK=69000,KWWORK=5200) COMMON/GCBANK/NZEBRA,GVERSN,ZVERSN,IXSTOR,IXDIV,IXCONS,FENDQ(16) + ,LMAIN,LR1,WS(KWBANK) DIMENSION IQ(2),Q(2),LQ(8000),IWS(2) EQUIVALENCE (Q(1),IQ(1),LQ(9)),(LQ(1),LMAIN),(IWS(1),WS(1)) EQUIVALENCE (JCG,JGSTAT) COMMON/GCLINK/JDIGI ,JDRAW ,JHEAD ,JHITS ,JKINE ,JMATE ,JPART + ,JROTM ,JRUNG ,JSET ,JSTAK ,JGSTAT,JTMED ,JTRACK,JVERTX + ,JVOLUM,JXYZ ,JGPAR ,JGPAR2,JSKLT C \end{verbatim} \subsection{Other user accessed common blocks} \FComm{GCCUTS}{Tracking thresholds} \begin{verbatim} COMMON/GCCUTS/CUTGAM,CUTELE,CUTNEU,CUTHAD,CUTMUO,BCUTE,BCUTM + ,DCUTE ,DCUTM ,PPCUTM,TOFMAX,GCUTS(5) C \end{verbatim} This common contains the threshold for various processes and particles. The energy values are the kinetic energy in GeV: \begin{DLtt}{MMMMMMMM} \item[CUTGAM] threshold for gamma transport ({\tt 0.001, CUTS}); \item[CUTELE] threshold for electron and positron transport ({\tt 0.001, CUTS}); \item[CUTNEU] threshold for neutral hadron transport ({\tt 0.01, CUTS}); \item[CUTHAD] threshold for charged hadron and ion transport ({\tt 0.01, CUTS}); \item[CUTMUO] threshold for muon transport ({\tt 0.01, CUTS}); \item[BCUTE] threshold for photons produced by electron bremsstrahlung ({\tt CUTGAM, CUTS}); \item[BCUTM] threshold for photons produced by muon bremsstrahlung ({\tt CUTGAM, CUTS}); \item[DCUTE] threshold for electrons produced by electron $\delta$-rays ({\tt CUTELE, CUTS}); \item[DCUTM] threshold for electrons produced by muon or hadron $\delta$-rays ({\tt CUTELE, CUTS}); \item[PPCUTM] threshold for \Pep\Pem direct pair production by muon ({\tt 0.002, CUTS}); \item[TOFMAX] threshold on time of flight counted from primary interaction time ({\tt $10^{10}$, CUTS}); \item[GCUTS] free for user applications ({\tt CUTS}). 
\end{DLtt} {\bf Note:} The cuts {\tt BCUTE, BCUTM} and {\tt DCUTE, DCUTM} are given the respective default values {\tt CUTGAM} and {\tt CUTELE}. Experienced users can make use of the facility offered (command {\tt CUTS}) to change {\tt BCUTE, DCUTE, BCUTM} and {\tt DCUTM}. \FComm{GCDRAW}{Variables used by the drawing package} \begin{verbatim} COMMON/GCDRAW/NUMNOD,MAXNOD,NUMND1,LEVVER,LEVHOR,MAXV,IPICK, + MLEVV,MLEVH,NWCUT,JNAM,JMOT,JXON,JBRO,JDUP,JSCA,JDVM,JPSM, + JNAM1,JMOT1,JXON1,JBRO1,JDUP1,JSCA1,JULEV,JVLEV, + LOOKTB(16), + GRMAT0(10),GTRAN0(3),IDRNUM,GSIN(41),GCOS(41),SINPSI,COSPSI, + GTHETA,GPHI,GPSI,GU0,GV0,GSCU,GSCV,NGVIEW, + ICUTFL,ICUT,CTHETA,CPHI,DCUT,NSURF,ISURF, + GZUA,GZVA,GZUB,GZVB,GZUC,GZVC,PLTRNX,PLTRNY, + LINATT,LINATP,ITXATT,ITHRZ,IPRJ,DPERS,ITR3D,IPKHIT,IOBJ,LINBUF, + MAXGU,MORGU,MAXGS,MORGS,MAXTU,MORTU,MAXTS,MORTS, + IGU,IGS,ITU,ITS,NKVIEW,IDVIEW, + NOPEN,IGMR,IPIONS,ITRKOP,IHIDEN, + ZZFU,ZZFV,MYISEL, + DDUMMY(15) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NUMNOD] number of nodes in non-optimized tree; \item[MAXNOD] max. number of nodes of non-optimized tree ({\tt MIN(NLEFT, 16,000)}); \item[NUMND1] number of nodes in optimized tree; \item[LEVVER] vertical level in the tree currently scanned by tree routines; \item[LEVHOR] horizontal node in the tree currently scanned by tree routines; \item[MAXV] max vertical levels in the tree to be scanned by tree routines; \item[IPICK] node selected by \Rind{GDTREE}; \item[MLEVV] number of vertical levels in the last tree scanned; \item[MLEVH] number of horizontal nodes in the last tree scanned; \item[NWCUT] max. workspace allocated by cut routines, ({\tt 5000}); \item[JNAM-JVLEV] pointers used by the tree routines; \item[LOOKTB] colour look-up table, ({\tt LOOKTB(I)=I,I=1,16}); \item[GRMAT0] rotation matrix saved by \Rind{GDRVOL}, ({\tt unitary matrix}); \item[GTRAN0] translation vector saved by \Rind{GDRVOL}, ({\tt 0.,0.,0.}); \item[IDRNUM] flag for \Rind{GDRAW}, set to 1 when called by \Rind{GDRVOL} ({\bf 0}); \item[GSIN] sine table (at $9^{\circ}$ steps); \item[GCOS] cosine table (at $9^{\circ}$ steps); \item[SINPSI] {\tt SIN(GPSI*DEGRAD)}; \item[COSPSI] {\tt COS(GPSI DEGRAD)}; \item[GTHETA] $\theta$ angle of the parallel projection of 3-dimensional images on the screen (${\tt 45^{\circ}}$); \item[GPHI] $\phi$ angle of the parallel projection of 3-dimensional images on the screen (${\tt 135^{\circ}}$); \item[GPSI] $\psi$ angle of rotation of the image on the screen (${\tt 0^{\circ}}$); \item[GU0] U position (X in screen coordinates) of the origin of the drawing in screen units ({\tt 10.}); \item[GV0] V position (Y in screen coordinates) of the origin of the drawing in screen units ({\tt 10.}); \item[GSCU] scale factor for the U screen coordinate ({\tt 0.015}); \item[GSCV] scale factor for the V screen coordinate ({\tt 0.015}); \item[NGVIEW] flag informing \Rind{GDFR3D} and \Rind{GD3D3D} if the view point has changed ({\tt 0}); \item[ICUTFL] flag informing \Rind{GDRAW} if it was called by {\it cut} drawing routines; \item[ICUT] axis along which the cut is performed (1, 2 or 3, 0 if no cut); \item[CTHETA] $\theta$ angle of cut supplied to \Rind{GDRAWX} (used by \Rind{GDCUT}); \item[CPHI] $\phi$ angle of cut supplied to \Rind{GDRAWX} (used by \Rind{GDCUT}); \item[DCUT] coordinate value (along axis {\tt ICUT)} at which the cut is performed; \item[NSURF] number of surfaces stored in {\tt SURF} to be cut; \item[ISURF] pointer for array {\tt SURF}; \item[GZUA] zoom parameter (horizontal scale factor) ({\tt 1.}); \item[GZVA] zoom 
parameter (vertical scale factor) ({\tt 1.}); \item[GZUB] zoom parameter ({\tt 0.}); \item[GZVB] zoom parameter ({\tt 0.}); \item[GZUC] zoom parameter ({\tt 0.}); \item[GZVC] zoom parameter ({\tt 0.}); \item[PLTRNX] drawing X range in cm ({\tt 20.}); \item[PLTRNY] drawing Y range in cm ({\tt 20.}); \item[LINATT] current line attributes ({\tt colour=1, width=1, style=1, fill=1}); \item[LINATP] permanent line attributes ({\tt LINATT}); \item[ITXATT] current text attributes ({\tt colour = 1, width = 1}); \item[ITHRZ] string containing the status of {\tt THRZ} option of \Rind{GDOPT} ({\tt 'OFF '}); \item[IPRJ] string containing the status of {\tt PROJ} option of \Rind{GDOPT} ({\tt 'PARA'}); \item[DPERS] distance of the view point from the origin for perspective drawing ({\tt 1000.}); \item[ITR3D]track being scanned (used together with {\tt THRZ} option); \item[IPKHIT]flag for \Rind{GPHITS}, if $>0$ then print only hit number, ({\tt 0}); \item[IOBJ]type of the object being drawn (detector, track, hit, etc.) ({\tt 0}); \item[LINBUF]flag informing \Rind{GDRAWV} if line buffering is wanted or not ({\tt 0}); \item[MAXGU]current number of words of graphic unit banks; \item[MORGU]number of words to extend graphic unit banks; \item[MAXGS]current number of words of graphic segment banks; \item[MORGS]number of words to extend graphic segment banks; \item[MAXTU]current number of words of text unit banks; \item[MORTU]number of words to extend text unit banks; \item[MAXTS]current number of words of text segment banks; \item[MORTS]number of words to extend text segment banks; \item[IGU]pointer to current graphic unit bank; \item[IGS]pointer to current graphic segment bank; \item[ITU]pointer to current text unit bank; \item[ITS]pointer to current text segment bank; \item[NKVIEW]number of view data banks ({\tt 0}); \item[IGVIEW]current view bank number or 0 if none active ({\tt 0}); \item[NOPEN]unused ({\tt 0}); \item[IGMR]flag informing if {\tt APOLLO-GMR} is being used ({\tt 0}); \item[IPIONS]unused ({\tt 0}); \item[ITRKOP]string containing the status of {\tt TRAK} option of \Rind{GDOPT} ({\tt 'LINE'}); \item[ZZFU] \item[ZZFV] \item[MYISEL] \item[DDUMMY]array of dummy words; \end{DLtt} \FComm{GCFLAG}{Flags and variables to control the run} \begin{verbatim} COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IEOTRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) COMMON/GCFLAX/BATCH, NOLOG LOGICAL BATCH, NOLOG C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IDEBUG]flag set internally to 1 to activate debug output if {\tt IDEMIN} $\leq$ {\tt IEVENT} $\leq$ {\tt IDEMAX} and {\tt IEVENT} is a multiple of {\tt ITEST}; \item[IDEMIN] first event to debug ({\tt DEBU}); \item[IDEMAX] last event to debug ({\tt DEBU}); \item[ITEST] number of events between two activations of the debug printing; \item[IDRUN]current user run number ({\tt 1, RUNG}); \item[IDEVT]current user event number ({\tt 1, RUNG}); \item[IEORUN]flag to terminate run if non-zero; \item[IEOTRI]flag to abort current event if non-zero; \item[IEVENT]current event sequence number ({\tt 1}); \item[ISWIT]user flags, the first three are used by \Rind{GDEBUG} to select the debug output ({\tt 0, SWIT}); \item[IFINIT]internal initialisation flags; \item[NEVENT]number of events to be processed ({\tt 10000000, TRIG}); \item[NRNDM]initial seeds for the random number generator. If {\tt NRNDM(2)=0} the sequence number {\tt NRNDM(1)} is taken from a predefined set of 215 independent sequences. 
Otherwise the random number generator is initialised with the two seeds {\tt NRNDM(1), NRNDM(2)} ({\tt 9876, 54321}); \item[BATCH] true if the job is running in batch; set by the {\tt GXINT} interactive program; \item[NOLOG] true if no login kumac file is requested; set by the {\tt GXINT} interactive program; \end{DLtt} \FComm{GCGOBJ}{CG package variables} \begin{verbatim} PARAMETER (NTRCG=1) PARAMETER (NWB=207,NWREV=100,NWS=1500) PARAMETER (C2TOC1=7.7, C3TOC1=2.,TVLIM=1296.) COMMON /GCGOBJ/IST,IFCG,ILCG,NTCUR,NFILT,NTNEX,KCGST + ,NCGVOL,IVFUN,IVCLOS,IFACST,NCLAS1,NCLAS2,NCLAS3 COMMON /CGBLIM/IHOLE,CGXMIN,CGXMAX,CGYMIN,CGYMAX,CGZMIN,CGZMAX C C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NTRCG] \item[NWB] \item[NWREV] \item[NWS] \item[C2TOC1] \item[C3TOC1] \item[TVLIM] \item[IST] \item[IFCG] \item[ILCG] \item[NTCUR] \item[NFILT] \item[NTNEX] \item[KCGST] \item[NCGVOL] \item[IVFUN] \item[IVCLOS] \item[IFACST] \item[NCLAS1] \item[NCLAS2] \item[NCLAS3] \item[IHOLE] \item[CGXMIN] \item[CGXMAX] \item[CGYMIN] \item[CGYMAX] \item[CGZMIN] \item[CGZMAX] \end{DLtt} \FComm{GCHILN}{Temporary link area for the CG package} \begin{verbatim} COMMON/GCHILN/LARECG(2), JCGOBJ, JCGCOL, JCOUNT, JCLIPS, + IMPOIN, IMCOUN, JSIX, JSIY, JSIZ, + JPXC, JPYC, JPZC, ICLIP1, ICLIP2 * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[LARECG] \item[JCGOBJ] \item[JCGCOL] \item[JCOUNT] \item[JCLIPS] \item[IMPOIN] \item[IMCOUN] \item[JSIX] \item[JSIY] \item[JSIZ] \item[JPXC] \item[JPYC] \item[JPZC] \item[ICLIP1] \item[ICLIP2] \end{DLtt} \FComm{GCJLOC}{JMATE substructure pointers for current material} This common block contains the pointers to various {\tt ZEBRA} data structures which refer to the current material during tracking. \begin{verbatim} COMMON/GCJLOC/NJLOC(2),JTM,JMA,JLOSS,JPROB,JMIXT,JPHOT,JANNI + ,JCOMP,JBREM,JPAIR,JDRAY,JPFIS,JMUNU,JRAYL + ,JMULOF,JCOEF,JRANG C COMMON/GCJLCK/NJLCK(2),JTCKOV,JABSCO,JEFFIC,JINDEX,JCURIN + ,JPOLAR,JTSTRA,JTSTCO,JTSTEN,JTASHO C EQUIVALENCE (JLASTV,JTSTEN) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NJLOC] {\tt ZEBRA} link area control variables; \item[JTM] tracking medium; \item[JMA] material; \item[JLOSS] energy loss table; \item[JPROB] bank containing some physic constants of the material; \item[JMIXT] mixture parameters; \item[JPHOT] photoelectric effect cross-section; \item[JANNI] positron annihilation cross-section; \item[JCOMP] Compton effect cross-section; \item[JBREM] bremsstrahlung cross-section; \item[JPAIR] photon pair production and muon direct pair production cross-section; \item[JDRAY] $\delta$-ray production cross-section; \item[JPFIS] photo-fission cross-section; \item[JMUNU] muon-nucleus interaction cross-section; \item[JRAYL] Rayleigh effect cross-section; \item[JMULOF] {\tt STMIN}, see {\tt [PHYS010]}; \item[JCOEF] bank containing the coefficients of the parabolic range-energy fit; \item[JRANG] range; \item[NJLCK] {\tt ZEBRA} link area control variables; \item[JTCKOV] \v{C}erenkov photons energy binning; \item[JABSCO] absorption coefficient; \item[JEFFIC] quantum efficiency; \item[JINDEX] refraction index; \item[JCURIN] \v{C}erenkov angle integral; \item[JPOLAR] polarisation information; \item[JTSTRA] top level bank for PAI energy loss fluctuations model; \item[JTSTCO] coefficients for PAI energy loss fluctuations model; \item[JTSTEN] energy binning for PAI energy loss fluctuations model; \item[JTASHO] coefficients for ASHO energy loss fluctuations model; \end{DLtt} For more information see {\tt [CONS199]}. 
\FComm{GCJUMP}{Pointers for the jump package}
Variable {\tt JU$\dots$} contains the address of the routine {\tt GU$\dots$}.
\begin{verbatim}
      PARAMETER (MAXJMP=30)
      COMMON/GCJUMP/JUDCAY, JUDIGI, JUDTIM, JUFLD , JUHADR, JUIGET,
     + JUINME, JUINTI, JUKINE, JUNEAR, JUOUT , JUPHAD,
     + JUSKIP, JUSTEP, JUSWIM, JUTRAK, JUTREV, JUVIEW,
     + JUPARA
      DIMENSION JMPADR(MAXJMP)
      EQUIVALENCE (JMPADR(1), JUDCAY)
*
\end{verbatim}
\FComm{GCKINE}{Kinematics of current track}
\begin{verbatim}
      COMMON/GCKINE/IKINE,PKINE(10),ITRA,ISTAK,IVERT,IPART,ITRTYP
     + ,NAPART(5),AMASS,CHARGE,TLIFE,VERT(3),PVERT(4),IPAOLD
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[IKINE] user integer word ({\tt 0, KINE});
\item[PKINE] user array of real ({\tt 0, KINE});
\item[ITRA] track number;
\item[ISTAK] stack track number;
\item[IVERT] vertex number;
\item[IPART] particle number;
\item[ITRTYP] particle tracking type;
\item[NAPART] name of current particle (ASCII codes stored in an integer array, 4 characters per word);
\item[AMASS] mass of current particle in GeV c$^{-2}$;
\item[CHARGE] charge of current particle in electron charge unit;
\item[TLIFE] average life time of current particle in seconds;
\item[VERT] coordinates of origin vertex for current track;
\item[PVERT] track kinematics at origin vertex ({\tt PVERT(4)} not used);
\item[IPAOLD] particle number of the previous track.
\end{DLtt}
\FComm{GCKMAX}{Size of the \FCind{/GCKING/} stack}
\begin{verbatim}
      INTEGER MXGKIN
      PARAMETER (MXGKIN=100)
\end{verbatim}
\FComm{GCMUTR}{Auxiliary variables for the CG package}
\begin{verbatim}
*
      PARAMETER (MULTRA=50)
      CHARACTER*4 GNASH, GNNVV, GNVNV
      COMMON/GCMUTR/NCVOLS,KSHIFT,NSHIFT,ICUBE,NAIN,JJJ,
     + NIET,IOLDSU,IVOOLD,IWPOIN,IHPOIN,IVECVO(100),
     + PORGX,PORGY,PORGZ,POX(15),POY(15),POZ(15),GBOOM,
     + PORMIR(18),PORMAR(18),IPORNT,
     + ICGP,CLIPMI(6),CLIPMA(6),
     + ABCD(4),BMIN(6),BMAX(6),CGB(16000),CGB1(16000),
     + GXMIN(MULTRA),GXMAX(MULTRA),GYMIN(MULTRA),
     + GYMAX(MULTRA),GZMIN(MULTRA),GZMAX(MULTRA),
     + GXXXX(MULTRA),GYYYY(MULTRA),GZZZZ(MULTRA)
*
      COMMON/GCMUTC/ GNASH(MULTRA),GNNVV(MULTRA),GNVNV(MULTRA)
*
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[NCVOLS] \item[KSHIFT] \item[NSHIFT] \item[ICUBE] \item[NAIN] \item[JJJ] \item[NIET] \item[IOLDSU] \item[IVOOLD] \item[IWPOIN] \item[IHPOIN] \item[IVECVO] \item[PORGX] \item[PORGY] \item[PORGZ] \item[POX] \item[POY] \item[POZ] \item[GBOOM] \item[PORMIR] \item[PORMAR] \item[IPORNT] \item[ICGP] \item[CLIPMI] \item[CLIPMA] \item[ABCD] \item[BMIN] \item[BMAX] \item[CGB] \item[CGB1] \item[GXMIN] \item[GXMAX] \item[GYMIN] \item[GYMAX] \item[GZMIN] \item[GZMAX] \item[GXXXX] \item[GYYYY] \item[GZZZZ] \item[GNASH] \item[GNNVV] \item[GNVNV]
\end{DLtt}
\FComm{GCKING}{Kinematics of generated secondaries}
\begin{verbatim}
+SEQ, GCKMAX
      COMMON/GCKING/KCASE,NGKINE,GKIN(5,MXGKIN),
     + TOFD(MXGKIN),IFLGK(MXGKIN)
      INTEGER KCASE,NGKINE ,IFLGK,MXPHOT,NGPHOT
      REAL GKIN,TOFD,XPHOT
C
      PARAMETER (MXPHOT=800)
      COMMON/GCKIN2/NGPHOT,XPHOT(11,MXPHOT)
C
      COMMON/GCKIN3/GPOS(3,MXGKIN)
      REAL GPOS
C
\end{verbatim}
\begin{DLtt}{MMMMMMMMMMMMMM}
\item[KCASE] Mechanism which has generated the secondary particles;
\item[NGKINE]Number of generated secondaries;
\item[GKIN(1,I)]x component of momentum of I$^{th}$ particle;
\item[GKIN(2,I)]y component of momentum;
\item[GKIN(3,I)]z component of momentum;
\item[GKIN(4,I)]total energy;
\item[GKIN(5,I)]particle code (see {\tt [CONS300]});
\item[TOFD(I)]time offset with respect to current time of flight;
\item[IFLGK(I)]Flag controlling the handling of track by \Rind{GSKING}, \Rind{GSSTAK};
\begin{DLtt}{MMMM}
\item[$<$0]particle is discarded;
\item[~0]({\bf D}) particle is stored in the temporary stack {\tt JSTAK} for further tracking;
\item[~1] like {\tt 0} but particle is stored in {\tt JVERTX/JKINE} structure as well;
\item[$>$1] particle is attached to vertex {\tt IFLGK(I)}.
\end{DLtt}
\item[GPOS(1,I)] x position of I$^{th}$ particle;
\item[GPOS(2,I)] y position;
\item[GPOS(3,I)] z position;
\item[NGPHOT] number of \v{C}erenkov photons generated in the current step;
\item[XPHOT(1,I)] x position of the I$^{th}$ photon;
\item[XPHOT(2,I)] y position;
\item[XPHOT(3,I)] z position;
\item[XPHOT(4,I)] x component of momentum;
\item[XPHOT(5,I)] y component of momentum;
\item[XPHOT(6,I)] z component of momentum;
\item[XPHOT(7,I)] momentum of the photon;
\item[XPHOT(8,I)] x component of the polarisation vector;
\item[XPHOT(9,I)] y component of the polarisation vector;
\item[XPHOT(10,I)] z component of the polarisation vector;
\item[XPHOT(11,I)] time of flight in seconds of the photon.
\end{DLtt}
\FComm{GCLINK}{See \FCind{/GCBANK/} above}
\FComm{GCLIST}{Various system and user lists}
\begin{verbatim}
      COMMON/GCLIST/NHSTA,NGET ,NSAVE,NSETS,NPRIN,NGEOM,NVIEW,NPLOT
     + ,NSTAT,LHSTA(20),LGET (20),LSAVE(20),LSETS(20),LPRIN(20)
     + ,LGEOM(20),LVIEW(20),LPLOT(20),LSTAT(20)
C
\end{verbatim}
\begin{DLtt}{MMMMMMMMMMMMMMMMMM}
\item[NHSTA] number of histograms on data record {\tt HSTA};
\item[NGET] number of data structures on data record {\tt GET};
\item[NSAVE]number of data structures on data record {\tt SAVE};
\item[NSETS]number of items on data record {\tt SETS};
\item[NPRIN]number of items on data record {\tt PRIN};
\item[NGEOM]number of items on data record {\tt GEOM};
\item[NVIEW]number of items on data record {\tt VIEW};
\item[NPLOT]number of items on data record {\tt PLOT};
\item[NSTAT]number of items on data record {\tt STAT} (obsolete);
\item[LHSTA \ldots LSTAT] lists of items set via the input records ({\tt HSTA \ldots,STAT}).
\end{DLtt}
{\tt LSTAT(1)} is reserved by the system for volume statistics.
\FComm{GCMATE}{Parameters of current material}
\begin{verbatim}
      COMMON/GCMATE/NMAT,NAMATE(5),A,Z,DENS,RADL,ABSL
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[NMAT] current material number;
\item[NAMATE]name of current material (ASCII codes stored in an integer array, 4 characters per word);
\item[A]atomic weight of current material;
\item[Z]atomic number of current material;
\item[DENS]density of current material in g cm$^{-3}$;
\item[RADL]radiation length of current material;
\item[ABSL]absorption length of current material.
\end{DLtt}
\FComm{GCMULO}{Energy binning and multiple scattering}
Precomputed quantities for multiple scattering and energy binning for {\tt JMATE} banks. See also {\tt [CONS199]} for the energy binning and {\tt [PHYS325]} for a description of the variables {\tt OMCMOL} and {\tt CHCMOL}.
\begin{verbatim} COMMON/GCMULO/SINMUL(101),COSMUL(101),SQRMUL(101),OMCMOL,CHCMOL + ,EKMIN,EKMAX,NEKBIN,NEK1,EKINV,GEKA,GEKB,EKBIN(200),ELOW(200) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[SINMUL] not used any more; \item[COSMUL] not used any more; \item[SQRMUL] not used any more; \item[OMCMOL] constant $\Omega_0$ of the Moli\'ere theory; \item[CHCMOL] $\chi_{cc}$ constant of the Moli\'ere theory; \item[EKMIN] lower edge of the energy range of the tabulated cross sections ({\tt $10^{-5}$, ERAN}); \item[EKMAX] upper edge of the energy range of the tabulated cross sections ({\tt $10^{4}$, ERAN}); \item[NEKBIN] number of energy bins to be used ({\tt 90, ERAN}); \item[NEK1] {\tt NEKBIN+1}; \item[EKINV] $1/ \left ( \log_{10}(\mbox{\tt EKMAX})- \log_{10}(\mbox{\tt EKMIN}) \right )$; \item[GEKA] {\tt NEKBIN*EKINV}; \item[GEKB] {\tt 1-GEKA*EKBIN(1)}; \item[EKBIN] $\log \left ( \mbox{\tt ELOW} \right ) $; \item[ELOW] low edges of the energy bins. \end{DLtt} \FComm{GCMZFO}{I/O descriptors of GEANT banks} \begin{verbatim} COMMON/GCMZFO/IOMATE,IOPART,IOTMED,IOSEJD,IOSJDD,IOSJDH,IOSTAK + ,IOMZFO(13) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IOMATE] I/O descriptor for the {\tt JMATE} bank; \item[IOPART] I/O descriptor for the {\tt JPART} bank; \item[IOTMED] I/O descriptor for the {\tt JTMED} bank; \item[IOSEJD] I/O descriptor for the detector banks; \item[IOSJDD] I/O descriptor for the second dependent bank of the detector banks; \item[IOSJDH] I/O descriptor for the first dependent bank of the detector banks; \item[IOSTAK] I/O descriptor for the {\tt JSTAK} bank; \item[IOMZFO] free I/O descriptors. \end{DLtt} \FComm{GCNUM}{Current number for various items} \begin{verbatim} COMMON/GCNUM/NMATE ,NVOLUM,NROTM,NTMED,NTMULT,NTRACK,NPART + ,NSTMAX,NVERTX,NHEAD,NBIT COMMON /GCNUMX/ NALIVE,NTMSTO C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NMATE] number of material banks; \item[NVOLUM] number of volume banks; \item[NROTM] number of rotation matrix banks; \item[NTMED] number of tracking media banks; \item[NTMULT] total number of tracks processed in current event (including secondaries); \item[NTRACK] number of tracks in the {\tt JKINE} bank for current event; \item[NPART] maximum particle code; \item[NSTMAX] maximum number of tracks ({\it high-water mark}) in stack {\tt JSTAK} for current event; \item[NVERTX] number of vertices in {\tt JVERTX} bank for current event; \item[NHEAD] number of data words in the {\tt JHEAD} bank ({\tt 10}); \item[NBIT] number of bits per word (initialised in \Rind{GINIT} via {\tt ZEBRA}); \item[NALIVE]number of particles to be tracked in the parallel tracking stack (this mode of tracking is disabled in the current {\tt GEANT} version); \item[NTMSTO]total number of tracks tracked in the current event so far. Same as {\tt NTMULT} in \FCind{/GCTRAK/}; \end{DLtt} \FComm{GCOMIS}{Variables for the {\tt COMIS} package} Variable {\tt JU\ldots} contains the {\tt COMIS} address of routine {\tt GU\ldots}. \begin{verbatim} COMMON/GCOMIS/ICOMIS,JUINIT,JUGEOM,JUKINE,JUSTEP,JUOUT,JULAST * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[ICOMIS] flag to avoid a double initialisation of {\tt COMIS}; \end{DLtt} \FComm{GCONST}{Basic constants} See next section for the value of these parameters. 
\begin{verbatim} COMMON/GCONST/PI,TWOPI,PIBY2,DEGRAD,RADDEG,CLIGHT,BIG,EMASS COMMON/GCONSX/EMMU,PMASS,AVO C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[PI] $\pi$; \item[TWOPI] $2\pi$; \item[PIBY2] $\pi/2$; \item[DEGRAD] degrees to radiants conversion factor ($\pi/180$); \item[RADDEG] radiants to degrees conversion factor ($180/\pi$); \item[CLIGHT] light velocity in cm s$^{-1}$; \item[BIG] arbitrary large number; \item[EMASS] electron mass in GeV c$^{-2}$; \item[EMMU] muon mass in GeV c$^{-2}$; \item[PMASS] proton mass in GeV c$^{-2}$; \item[AVO] Avogadro's number $\times 10^{-24}$. \end{DLtt} \FComm{GCONSP}{Basic constants} These parameters are in {\tt SINGLE PRECISION} on 64 bits machines. \begin{verbatim} DOUBLE PRECISION PI,TWOPI,PIBY2,DEGRAD,RADDEG,CLIGHT,BIG,EMASS DOUBLE PRECISION EMMU,PMASS,AVO * PARAMETER (PI=3.14159265358979324D0) PARAMETER (TWOPI=6.28318530717958648D0) PARAMETER (PIBY2=1.57079632679489662D0) PARAMETER (DEGRAD=0.0174532925199432958D0) PARAMETER (RADDEG=57.2957795130823209D0) PARAMETER (CLIGHT=29979245800.D0) PARAMETER (BIG=10000000000.D0) PARAMETER (EMASS=0.0005109990615D0) PARAMETER (EMMU=0.105658387D0) PARAMETER (PMASS=0.9382723128D0) PARAMETER (AVO=0.60221367D0) * \end{verbatim} \FComm{GCOPTI}{Control of geometry optimisation} \begin{verbatim} COMMON/GCOPTI/ IOPTIM C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IOPTIM]Optimisation flag \begin{DLtt}{MMM} \item[-1] no optimisation at all; \Rind{GSORD} calls disabled; \item[~0] no optimisation; only user calls to \Rind{GSORD} kept; \item[~1] all non-\Rind{GSORD}ered volumes are ordered along the best axis; \item[~2] all volumes are ordered along the best axis. \end{DLtt} \end{DLtt} \FComm{GCPARA}{Control of parametrized energy deposition} \begin{verbatim} INTEGER BITPHI, BITTET, BITPOT LOGICAL SYMPHI, SYMTEU, SYMTED PARAMETER (LSTACK = 5000) C BITPOT is for Phi.Or.Tet C C --------------------------------------------------------- COMMON /GCPARA/ + EPSIX0 (LSTACK) , + IDRPHI (LSTACK ) , IDRTET (LSTACK ), + IDROUT (LSTACK ) , JPLOST (LSTACK ), + IPHTMP (LSTACK ) , + BITPHI (LSTACK ) , BITTET (LSTACK ), + BITPOT (LSTACK ) , JJLOST, JJFILL, + JENTRY, JEMPTY, + EPSMAX, + JJTEMP, JJWORK , JJSTK1, + J1TEMP, J1STK1, + IFOUNP, IFOUNT , IFNPOT, + SYMPHI, + SYMTEU, SYMTED C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[LSTACK] dimension of the energy ray stack; \item[JJLOST] number of energy rays lost in each tracking step; \item[EPSMAX] maximum number of radiation lengths that an energy ray can travel; \item[JJTEMP] temporary pointer; \item[JJWORK] actual size of the energy ray stack; \item[JJSTK1] \item[J1TEMP] \item[J1STK1] \item[IFOUNP] Number of energy rays that change cell in $\phi$ direction; \item[IFOUNT] Number of energy rays that change cell in $\theta$ direction; \item[IFNPOT] Number of energy rays that change cell either in $\phi$ or in $\theta$; \item[SYMPHI] {\tt .TRUE.} if {\tt PHIMAX-PHIMIN = }$360^{\circ}$; \item[SYMTEU] {\tt .TRUE.} if {\tt TETMIN = }$0^{\circ}$; \item[SYMTED] {\tt .TRUE.} if {\tt TETMAX = }$180^{\circ}$. 
\end{DLtt} \FComm{GCPARM}{Control of parameterisation} \begin{verbatim} COMMON/GCPARM/IPARAM,PCUTGA,PCUTEL,PCUTNE,PCUTHA,PCUTMU + ,NSPARA,MPSTAK,NPGENE REAL PACUTS(5) EQUIVALENCE (PACUTS(1),PCUTGA) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IPARAM]Parameterisation flag ({\tt 0, PCUT}); \begin{DLtt}{MMMMMMMM} \item[0 =]parameterisation is not in effect, normal tracking will be used; \item[1 =]parameterisation is in effect; \end{DLtt} \item[PCUTGA]parameterisation threshold for photons ({\tt 0., PCUT}) \item[PCUTEL]parameterisation threshold for electrons and positrons ({\tt 0., PCUT}); \item[PCUTNE]parameterisation threshold for neutral hadrons ({\tt 0., PCUT}); \item[PCUTHA]parameterisation threshold for charged hadrons ({\tt 0., PCUT}); \item[PCUTMU]parameterisation threshold for muons ({\tt 0., PCUT}); \item[NSPARA] not used; \item[MPSTAK] optimum size of the Energy ray stack ({\tt 2000}); \item[NPGENE] number of Energy rays generated per primary particle ({\tt 20}); \end{DLtt} \FComm{GCPHYS}{Control of physics processes} \begin{verbatim} COMMON/GCPHYS/IPAIR,SPAIR,SLPAIR,ZINTPA,STEPPA + ,ICOMP,SCOMP,SLCOMP,ZINTCO,STEPCO + ,IPHOT,SPHOT,SLPHOT,ZINTPH,STEPPH + ,IPFIS,SPFIS,SLPFIS,ZINTPF,STEPPF + ,IDRAY,SDRAY,SLDRAY,ZINTDR,STEPDR + ,IANNI,SANNI,SLANNI,ZINTAN,STEPAN + ,IBREM,SBREM,SLBREM,ZINTBR,STEPBR + ,IHADR,SHADR,SLHADR,ZINTHA,STEPHA + ,IMUNU,SMUNU,SLMUNU,ZINTMU,STEPMU + ,IDCAY,SDCAY,SLIFE ,SUMLIF,DPHYS1 + ,ILOSS,SLOSS,SOLOSS,STLOSS,DPHYS2 + ,IMULS,SMULS,SOMULS,STMULS,DPHYS3 + ,IRAYL,SRAYL,SLRAYL,ZINTRA,STEPRA COMMON/GCPHLT/ILABS,SLABS,SLLABS,ZINTLA,STEPLA + ,ISYNC + ,ISTRA * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IPAIR] control variable for the \Pem/\Pep pair production process; \item[SPAIR] distance to the next pair production in the current material; \item[SLPAIR] total distance travelled by the $\gamma$ when pair production occurs; \item[ZINTPA] number of interaction lengths to the next pair production; \item[STEPPA] interaction length for pair production for the current material and energy; \item[ICOMP] control variable for the Compton scattering process; \item[SCOMP] distance to the next Compton scattering in the current material; \item[SLCOMP] total distance travelled by the $\gamma$ when Compton scattering occurs; \item[ZINTCO] number of interaction lengths to the next Compton scattering; \item[STEPCO] interaction length for Compton scattering for the current material and energy; \item[IPHOT] control variable for the photoelectric effect process; \item[SPHOT] distance to the next photoelectric effect in the current material; \item[SLPHOT] total distance travelled by the $\gamma$ when photoelectric effect occurs; \item[ZINTPH] number of interaction lengths to the next photoelectric effect; \item[STEPPH] interaction length for photoelectric effect for the current material and energy; \item[IPFIS] control variable for the $\gamma$-induced nuclear fission process; \item[SPFIS] distance to the next $\gamma$-induced nuclear fission in the current material; \item[SLPFIS] total distance travelled by the $\gamma$ when $\gamma$-induced nuclear fission occurs; \item[ZINTPF] number of interaction lengths to the next $\gamma$-induced nuclear fission; \item[STEPPF] interaction length for $\gamma$-induced nuclear fission for the current material and energy; \item[IDRAY] control variable for the $\delta$-ray production process; \item[SDRAY] distance to the next $\delta$-ray production in the current material; \item[SLDRAY] total distance travelled by the particle when $\delta$-ray 
production occurs;
\item[ZINTDR] number of interaction lengths to the next $\delta$-ray production;
\item[STEPDR] interaction length for $\delta$-ray production for the current material and energy;
\item[IANNI] control variable for the positron annihilation process;
\item[SANNI] distance to the next positron annihilation in the current material;
\item[SLANNI] total distance travelled by the positron when positron annihilation occurs;
\item[ZINTAN] number of interaction lengths to the next positron annihilation;
\item[STEPAN] interaction length for positron annihilation for the current material and energy;
\item[IBREM] control variable for the bremsstrahlung process;
\item[SBREM] distance to the next bremsstrahlung in the current material;
\item[SLBREM] total distance travelled by the particle when bremsstrahlung occurs;
\item[ZINTBR] number of interaction lengths to the next bremsstrahlung;
\item[STEPBR] interaction length for bremsstrahlung for the current material and energy;
\item[IHADR] control variable for the hadronic interaction process;
\item[SHADR] distance to the next hadronic interaction in the current material;
\item[SLHADR] total distance travelled by the particle when hadronic interaction occurs;
\item[ZINTHA] number of interaction lengths to the next hadronic interaction;
\item[STEPHA] interaction length for hadronic interaction for the current material and energy;
\item[IMUNU] control variable for the $\mu$ nuclear interaction process;
\item[SMUNU] distance to the next $\mu$ nuclear interaction in the current material;
\item[SLMUNU] total distance travelled by the $\mu$ when $\mu$ nuclear interaction occurs;
\item[ZINTMU] number of interaction lengths to the next $\mu$ nuclear interaction;
\item[STEPMU] interaction length for $\mu$ nuclear interaction for the current material and energy;
\item[IDCAY] control variable for the decay in flight process;
\item[SDCAY] distance to the next decay in flight in the current material;
\item[SLIFE] total distance travelled by the particle when decay in flight occurs;
\item[SUMLIF] time to the next interaction point in $ct$ units;
\item[DPHYS1] not used;
\item[ILOSS] control variable for the energy loss process;
\item[SLOSS] step limitation due to continuous processes: energy loss, bending in magnetic field, \v{C}erenkov photon generation and multiple scattering;
\item[SOLOSS] not used;
\item[STLOSS] not used; set equal to {\tt STEP} for backward compatibility;
\item[DPHYS2] not used;
\item[IMULS] control variable for the multiple scattering process;
\item[SMULS] maximum step allowed by the multiple scattering simulation;
\item[SOMULS] not used;
\item[STMULS] not used; set equal to {\tt STEP} for backward compatibility;
\item[DPHYS3] not used.
\item[ILABS] control variable for the \v{C}erenkov photon absorption process;
\item[SLABS] distance to the next \v{C}erenkov photon absorption process;
\item[SLLABS] not used;
\item[ZINTLA] number of interaction lengths to the next \v{C}erenkov photon absorption process;
\item[STEPLA] interaction length for \v{C}erenkov photon absorption process;
\item[ISYNC] control variable for synchrotron radiation production;
\item[ISTRA] control variable for energy loss fluctuation simulation;
\end{DLtt}
For more details on {\tt IDRAY} and {\tt ILOSS} see {\tt [BASE040]}. For all other variables see {\tt [PHYS010]}.
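Schematically, and only as an illustration of how these quantities fit together (the authoritative description is given in {\tt [PHYS010]}): for a discrete process such as pair production, the number of interaction lengths to the next interaction is sampled as $\mbox{\tt ZINTPA} = -\ln\eta$, with $\eta$ uniformly distributed in $(0,1)$, and the distance to the interaction point in the material currently traversed is then
\begin{displaymath}
\mbox{\tt SPAIR} = \mbox{\tt ZINTPA} \times \mbox{\tt STEPPA},
\end{displaymath}
where {\tt STEPPA} is the interaction length in that material; {\tt ZINTPA} is updated as the particle moves and crosses medium boundaries. Analogous relations hold for the other discrete processes listed above.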
\FComm{GCPOLY}{Internal flags for polygon and polycone shapes} \begin{verbatim} COMMON/GCPOLY/IZSEC,IPSEC C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IZSEC] Z section number; \item[IPSEC] $\phi$ sector number. \end{DLtt} \FComm{GCPUSH}{Initial and incremental size of some mother banks} \begin{verbatim} COMMON/GCPUSH/NCVERT,NCKINE,NCJXYZ,NPVERT,NPKINE,NPJXYZ C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NCVERT] initial size of bank {\tt JVERTX } ({\tt 5}); \item[NCKINE] initial size of bank {\tt JKINE} ({\tt 50}); \item[NCJXYZ] initial size of bank {\tt JXYZ} ({\tt 50}); \item[NPVERT] increment for size of bank {\tt JVERTX} ({\tt 5}); \item[NPKINE] increment for size of bank {\tt JKINE} ({\tt 10}); \item[NPJXYZ] increment for size of bank {\tt JXYZ} ({\tt 10}). \end{DLtt} \FComm{GCRZ}{Direct access files control variables} \begin{verbatim} COMMON/GCRZ1/NRECRZ,NRGET,NRSAVE,LRGET(20),LRSAVE(20) INTEGER NRECRZ,NRGET,NRSAVE,LRGET ,LRSAVE COMMON/GCRZ2/RZTAGS CHARACTER*8 RZTAGS(4) C \end{verbatim} \begin{DLtt}{MMMMMMMMMMMMM} \item[NRECRZ] record size (argument of {\tt RZMAKE}); \item[NRGET] number of data structures declared on data record {\tt RGET}; \item[NRSAVE] number of data structures declared on data record {\tt RSAV}; \item[LRGET,LRSAVE] corresponding user lists of items; \item[RZTAGS]key names (argument of {\tt RZMAKE}). \end{DLtt} \FComm{GCSCAL}{Scan geometry {\tt ZEBRA} pointers} \begin{verbatim} PARAMETER(MXSLNK=100) COMMON/GCSCAL/ ISLINK(MXSLNK) EQUIVALENCE (LSLAST,ISLINK(MXSLNK)) EQUIVALENCE (LSCAN ,ISLINK(1)),(LSTEMP,ISLINK(2)) EQUIVALENCE (LSPARA,ISLINK(3)),(LSERAY,ISLINK(4)) * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[LSCAN] \item[LSTEMP] \item[LSPARA] \item[LSERAY] \item[LSLAST] \end{DLtt} \FComm{GCSCAN}{Scan geometry control parameters} \begin{verbatim} PARAMETER (MSLIST=32,MAXMDT=3) COMMON/GCSCAN/SCANFL,NPHI,PHIMIN,PHIMAX,NTETA,TETMIN,TETMAX, + MODTET,IPHIMI,IPHIMA,IPHI1,IPHIL,NSLMAX, + NSLIST,ISLIST(MSLIST),VSCAN(3),FACTX0,FACTL, + FACTR,IPHI,ITETA,ISCUR,SX0,SABS,TETMID(MAXMDT), + TETMAD(MAXMDT) + ,SX0S,SX0T,SABSS,SABST,FACTSF + ,DLTPHI,DLTETA,DPHIM1,DTETM1 + ,FCX0M1,FCLLM1,FCRRM1 LOGICAL SCANFL COMMON/GCSCAC/SFIN,SFOUT CHARACTER*80 SFIN,SFOUT * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[MSLIST] dimension of {\tt ISLIST} array ({\tt 32}); \item[MAXMDT] number of $\theta$ division types ({\tt 3}); \item[SCANFL] SCAN flag ({\tt .FALSE., SCAN, STURN}); \begin{DLtt}{MMMMMMMM} \item[.TRUE.]creation of {\tt SCAN} geometry, geantinos will be tracked; \item[.FALSE.]normal tracking; \end{DLtt} \item[NPHI] number of $\phi$ divisions ({\tt 90, SCAN}, {\tt PHI}); \item[PHIMIN] minimum $\phi$ in degrees (${\tt 0^{\circ}}$, {\tt SCAN}, {\tt PHI}); \item[PHIMAX] maximum $\phi$ in degrees (${\tt 360^{\circ}}$, {\tt SCAN}, {\tt PHI}); \item[NTETA] number of $\theta$ divisions ({\tt 90}, {\tt SCAN}, {\tt TETA}); \item[TETMIN] minimum value of $\theta$ ({\tt 0}$^{\circ}$, {\tt SCAN}, {\tt TETA}); \item[TETMAX] maximum value of $\theta$ (180, {\tt SCAN}, {\tt $\theta$}); \item[MODTET] type of $\theta$ division (1, {\tt SCAN}, {\tt $\theta$}); \begin{DLtt}{MMM} \item[1] $\theta$ is expressed in terms of degrees; \item[2] $\theta$ is expressed in terms of pseudorapidity; \item[3] $\theta$ is expressed in terms of $\cos(\theta)$; \end{DLtt} \item[IPHIMI] not used; \item[IPHIMA] not used; \item[IPHI1] internal index ({\tt PHIMIN}); \item[IPHIL] internal index ({\tt PHIMAX}); \item[NSLMAX] not used; \item[NSLIST] number of volumes to be scanned ({\tt 1}, {\tt SCAL}); \item[ISLIST] list of 
volumes to be scanned ({\tt SCAL}, {\tt SLIST}); \item[VSCAN] scan vertex origin ({\tt SCAP}, {\tt VERTEX}); \item[FACTX0] scale factor for {\tt SX0} ({\tt 100.}, {\tt SCAP}, {\tt SFACTORS}); \item[FACTL] scale factor for {\tt SABS} ({\tt 10.}, {\tt SCAP}, {\tt SFACTORS}); \item[FACTR] scale factor for {\tt R} ({\tt 100.}, {\tt SCAP}, {\tt SFACTORS}); \item[IPHI] $\phi$ bin of the current cell; \item[ITETA] $\theta$ bin of the current cell; \item[ISCUR] pointer in {\tt LPHI} to first triplet of words for a given {\tt ITETA} cell; \item[SX0] sum of radiation lengths up to current {\tt R} boundary; \item[SABS] sum of absorption lengths up to current {\tt R} boundary; \item[TETMID] bound value for {\tt TETMIN} ({\tt 0., -10., -1.} if {\tt MODTET} is 1, 2 or 3 respectively); \item[TETMAD] bound value for {\tt TETMAX} ({\tt 180., 10., 1.} if {\tt MODTET} is 1, 2 or 3 respectively); \item[SX0S] sum of radiation lengths for the sensitive mediums in the current cell; \item[SX0T] sum of radiation lengths in the current cell; \item[SABSS] sum of absorption lengths for the sensitive mediums in the current cell; \item[SABST] sum of absorption lengths in the current cell; \item[FACTSF] scale factor for the sampling fractions ({\tt 1000.}); \item[DLTPHI] bin in $\phi$, {\tt (PHIMAX-PHIMIN)/NPHI}; \item[DLTETA] bin in $\theta$, {\tt (TETMAX-TETMIN)/NTETA}; \item[DPHIM1] {\tt DLTPHI}$^{-1}$; \item[DTETM1] {\tt DLTETA}$^{-1}$; \item[FCX0M1] {\tt FACTX0}$^{-1}$; \item[FCLLM1] {\tt FACTL}$^{-1}$; \item[FCRRM1] {\tt FACTR}$^{-1}$; \item[SFIN] not used; \item[SFOUT] not used. \end{DLtt} \FComm{GSECTI}{Hadronic partial cross-sections for {\tt GHEISHA}} \begin{verbatim} COMMON/GSECTI/ AIEL(20),AIIN(20),AIFI(20),AICA(20),ALAM,K0FLAG C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[AIEL]elastic cross-sections. {\tt AIEL(I)} is the elastic cross-section for the {\tt I}$^{th}$ element composing the current material; \item[AIIN] inelastic cross-sections; \item[AIFI] fission cross-sections; \item[AICA] nuclear capture cross-sections; \item[ALAM] total cross-section; \item[K0FLAG] obsolete. \end{DLtt} \FComm{GCSETS}{Identification of current sensitive detector} \begin{verbatim} COMMON/GCSETS/IHSET,IHDET,ISET,IDET,IDTYPE,NVNAME,NUMBV(20) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IHSET] set identifier, ASCII equivalent of 4 characters; \item[IHDET] detector identifier, ASCII equivalent of 4 characters; \item[ISET] position of set in bank {\tt JSET}; \item[IDET] position of detector in bank {\tt JS=LQ(JSET-ISET)}; \item[IDTYPE] user defined detector type; \item[NVNAME] number of elements in {\tt NUMBV}; \item[NUMBV] list of volume copy numbers to identify the detector. 
\end{DLtt}
\FComm{GCSHNO}{Symbolic codes for system shapes}
\begin{verbatim}
      PARAMETER ( NSBOX=1, NSTRD1=2, NSTRD2=3, NSTRAP=4, NSTUBE=5,
     + NSTUBS=6, NSCONE=7, NSCONS=8, NSSPHE=9, NSPARA=10,NSPGON=11,
     + NSPCON=12,NSELTU=13,NSHYPE=14,NSGTRA=28, NSCTUB=29 )
\end{verbatim}
\FComm{GCSPEE}{Auxiliary variables for the CG package}
\begin{verbatim}
      COMMON/GCSPEE/S1,S2,S3,SS1,SS2,SS3,LEP,IPORLI,ISUBLI,
     + SRAGMX,SRAGMN,RAINT1,RAINT2,RMIN1,RMIN2,
     + RMAX1,RMAX2,JPORJJ,ITSTCU,IOLDCU,ISCOP,
     + NTIM,NTFLAG,LPASS,JSC
*
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[S1] \item[S2] \item[S3] \item[SS1] \item[SS2] \item[SS3] \item[LEP] \item[IPORLI] \item[ISUBLI] \item[SRAGMX] \item[SRAGMN] \item[RAINT1] \item[RAINT2] \item[RMIN1] \item[RMIN2] \item[RMAX1] \item[RMAX2] \item[JPORJJ] \item[ITSTCU] \item[IOLDCU] \item[ISCOP] \item[NTIM] \item[NTFLAG] \item[LPASS] \item[JSC]
\end{DLtt}
\FComm{GCSTAK}{Control variables for parallel tracking}
\begin{verbatim}
      PARAMETER (NWSTAK=12,NWINT=11,NWREAL=12,NWTRAC=NWINT+NWREAL+5)
      COMMON /GCSTAK/ NJTMAX, NJTMIN, NTSTKP, NTSTKS, NDBOOK, NDPUSH,
     + NJFREE, NJGARB, NJINVO, LINSAV(15), LMXSAV(15)
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[NWSTAK] \item[NWINT] \item[NWREAL] \item[NWTRAC] \item[NJTMAX] \item[NJTMIN] \item[NTSTKP] \item[NTSTKS] \item[NDBOOK] \item[NDPUSH] \item[NJFREE] \item[NJGARB] \item[NJINVO] \item[LINSAV] \item[LMXSAV]
\end{DLtt}
\FComm{GCTIME}{Execution time control}
\begin{verbatim}
      COMMON/GCTIME/TIMINT,TIMEND,ITIME,IGDATE,IGTIME
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[TIMINT] time requested for the run phase, after initialisation ({\tt TIME}, not used);
\item[TIMEND] time requested for program termination phase ({\tt 1, TIME});
\item[ITIME] number of events between two tests of time left ({\tt 1, TIME});
\item[IGDATE]current date in integer format {\tt YYMMDD};
\item[IGTIME] current time in integer format {\tt HHMM};
\end{DLtt}
\FComm{GCTMED}{Array of current tracking medium parameters}
\begin{verbatim}
      COMMON/GCTMED/NUMED,NATMED(5),ISVOL,IFIELD,FIELDM,TMAXFD,STEMAX
     + ,DEEMAX,EPSIL,STMIN,CFIELD,PREC,IUPD,ISTPAR,NUMOLD
      COMMON/GCTLIT/THRIND,PMIN,DP,DNDL,JMIN,ITCKOV,IMCKOV,NPCKOV
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[NUMED] current tracking medium number;
\item[NATMED] name of current tracking medium (ASCII codes stored in an integer array, 4 characters per word);
\item[ISVOL]
\begin{DLtt}{MMM}
\item[-1] non-sensitive volume with sensitive volume tracking parameters;
\item[~0] non-sensitive volume;
\item[~1] sensitive volume;
\end{DLtt}
\item[IFIELD]
\begin{DLtt}{MMM}
\item[0] no field;
\item[1] user defined field (\Rind{GUFLD});
\item[2] user defined field (\Rind{GUFLD}) along z;
\item[3] uniform field ({\tt FIELDM}) along z;
\end{DLtt}
\item[FIELDM] maximum field;
\item[TMAXFD] maximum turning angle in one step due to the magnetic field;
\item[STEMAX] maximum step allowed;
\item[DEEMAX] maximum fraction of energy loss in one step due to continuous processes;
\item[EPSIL] boundary crossing accuracy;
\item[STMIN] minimum step size limitation due to: energy loss, multiple scattering, magnetic field bending or, if active, \v{C}erenkov photon production;
\item[CFIELD]constant for field step evaluation;
\item[PREC]effective step for boundary crossing ($0.1 \times$ {\tt EPSIL});
\item[IUPD]
\begin{DLtt}{MMM}
\item[0] new particle or new medium in current step;
\item[1] no change of medium or particle;
\end{DLtt}
\item[ISTPAR]
\begin{DLtt}{MMM}
\item[0] global tracking parameters are used;
\item[1] special tracking parameters are used for this medium;
\end{DLtt}
\item[NUMOLD] number of the previous tracking medium;
\item[THRIND] $\beta^{-1}$ of the current particle;
\item[PMIN] minimum momentum in GeV c$^{-1}$ for the photon transport;
\item[DP] momentum window to generate the photons;
\item[DNDL] number of photons generated per centimeter;
\item[JMIN] pointer to the photon threshold energy bin;
\item[ITCKOV] flag for the \v{C}erenkov photon generation:
\begin{DLtt}{MMM}
\item[0] deactivated;
\item[1] activated;
\end{DLtt}
\item[IMCKOV] flag for the \v{C}erenkov photon generation in current material, same meaning as above;
\item[NPCKOV] number of energy bins for the \v{C}erenkov photons;
\end{DLtt}
\FComm{GCTRAK}{Track parameters at the end of the current step}
\begin{verbatim}
      PARAMETER (MAXMEC=30)
      COMMON/GCTRAK/VECT(7),GETOT,GEKIN,VOUT(7),NMEC,LMEC(MAXMEC)
     + ,NAMEC(MAXMEC),NSTEP ,MAXNST,DESTEP,DESTEL,SAFETY,SLENG
     + ,STEP ,SNEXT ,SFIELD,TOFG ,GEKRAT,UPWGHT,IGNEXT,INWVOL
     + ,ISTOP ,IGAUTO,IEKBIN, ILOSL, IMULL,INGOTO,NLDOWN,NLEVIN
     + ,NLVSAV,ISTORY
      PARAMETER (MAXME1=30)
      COMMON/GCTPOL/POLAR(3), NAMEC1(MAXME1)
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[VECT] track parameters ($x,y,z,p_x/p,p_y/p,p_z/p,p$);
\item[GETOT]particle total energy;
\item[GEKIN]particle kinetic energy;
\item[VOUT]track parameters at the end of the step, used internally by {\tt GEANT};
\item[NMEC]number of mechanisms active for current step;
\item[LMEC]list of mechanism numbers for current step;
\item[NAMEC]list of mechanism names for current step (ASCII codes stored in an integer, 4 characters per word);
\item[NSTEP]number of steps for current track;
\item[MAXNST]maximum number of steps allowed (10000);
\item[DESTEP]total energy lost in current step;
\item[DESTEL]same as {\tt DESTEP}, kept for backward compatibility;
\item[SAFETY]underestimated distance to closest medium boundary;
\item[SLENG]current track length;
\item[STEP] size of current tracking step;
\item[SNEXT]distance to current medium boundary along the direction of the particle;
\item[SFIELD]obsolete;
\item[TOFG]current time of flight in seconds;
\item[GEKRAT]interpolation coefficient in the energy table {\tt ELOW};
\item[UPWGHT]user word for current particle;
\item[IGNEXT] indicates whether the particle is reaching a medium boundary in the current step:
\begin{DLtt}{MMM}
\item[0]{\tt SNEXT} has not been computed in current step;
\item[1]{\tt SNEXT} has been computed in current step: particle is reaching a boundary;
\end{DLtt}
\item[INWVOL]
\begin{DLtt}{MMM}
\item[0]track is inside a volume;
\item[1]track has entered a new volume or it is a new track;
\item[2]track is exiting current volume;
\item[3]track is exiting the setup;
\end{DLtt}
\item[ISTOP]
\begin{DLtt}{MMM}
\item[0]particle will continue to be tracked;
\item[1]particle has disappeared (decay, inelastic interaction \dots);
\item[2]particle has fallen below the cutoff energy or has interacted but no secondaries have been generated;
\end{DLtt}
\item[IGAUTO]
\begin{DLtt}{MMM}
\item[0]tracking parameters are given by the user;
\item[1]tracking parameters are calculated by {\tt GEANT};
\end{DLtt}
\item[IEKBIN]current kinetic energy bin in table {\tt ELOW};
\item[ILOSL]local energy loss flag (see \FCind{/GCPHYS/});
\item[IMULL]local multiple scattering flag (see \FCind{/GCPHYS/});
\item[INGOTO] daughter number, in the current mother, which the particle will enter if continuing along a straight line for {\tt SNEXT} centimeters;
\item[NLDOWN]lowest level reached down the tree (parallel tracking only);
\item[NLEVIN]number of levels currently filled
and valid in \FCind{/GCVOLU/};
\item[NLVSAV]current level (parallel tracking only);
\item[ISTORY]User flag for current track history (reset to $0$ in \Rind{GLTRAC});
\item[POLAR]polarisation vector for current \v{C}erenkov photon;
\item[NAMEC1]additional list of mechanism names for current step (ASCII codes stored in an integer, 4 characters per word);
\end{DLtt}
List of mechanisms active in the current step.
\begin{verbatim}
      CHARACTER*4 MEC(MAXMEC),MEC1(MAXME1),DFLT(2)
      PARAMETER (LEFTM1=MAXME1-9)
      DATA MEC/'NEXT','MULS','LOSS','FIEL','DCAY','PAIR','COMP','PHOT'
     + ,'BREM','DRAY','ANNI','HADR','ECOH','EVAP','FISS','ABSO'
     + ,'ANNH','CAPT','EINC','INHE','MUNU','TOFM','PFIS','SCUT'
     + ,'RAYL','PARA','PRED','LOOP','NULL','STOP'/
      DATA MEC1/'LABS','LREF','SMAX','SCOR','CKOV','REFL','REFR',
     + 'SYNC','STRA',LEFTM1*' '/
\end{verbatim}
\begin{DLtt}{MMMMMMMMM}
\item[NEXT ~~1] particle has reached the boundary of current volume;
\item[MULS ~~2] multiple scattering;
\item[LOSS ~~3] continuous energy loss;
\item[FIEL ~~4] bending in magnetic field;
\item[DCAY ~~5] particle decay;
\item[PAIR ~~6] photon pair-production or muon direct pair production;
\item[COMP ~~7] Compton scattering;
\item[PHOT ~~8] photoelectric effect;
\item[BREM ~~9] bremsstrahlung;
\item[DRAY ~10] $\delta$-ray production;
\item[ANNI ~11] positron annihilation;
\item[HADR ~12] hadronic interaction;
\item[ECOH ~13] hadronic elastic coherent scattering;
\item[EVAP ~14] nuclear evaporation;
\item[FISS ~15] nuclear fission;
\item[ABSO ~16] nuclear absorption;
\item[ANNH ~17] anti-proton annihilation;
\item[CAPT ~18] neutron capture;
\item[EINC ~19] hadronic elastic incoherent scattering;
\item[INHE ~20] hadronic inelastic scattering;
\item[MUNU ~21] muon-nuclear interaction;
\item[TOFM ~22] exceeded time of flight cut;
\item[PFIS ~23] nuclear photo-fission;
\item[SCUT ~24] the particle, due to bending in the magnetic field, was unexpectedly crossing volume boundaries and the step has been halved to avoid this;
\item[RAYL ~25] Rayleigh effect;
\item[PARA ~26] parametrisation activated;
\item[PRED ~27] error matrix computed ({\tt GEANE} tracking);
\item[LOOP ~28] not used;
\item[NULL ~29] no mechanism is active, usually at the entrance of a new volume;
\item[STOP ~30] particle has fallen below energy threshold and tracking stops;
\item[LABS 101] \v{C}erenkov photon absorption;
\item[LREF 102] \v{C}erenkov photon reflection/refraction;
\item[SMAX 103] step limited by {\tt STEMAX};
\item[SCOR 104] correction against loss of precision in boundary crossing;
\item[CKOV 105] \v{C}erenkov photon generation;
\item[REFL 106] \v{C}erenkov photon reflection;
\item[REFR 107] \v{C}erenkov photon refraction;
\item[SYNC 108] synchrotron radiation generation;
\item[STRA 109] PAI or ASHO model used for energy loss fluctuations.
\end{DLtt}
\FComm{GCUNIT}{Description of logical units}
\begin{verbatim}
      COMMON/GCUNIT/LIN,LOUT,NUNITS,LUNITS(5)
      INTEGER LIN,LOUT,NUNITS,LUNITS
      COMMON/GCMAIL/CHMAIL
      CHARACTER*132 CHMAIL
C
\end{verbatim}
\begin{DLtt}{MMMMMMMM}
\item[LIN]input unit to read data records;
\item[LOUT]output unit;
\item[NUNITS]number of additional units;
\item[LUNITS]list of additional units;
\item[CHMAIL]character string containing the message to be printed by \Rind{GMAIL}.
\end{DLtt}
{\tt LIN} and {\tt LOUT} are defined in \Rind{GINIT} through {\tt ZEBRA}. {\tt NUNITS} and {\tt LUNITS} are reserved for user {\tt ZEBRA} files.
\FComm{GCVOLU}{Current geometrical information} \begin{verbatim} COMMON/GCVOLU/NLEVEL,NAMES(15),NUMBER(15), +LVOLUM(15),LINDEX(15),INFROM,NLEVMX,NLDEV(15),LINMX(15), +GTRAN(3,15),GRMAT(10,15),GONLY(15),GLX(3) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NLEVEL] level in the geometrical tree reached by the last successful search; \item[NAMES]volume names at each level in the current tree (ASCII codes stored in an integer, 4 characters per word); \item[NUMBER]volume copy or division numbers at each level in the tree; \item[LVOLUM]volume numbers in the {\tt JVOLU} bank at each level in the tree; \item[LINDEX]number of the daughter where the current track is at each level in the tree; \item[INFROM] daughter of the current volume from which the particle exited; \item[NLEVMX] maximum number of levels in the geometry tree; \item[NLDEV] number of the volumes at each level whose structure has been {\it developed}; \item[LINMX] number of positioned contents or cells from division at each level; \item[GTRAN]x,y,z offsets of the cumulative coordinate transformation from the master system to the system at each level; \item[GRMAT]rotation matrix elements for the cumulative transformation from the master system to the system at each level; ${\tt GRMAT(10,LEVEL)}=0$ indicates the null rotation; \item[GONLY] flag indicating if the volume is {\tt ONLY} (1) or {\tt MANY} (0) at each level in the tree; \item[GLX]current point in local coordinates system (local use only!). \end{DLtt} \FComm{GCVOL2}{Back-up for \FCind{/GCVOLU/}} The variables have the same meaning of the variables in common \FCind{/GCVOLU/} with similar names. \begin{verbatim} COMMON/GCVOL2/NLEVE2,NAMES2(15),NUMB2(15), +LVOL2(15),LIND2(15),INFRO2,NLDEV2(15),LINMX2(15), +GTRAN2(3,15),GRMAT2(10,15),GONLY2(15),GLX2(15) INTEGER NLEVE2,NAMES2,NUMB2,LVOL2,LIND2,INFRO2,NLDEV2,LINMX2 C \end{verbatim} \FComm{GCXLUN}{Logical units number for the interactive version} \begin{verbatim} COMMON/GCXLUN/LUNIT(128) * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[LUNIT]Logical units numbers. 
\end{DLtt} \FComm{GCCURS}{Cursor position information for interactive graphics} \begin{verbatim} COMMON/GCCURS/INTFLA,SIZD2,FACHV,HALF,SAVPLX,SAVPLY,YPLT,XPLT * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[INTFLA] \item[SIZD2] \item[FACHV] \item[HALF] \item[SAVPLX] \item[SAVPLY] \item[YPLT] \item[XPLT] \end{DLtt} \FComm{GCURSB}{} \begin{verbatim} COMMON/GCURSB/NUMNDS,IADDI,NUMND2,NNPAR,IISELT COMMON/GCURSC/MOMO CHARACTER*4 MOMO * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[NUMNDS] \item[IADDI] \item[NUMND2] \item[NNPAR] \item[IISELT] \item[MOMO] \end{DLtt} \FComm{GCSTRA}{Variables for the PAI energy loss model} \begin{verbatim} PARAMETER (ILTAB=200) COMMON /GCSTR2 / EMAX,EM(200),SFINT,EPSR(ILTAB),EPSI(ILTAB), + FINT(ILTAB),EMIN,EPPS,BETA2,GAMMA2,WP2,S2,MEEV,EMM(200), + GAMLOG(21),NP,NTAB,IE,NFACT,NICOLL * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[EMAX] \item[EM] \item[SFINT] \item[EPSR] \item[EPSI] \item[FINT] \item[EMIN] \item[EPPS] \item[BETA2] \item[GAMMA2] \item[WP2] \item[S2] \item[MEEV] \item[EMM] \item[GAMLOG] \item[NP] \item[NTAB] \item[IE] \item[NFACT] \item[NICOLL] \end{DLtt} \FComm{GCASHO}{Variables for the ASHO energy loss model} \begin{verbatim} COMMON/GCASHO/ZMED,AMED,DMED,E0MED,ZSMED(50),ESMED(50),ALFA, * STEP,PLIN,PLOG,BE2,PLASM,TRNSMA, * BOSC(50),AOSC(50),EOSC(50),ZOSC(50),EMEAN, * CMGO(2000),EMGO,EMGOMI, * NSMED,IOSC(50),NOSC,NMGO,NMGOMA C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[ZMED] \item[AMED] \item[DMED] \item[E0MED] \item[ZSMED] \item[ESMED] \item[ALFA] \item[STEP] \item[PLIN] \item[PLOG] \item[BE2] \item[PLASM] \item[TRNSMA] \item[BOSC] \item[AOSC] \item[EOSC] \item[ZOSC] \item[EMEAN] \item[CMGO] \item[EMGO] \item[EMGOMI] \item[NSMED] \item[IOSC] \item[NOSC] \item[NMGO] \item[NMGOMA] \end{DLtt} \FComm{GCHIL2}{Temporary {\tt ZEBRA} link area for the drawing of the geometrical tree} \begin{verbatim} COMMON/GCHIL2/LARETT(2),JTICK,JMYLL,JFIMOT,JFISCA,JFINAM, + JAASS1,JAASS2, + JAASS3,JAASS4,JTICKS,JMYLLS,JMYMOT * \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[LARETT] {\tt ZEBRA} control variables for the link area; \item[JTICK] \item[JMYLL] \item[JFIMOT] \item[JFISCA] \item[JFINAM] \item[JAASS1] \item[JAASS2] \item[JAASS3] \item[JAASS4] \item[JTICKS] \item[JMYLLS] \item[JMYMOT] \end{DLtt} \FComm{GCVOL1}{Push-pop stack of the volume tree for \v{C}erenkov tracking} These variables are used to save and restore the variables with the similar name in the \FCind{/GCVOLU/} common block. \begin{verbatim} COMMON/GCVOL1/NLEVL1,NAMES1(15),NUMBR1(15),LVOLU1(15) C \end{verbatim} For more information on the meaning of these variables see the {\tt JETSET} documentation~\cite{bib-JETS}. \FComm{GCLUND}{Control variables for the interface with {\tt JETSET}} \begin{verbatim} COMMON/GCLUND/IFLUND,ECLUND C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[IFLUND] flavour of the quarks to be generated, first input variable to \Rind{LUEEVT}; \item[ECLUND] energy in GeV of the \Pem\Pep collision, second input variable to \Rind{LUEEVT}. \end{DLtt} \FComm{GCPMXZ}{Number of elements with photoelectric cross-section} Number of elements for which the Sandia parametrisation is used for the photoelectric cross-sections. \begin{verbatim} PARAMETER (MAXELZ=100) C \end{verbatim} \FComm{GC10EV}{Lower limit for Sandia parametrisation} \begin{verbatim} PARAMETER (G10EV=1.0E-8) PARAMETER (TENEV=1.E-2) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[G10EV] lower limit in GeV; \item[TENEV] lower limit in keV; \end{DLtt} \FComm{GCSHPT}{Shell potentials} The meaning of the variables is explained in the comments. 
\begin{verbatim} C Shells are numbered from 1 to 24. C Shells used: C K,L1,L2,L3,M1,M2,M3,M4,M5 C N1,N2,N3,N4,N5,N6,N7, C O1,O2,O3,O4,O5,P1,P2,P3 C VARIABLES: C NSHLST - value of Z for which the shells starts to be present C N1ST - pointer to K shell of a given Z (in ESHELL array) C NSHLLS - Number of used shells for a given Z C ESHELL - Shells potentials in eV !!! INTEGER LENGTH,MAXSHL PARAMETER (LENGTH= 1409) PARAMETER (MAXSHL=24) INTEGER NSHLST,N1ST,NSHLLS REAL ESHELL DIMENSION NSHLST(MAXSHL),N1ST(MAXELZ),NSHLLS(MAXELZ) DIMENSION ESHELL(LENGTH) COMMON /GCSHPT/NSHLST,N1ST,NSHLLS,ESHELL C \end{verbatim} \FComm{GCPHPR}{Probability of radiative decay mode} \begin{verbatim} C Probability of radiative decay mode. COMMON /GCPHPR/ GFLUPR(4,MAXELZ) C \end{verbatim} \FComm{GCPHNR}{Nonradiative decay mode for photoelectric effect} \begin{verbatim} C INRFIN - nonradiative decay mode COMMON /GCPHNR/ IGNRFN(8,MAXELZ) C \end{verbatim} \FComm{GCPHRD}{Radiative rates for photoelectric effect} \begin{verbatim} C GRATE - radiative modes' rates PARAMETER (KSHLS=6) PARAMETER (L1SHLS=8) PARAMETER (L2SHLS=7) PARAMETER (L3SHLS=8) PARAMETER (ISHLS=29) COMMON / GCPHRD / GPHRAT(ISHLS,MAXELZ),ISHLUS(24,4),ISHLTR(ISHLS) C \end{verbatim} \FComm{GCPHXS}{Sandia parametrisation coefficients} \begin{verbatim} +KEEP,GCPHXS. PARAMETER (MAXPOW=4) PARAMETER (MAXINT=13) CHARACTER*6 CRNGUP COMMON /GCPXRN/ CRNGUP(MAXINT,MAXELZ) COMMON /GCPXCF/ COFS(MAXPOW,MAXINT,MAXELZ),GPOMIN(MAXELZ) C \end{verbatim} \begin{DLtt}{MMMMMMMM} \item[MAXPOW] maximum power of the variable $E^{-1}$ in the parametrisation; \item[MAXINT] maximum number of parametrisation intervals; \item[MAXELZ] maximum number of elements included in the parametrisation; \item[CRNGUP] limits of the energy intervals for the parametrisation; \item[COFS] coefficients of the parametrisation; \item[GPOMIN] minimum value of the parametrisation; \end{DLtt}
\section*{Abstract}
Especially in daily rush-hour scenarios, a street network needs enough capacity to support the number of drivers. On the other hand, a street network with too much capacity would be inefficient outside of rush hour. Hence, the overload during rush hour should be spread over the network to reduce its impact (\eg\ traffic jams). To improve the distribution of routes, alternative routes are computed in multicriteria settings. To support these settings, Dijkstra's algorithm is combined with personalized routing: the multiple metrics, \eg\ travel-time or travel-distance, are reduced to scalars by applying user preferences to them. The resulting routes or paths are called personalized paths. However, many previous approaches for computing alternative routes need too much parameter tuning, or fall short in their computational complexity, their required runtime, or the diversity of the routes they find. Therefore, this thesis presents a combination of enumerating personalized paths with the creation of a new penalizing metric. The goal is to compensate other popular metrics with this new metric, which improves the spread of routes over the network. The new metric can be processed by every routing algorithm that is capable of dealing with multicriteria routes. By enumerating personalized paths with this new penalization, the found routes are distributed successfully over the network. In addition, user-provided tolerances for the preferred metrics are respected. Finally, experiments on street networks from OpenStreetMap are presented, comparing routes from Dijkstra (with personalized routing) with the enumeration of personalized routes. To speed up the route queries significantly, the underlying graphs are contracted via contraction hierarchies before the searches happen. The contraction is realized with a linear program to improve the performance of multicriteria contractions.
\section{Acknowledgments}
The authors wish to express their gratitude to Dr. Nikola Petrovic for providing us with the necessary support and help in the course of this project.
%\documentclass[a1paper,landscape,showframe,fontscale=.42]{baposter} %%THIS are the max size given by the COMPLENET website and is bigger than A1!: %%\documentclass[paperwidth=42in, paperheight=42in,landscape,showframe,fontscale=.42]{baposter} %%\documentclass[paperwidth=42in, paperheight=33.1in,landscape,showframe,fontscale=.42]{baposter} \documentclass[a1paper,portrait,showframe,fontscale=.46]{baposter} %%%%lualatex on %\usepackage{luatextra} \usepackage{fontspec} %Ligatures={Contextual, Common, Historical, Rare, Discretionary} \setmainfont[Mapping=tex-text]{Linux Libertine O} \usepackage{enumerate} \usepackage[english]{babel} \usepackage{graphicx} %to insert pictures \usepackage{color} %to set colors \usepackage{algorithm,algorithmicx,algpseudocode} \usepackage{mathtools} \usepackage{latexsym} \usepackage{caption} \usepackage{multicol} \usepackage{array} \usepackage{float} \usepackage{booktabs} \algnewcommand\And{\textbf{and}} \DeclarePairedDelimiter\abs{\lvert}{\rvert}% \DeclarePairedDelimiter\norm{\lVert}{\rVert}% \newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \makeatletter \let\oldabs\abs \def\abs{\@ifstar{\oldabs}{\oldabs*}} \let\oldnorm\norm \def\norm{\@ifstar{\oldnorm}{\oldnorm*}} \makeatother %\usepackage[top=1.5cm,bottom=2cm,left=2.5cm,right=2.5cm]{geometry} %\linespread{1.5}\selectfont \author{Simon Carrignon} \definecolor{bordercol}{RGB}{255,255,255} \definecolor{headercol1}{RGB}{142,161,42} \definecolor{epnetcol}{RGB}{142,161,42} \definecolor{headercol2}{RGB}{255,255,255} \definecolor{headerfontcol}{RGB}{78,78,78} \definecolor{boxcolor}{RGB}{255,255,255} \definecolor{emphcol}{RGB}{106,105,180} %%% Save space in lists. Use this after the opening of the list %%%%%%%%%%%%%%%% \newcommand{\compresslist}{ \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} } \newcommand{\coloremph}[1]{ \textcolor{emphcol}{\bf#1} } \begin{document} \begin{poster}{ borderColor=bordercol, headerColorOne=headercol1, headerColorTwo=headercol2, headerFontColor=headerfontcol, % Only simple background color used, no shading, so boxColorTwo isn't necessary boxColorOne=boxcolor, headershape=roundedright, headerfont=\Large\sf\bf, textborder=rectangle, headerborder=open, background=plain, bgColorOne=white, boxshade=plain, eyecatcher, columns=2 } { } { Mis cosillas } { Juan Moros, Juan Moros, and Juan Moros\\ {\small [email protected] \& \small [email protected]} } { \setlength\fboxsep{0pt} \setlength\fboxrule{0.5pt} \begin{minipage}{14em} %\vspace*{\stretch{1}} \includegraphics[height=8em]{logos/epnetLogo2.png} %\vspace*{\stretch{1}} %\includegraphics[angle=90,width=2.5em]{MemoireLophiss/images/logo_p7_large} \end{minipage} } \headerbox{Introduction}{name=introduction,column=0,row=0}{ Cultural change comprises processes that modify spread of information by social interaction within a population~\cite{boyd_origin_2005}. Numerous social scientists are using an evolutionary framework to model this~\cite{henrich_evolution_2003}. Here we follow this trend to study the dynamics of an exchange based economy. This economy is a social activity depending on particular cultural traits: the value attributed to goods used to trade during the exchange. Multiple cultural parameters could influence the way those values evolve through space and time leading to different dynamics. In this study we focus on how the topology of the cultural network impacts the dynamics of such a trade based economy. 
To do so, we start from a system where a mechanism of cultural transmission, biased toward the success of the individual, allows the agents to efficiently modify the value they attribute to each good, so that every agent can gather all the goods it does not produce itself.
The cultural networks allow all the producers of one good to know the economic success of the other producers of the same good and to copy their strategies.
We propose an experimental setup that allows us to change the topology of this cultural network and to look at how the trade dynamics are affected by such changes.
}
\headerbox{Model}{name=ud,column=1,row=0}{
To explore how cultural network topologies influence the co-evolution between trade and cultural change, we developed a simple framework where the different agents produce and trade goods.
The model is composed of a population $Pop$ of $m$ agents. Each agent $i$ is defined by two vectors $Q^i$ and $V^i$ of size $n$. $Q^i$ stores the quantity of each good owned by $i$ and $V^i$ represents the price estimated by $i$ for each of the $n$ goods.
%\begin{algorithm}[H]
% \scriptsize
%\caption{Model}
%\label{algo:complete}
% \begin{algorithmic}[1]
% \State INITIALIZATION:
% \For{$i \in \#Pop$} \Comment{Initialize the agent with no goods and a random value vector}
% \State $Q^i = (0, \cdots, 0)$
% \State $V^i = (v^i_0, \cdots, v^i_n)$ \Comment{The values of $v^i_j$ are selected randomly}
% \EndFor
%
% \State SIMULATION:
% \Loop{$~step \in TimeSteps$}
% \For{$i \in Pop$}
% \State $Production(Q^i)$
% \EndFor
% \For{$i \in Pop$}
% \For{$j \in Pop$}
% \State $TradeProcess(V^i,Q^i,V^j,Q^j)$
% \EndFor
% \EndFor
% \For{$i \in Pop$}
% \State $ConsumeGoods(Q^i)$ \Comment{All goods are consumed}
% \If{$ (step \mod CulturalStep) = 0$}
% \State $CulturalTransmission(V)$
% \State $Innovation(V^i)$
% \EndIf
% \EndFor
% \EndLoop
%\end{algorithmic}
%\end{algorithm}
Given the prices attributed by the agents to each good ($V^i$), an exchange is made (or not) using the trade network (in green in Figure~\ref{fig:feedbackSchema}). Given the quantities ($Q^i$) gathered, a score reflecting the ``economic success'' of each agent is attributed. Using this score, agents use their cultural network (in blue in Figure~\ref{fig:feedbackSchema}) to update the value attributed to each good $V^i$.
\begin{figure}[H]
\centering
\includegraphics[trim={2cm 6cm 2cm 5cm},clip,width=8cm]{img/trade-cultural.png}
\caption{ {\small Illustration of the interaction between the Trade network and the Cultural networks}}
\label{fig:feedbackSchema}
\end{figure}
% We first compare the impact of different $CulturalTransmission$ mechanism on the distribution of frequencies of traits (the belief about the price of each goods).
% \begin{figure}[H]
% \centering
% \setlength{\columnseprule}{0pt}
% \begin{multicols}{2}
% \includegraphics[width=5.2cm]{img/2SetupDistribA.pdf}
% \caption{Comparaison of the distribution of frequencies between the neutral and the trade model.}
% \label{fig:2setDi}
% \end{multicols}
% \end{figure}
% \vspace{-.8cm}
% The figure~\ref{fig:2setDi} shows that when $CulturalTransmission$ is neutral (agents randomly copy prices) the distribution follow the well know power law \cite{bentley_random_2004} but when transmission is not neutral but biased by the economical success of the agents, the power law disappear.
}
\headerbox{Simulations \& Results}{name=res1,column=0,span=2,below=ud}{
\begin{multicols}{2}
\subsection*{Simulation}
In all the simulations we use a population of $m=600$ agents and $n=3$ goods. A penalty of $1$ is given to the agents unable to exchange their good for one of the other goods. If the exchange is made, the penalty is reduced the closer the gathered quantity ($Q^i$) is to an optimal value $O^i$ shared by all the agents. During one timestep, the agents exchange their goods 10 times before updating the values they attribute to the goods. The score of an agent is given by the sum of its penalties.
\subsection*{Fully connected Cultural Network}
We first carry out simulations in which the cultural networks are complete, \emph{i.e.} every agent knows the strategies of all the other producers of its own good.
\begin{figure}[H]
\centering
\begin{multicols}{2}
\includegraphics[width=5cm]{img/full.pdf}
\caption{Evolution of the score of the agents in a setup with 600 agents and 3 goods, trading and exchanging their strategies during 10000 timesteps.}%%
\label{fig:scoreEvol}
\end{multicols}
\end{figure}
\vspace{-1cm}
When the cultural network is fully connected, the mean score of the agents converges to a value around 3. This means that, of the 10 exchanges they make, in the worst case there are 3 exchanges during which they are not able to exchange one of the goods.
\subsection*{Influence of Average Distance ($L$) and Average Degree ($\left\langle k\right\rangle$)}
\begin{multicols}{2}
To test the influence of the topology of the cultural networks we build several networks with the same average distance $L$ and different average degrees $\left\langle k\right\rangle$. To reach those values, we create chain or ring lattices of $v$ neighbours and then we rewire other lattices with $v'<v$ until the former network's $L$ is achieved.
\columnbreak
\vspace{1cm}
\begin{center}
\large
\begin{tabular}{l|ccc}
 & $\left\langle k\right\rangle_1$ & \dots & $\left\langle k\right\rangle_n$ \\ \hline
$L_1$ & $G_{11}$ & & $G_{1n}$ \\
\dots & & \dots & \\
$L_m$ & $G_{m1}$ & & $G_{mn}$ \\
\end{tabular}
\end{center}
\end{multicols}
\columnbreak
The next table illustrates some of the topologies we designed for networks with 200 agents and for some pairs of values $\{L,\left\langle k\right\rangle\}$.
\begin{center}
\begin{tabular}{m{1.2cm}m{2cm}m{2cm}}
 & $\left\langle k\right\rangle=8$ & $\left\langle k\right\rangle=4$\\
$L\approx17$& \includegraphics[width=2cm]{img/g02.pdf}& \includegraphics[width=2cm]{img/g00.pdf}\\
$L\approx4$& \includegraphics[width=2.5cm]{img/g42.pdf}& \includegraphics[width=2.5cm]{img/g40.pdf}\\
\end{tabular}
\end{center}
\subsection*{Results}
The average distance $L$ proves to be the key property, not only for the simulations to reach the equilibrium faster, but also to enhance the performance of the agents. For equal values of $L$, there is no clear influence of the density in either aspect. Nonetheless, the best performance is worse in these more realistic networks than in the fully connected experiment.
\begin{multicols}{2}
\begin{figure}[H]
\center
\includegraphics[width=7cm]{img/provResults2.pdf}
\caption{ \small Behaviour of the simulations with different topologies.}
\label{fig:resultsTs}
\end{figure}
\columnbreak
\begin{figure}[H]
\center
\includegraphics[width=5cm]{img/L_vs_t10.pdf}
\caption{ \small Time of convergence as a function of $L$.}
\label{fig:resultsConv}
\end{figure}
\end{multicols}
\end{multicols}
}
\headerbox{Concluding Remarks}{name=conclusion,column=0,below=res1}{
Trade dynamics and cultural mechanisms are often studied separately. A model of cultural evolution allows us to bring together those two aspects and to perform a quantitative analysis of the influence of both sides on the general dynamics of the system. To illustrate the importance of the interaction between cultural and economic mechanisms, we show with this study how some particular properties of the cultural network influence the speed of convergence of the trade mechanisms. This strongly suggests that when studying economic dynamics one cannot put aside the cultural mechanisms underlying the economy. In future studies we want to measure how the properties shown here can improve the economic resilience of a system under unstable conditions.
}
\headerbox{References}{name=references,column=1,below=res1}{
\scriptsize
\renewcommand{\refname}{\vspace{-0.5em}}
\bibliographystyle{unsrt}
\bibliography{biblio}
}
\headerbox{Acknowledgements}{name=acknowledgements,column=1,below=references}{
Funding for this work was provided by the ERC Advanced Grant EPNet (340828).
\begin{center}
\begin{tabular}{m{2cm}m{2cm}m{2cm}}
\includegraphics[width=2cm]{logos/bscLogo.jpg}&
\includegraphics[width=1.5cm]{logos/LOGO-ERC.jpg}&
\includegraphics[width=2cm]{logos/logoUB.jpg}
\end{tabular}
\end{center}
}
\end{poster}
\end{document}
\documentclass[11pt, oneside]{scrartcl} % use "amsart" instead of "article" for AMSLaTeX format %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % packages \usepackage[portrait]{geometry} % See geometry.pdf to learn the layout options. There are lots. \geometry{a4paper} % ... or a4paper or a5paper or ... %\geometry{landscape} % Activate for rotated page geometry %\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent \usepackage{graphicx} % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode % TeX will automatically convert eps --> pdf in pdflatex \usepackage{amssymb} \usepackage{amsmath} \usepackage{tikz} \usepackage{tikz-3dplot} \usepackage{mathtools} \usepackage{pgfplots} \usetikzlibrary{angles, quotes} \pgfplotsset{width=7cm,compat=newest} \usepackage{listings} \usepackage[d]{esvect} \usepackage{color} \usepackage{eso-pic} \usepackage{hyperref} % \include{mydefines} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % color definitions \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % definitions for listings \lstset{ backgroundcolor=\color{white}, % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}; should come as last argument basicstyle=\footnotesize, % the size of the fonts that are used for the code breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace breaklines=true, % sets automatic line breaking captionpos=b, % sets the caption-position to bottom commentstyle=\color{mygreen}, % comment style deletekeywords={...}, % if you want to delete keywords from the given language escapeinside={\%*}{*)}, % if you want to add LaTeX within your code extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8 firstnumber=1000, % start line enumeration with line 1000 frame=single, % adds a frame around the code keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible) keywordstyle=\color{blue}, % keyword style language=Octave, % the language of the code morekeywords={*,...}, % if you want to add more keywords to the set numbers=left, % where to put the line-numbers; possible values are (none, left, right) numbersep=5pt, % how far the line-numbers are from the code numberstyle=\tiny\color{mygray}, % the style that is used for the line-numbers rulecolor=\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here)) showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces' showstringspaces=false, % underline spaces within strings only showtabs=false, % show tabs within strings adding particular underscores stepnumber=2, % the step between two line-numbers. 
% If it's 1, each line will be numbered
  stringstyle=\color{mymauve},     % string literal style
  tabsize=2,                       % sets default tabsize to 2 spaces
  title=\lstname                   % show the filename of files included with \lstinputlisting; also try caption instead of title
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% headlines and footlines
\usepackage[headsepline]{scrlayer-scrpage}
\pagestyle{scrheadings}
\clearpairofpagestyles
% \ohead{\textsf{Section \thesection}} % \thesection
\ohead{\headmark}
\automark[subsection]{section}
% \chead{\textsf{Page \thepage}}
\chead[\pagemark]{Page \pagemark}
\ihead{\textsf{Project Description}}
\cfoot[\pagemark]{Page \pagemark}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\setlength{\parindent}{0pt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% new commands
\newcommand{\mb}[1]{{\mathbf #1}}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% title page
\begingroup
\thispagestyle{empty}
\centering
\AddToShipoutPicture*{\put(30,100){\includegraphics[scale=0.55]{Figures/HardwareSetup_BluePill+STLinkV2+USBSerialAdapter.jpeg}}} % Image background
\par\normalfont\fontsize{30}{30}\sffamily\selectfont
\vspace*{1.0cm}
{\color{blue}
\textbf{\Huge My "Blue Pill" Projects Test Setup} \\
\textbf{\huge Description and Evaluation} \\
\vspace*{0.5cm}
\hspace{-0.3cm} {\textbf\huge by Dr. Markus Reinhardt }\par % Book title
\hspace{-1.3cm} {\textbf \huge \today}\par % Author name
\vspace*{1.5cm}
}
\endgroup
\vfill
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% table of contents
\newpage
\thispagestyle{empty}
\tableofcontents
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% lists of figures and tables
\newpage
\thispagestyle{empty}
\listoffigures
\listoftables
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% the main text
\newpage
\pagestyle{scrheadings}
\section{Project goals}
The projects are created to test the basic HW / SW concepts for projects based on the so-called "Blue Pill" STM32 board. The IDE used is based on VSCode with the PlatformIO extension; the STM32 board is operated with the Arduino environment.
The HW setup allows programming the board with an ST-Link V2 adapter and also serial communication with a (Linux) PC via a USB/Serial adapter.
In a second HW setup the control of an LCD via the I2C interface is tested.
In a third project two rotary encoders are used as input devices.
In a fourth project the ADC of the STM32 processor is tested with different resolutions.
In a fifth project the PWM generation is tested.
In a sixth project the usage of Flash storage for strings and other constants is tested.
In a seventh project the usage of the FreeRTOS real time operating system is tested with a C based standard way of task definition.
In an eighth project the usage of the FreeRTOS real time operating system is tested with a C++ based way of task definition.
The "Blue Pill" STM32 board has an STM32F103C8T6 processor with 20KB RAM and 64KB flash memory, running at 72MHz.
See also \href{https://docs.platformio.org/en/latest//boards/ststm32/bluepill_f103c8.html}{"Blue Pill F103C8" in PlatformIO}.
All the programs described below are developed in the PlatformIO IDE and saved in the overall project Gitea repository.
\section{Project 1: Hardware Setup}
A simple HW setup is shown in figure \ref{fig:HWSetup}.
\begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Figures/HardwareSetup_BluePill+STLinkV2+USBSerialAdapter.jpeg} \caption{Project 1 hardware setup} \label{fig:HWSetup} \end{figure} Note the jumper positions of the two yellow jumpers on the (blue colored) "Blue Pill" board!\\ Programming is done with the (metallic blue colored) ST-Link V2 module. Also the 3.3V power supply for the "Blue Pill" board is delivered by this module. There are four pins of the module connected with the "Blue Pill" board. The connections are as follows (Table \ref{table:ConnectionSTLink1}): \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Blue Pill} & \textbf{Cable color} & \textbf{ST-Link V2} \\ \hline GND & Brown & GND (Pin 6) \\ \hline 3.3V & Red & 3.3V (Pin 8) \\ \hline CLK & Orange & SWCLK (Pin 2) \\ \hline DIO & Yellow & SWDIO (Pin 4) \\ \hline \end{tabular} \caption{Connection "Blue Pill" to ST-Link V2} \label{table:ConnectionSTLink1} \end{table} A USART interfacing between the Blue Pill board and the PC / IDE is realized with the (red colored) USB/Serial adapter board. Note the jumper position on the board is such that the 3.3V output voltage is provided (but not connected in this setup) as the "Blue Pill" processor is operated with 3.3V. In the software the Arduino Serial1 interface port is used. The connection between the USB/Serial adapter and the Blue Pill board is done as follows (Table \ref{table:ConnectionSerial1}): \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Blue Pill} & \textbf{Cable color} & \textbf{USB/Serial} \\ \hline GND & Black & GND \\ \hline TX1 (Pin PA9) & Gray & RX \\ \hline RX1 (Pin PA10) & Brown & TX \\ \hline \end{tabular} \caption{Connection "Blue Pill" to USB/Serial adapter} \label{table:ConnectionSerial1} \end{table} The USB/Serial adapter appears under /dev/ttyUSB0 in the Linux operating system. This port has to be selected in the IDE, when the Serial Monitor is activated. \section{Project 1: PlatformIO IDE } The IDE for project 1 with the first Arduino sketch is shown in figure \ref{fig:IDESetup}. \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Figures/VSCode+PlatformIO+SerialMonitor+Blinky.png} \caption{VSCode/PlatformIO (Arduino environment) IDE} \label{fig:IDESetup} \end{figure} The picture shows the PlatformIO IDE within the VSCode editor and the main Arduino sketch which implements the simple blinking of the on-board LED and the output of data via the Serial1 port. The picture also shows the output of dots (see the sketch code) via the USB/Serial adapter and the Serial1 Arduino interface to the Arduino Serial Monitor displayed in the lower part of the IDE.\\ The development cycle is controlled by pressing the relevant buttons in the blue bottom line of the IDE (see the call-outs of the buttons when moving over them with the mouse pointer).\\ \textbf{Important Note}: If you have multiple projects within the Platform IDE, do not forget to select the right project in the bottom line (in the figure here: "Default (BluePillBlinky)") before compiling.\\ Compilation is done by pressing the "Check" button. Program download is done by pressing the "Right Arrow" button. Activation of the Serial Arduino Monitor is done with the "Plug" button. To switch back to the PlatfromIO home screen is done with the "Home" button. The project's software directory is \verb!BluePillBlinky!. 
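To give an impression of what this first test program does, a minimal sketch of the same kind is shown below. It is only an illustration, not the exact project code from \verb!BluePillBlinky!; the on-board LED pin (PC13, active low on most "Blue Pill" boards) and the baud rate are assumptions.
\begin{lstlisting}[language=C++]
#include <Arduino.h>

const int ledPin = PC13;       // on-board LED of the "Blue Pill" (active low)

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial1.begin(9600);         // USART1 on PA9 (TX1) / PA10 (RX1)
}

void loop() {
  digitalWrite(ledPin, LOW);   // LED on (the pin is active low)
  delay(500);
  digitalWrite(ledPin, HIGH);  // LED off
  delay(500);
  Serial1.print(".");          // one dot per blink on the Serial Monitor
}
\end{lstlisting}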
\newpage
\section{Project 2: LCD control via I2C}
The IDE with the second Arduino sketch is shown in figure \ref{fig:IDESetup2}.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.85\linewidth]{Figures/Test_BluePill_I2C_LCD_IDE.png}
  \caption{VSCode/PlatformIO IDE}
  \label{fig:IDESetup2}
\end{figure}
The sketch controls a 16x4 character LCD connected to the "Blue Pill" board via the I2C interface.\\
In this sketch the Arduino serial interface is used for test messages to the Serial Monitor of the IDE. It is connected to pins PA9 (TX1) and PA10 (RX1) and via the USB/Serial adapter to the PC. The IDE shows in the lower part the Serial Monitor and the messages received from the "Blue Pill" board.\\
The HW setup for this test is shown in figure \ref{fig:HWSetupI2CLCD}.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=1.0\linewidth]{Figures/Test_BluePill_I2C_LCD_HWSetup.jpeg}
  \caption{Project 2 HW setup with I2C-LCD}
  \label{fig:HWSetupI2CLCD}
\end{figure}
The connection from the ST-Link V2 adapter to the "Blue Pill" board now uses the 5V pins according to the following table (Table \ref{table:ConnectionSTLink2}). The 3.3V for the processor is now provided by the on-board fixed power regulator. The 5V power supply is also required for the LCD.
\begin{table}[htbp]
  \centering
  \begin{tabular}{|c|c|c|}
    \hline
    \textbf{Blue Pill} & \textbf{Cable color} & \textbf{ST-Link V2} \\ \hline
    GND & Brown & GND (Pin 6) \\ \hline
    5V & Red & 5V (Pin 10) \\ \hline
    CLK & Orange & SWCLK (Pin 2) \\ \hline
    DIO & Yellow & SWDIO (Pin 4) \\ \hline
  \end{tabular}
  \caption{Connection "Blue Pill" to ST-Link V2}
  \label{table:ConnectionSTLink2}
\end{table}
The connection between the "Blue Pill" board and the I2C adapter of the LCD is done here as follows (Table \ref{table:ConnectionI2C}):
\begin{table}[htbp]
  \centering
  \begin{tabular}{|c|c|c|}
    \hline
    \textbf{Blue Pill} & \textbf{Cable color} & \textbf{I2C LCD} \\ \hline
    GND & Black & GND \\ \hline
    5V & Red & 5V \\ \hline
    SCL1 (Pin PB6) & Blue & SCL \\ \hline
    SDA1 (Pin PB7) & Green & SDA \\ \hline
  \end{tabular}
  \caption{Connection "Blue Pill" to I2C adapter of the LCD}
  \label{table:ConnectionI2C}
\end{table}
The connection between the "Blue Pill" board and the USB/Serial adapter is the same as in setup 1 (Table \ref{table:ConnectionSerial1}).
The project's software directory is \verb!TestBluePillI2C!.
\newpage
\section{Project 3: Test of Rotary Encoders}
Figure \ref{fig:BluePillRotEnc} shows the test of two rotary encoders connected to the "Blue Pill" board.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.8\linewidth]{Figures/Test_BluePill_RotaryEncoder.jpeg}
  \caption{Rotary encoder test at the "Blue Pill" board}
  \label{fig:BluePillRotEnc}
\end{figure}
The connection of the rotary encoders is done according to the following tables (Tables \ref{table:ConnectionRotEnc1} and \ref{table:ConnectionRotEnc2}):
\begin{table}[htbp]
  \centering
  \begin{tabular}{|c|c|c|}
    \hline
    \textbf{Blue Pill} & \textbf{Cable color} & \textbf{rotary encoder 1} \\ \hline
    GND & Black & GND \\ \hline
    5V & Red & 5V \\ \hline
    Pin PB12 & White & SW \\ \hline
    Pin PB13 & Blue & DT \\ \hline
    Pin PB14 & Yellow & CLK \\ \hline
  \end{tabular}
  \caption{Connection "Blue Pill" to rotary encoder 1}
  \label{table:ConnectionRotEnc1}
\end{table}
\begin{table}[htbp]
  \centering
  \begin{tabular}{|c|c|c|}
    \hline
    \textbf{Blue Pill} & \textbf{Cable color} & \textbf{rotary encoder 2} \\ \hline
    GND & Black & GND \\ \hline
    5V & Red & 5V \\ \hline
    Pin PB3 & White & SW \\ \hline
    Pin PB4 & Blue & DT \\ \hline
    Pin PB5 & Green & CLK \\ \hline
  \end{tabular}
  \caption{Connection "Blue Pill" to rotary encoder 2}
  \label{table:ConnectionRotEnc2}
\end{table}
A key press or a turn of a rotary encoder creates a HW interrupt on the processor. The specific interrupt service routines evaluate the key hits and determine the turn direction.
The project's software directory is \verb!TestBluePillRotaryEncoders!.
\newpage
\section{Project 4: Test of the ADC}
The ADC of the "Blue Pill" processor can be programmed in the Arduino environment to use different resolutions (numbers of bits). Figure \ref{fig:BluePillADC} shows the ADC characteristic of the "Blue Pill" processor for the 10bit and 12bit cases.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.9\linewidth]{Figures/BluePillADCCharacteristic.png}
  \caption{ADC characteristic of the "Blue Pill" processor}
  \label{fig:BluePillADC}
\end{figure}
The figures show the measured ADC characteristic in red and the ideal linear ADC characteristic in blue. The ADC reference voltage ($V_{ref}$) is equal to the 3.3V power supply voltage of the processor. The ADC value is given as follows:\\
For the 10bit ADC resolution case:
\begin{equation*}
  \text{ADCValue} = \frac{\text{voltage (at ADC pin Ax)}}{V_{ref}} \cdot 1024
\end{equation*}
For the 12bit ADC resolution case:
\begin{equation*}
  \text{ADCValue} = \frac{\text{voltage (at ADC pin Ax)}}{V_{ref}} \cdot 4096
\end{equation*}
The project's software directory is \verb!TestBluePillADC!.
\section{Project 5: Test of PWM}
The STM32 processor of the "Blue Pill" board has multiple PWM capable pins. Figure \ref{fig:BluePillPWM} shows the generated PWM signal of the "Blue Pill" processor on the screen of a hand-held oscilloscope.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.9\linewidth]{Figures/Test_BluePill_PWM.jpeg}
  \caption{PWM signal of the "Blue Pill" processor}
  \label{fig:BluePillPWM}
\end{figure}
The project's software directory is \verb!TestBluePillPWM!.
\section{Project 6: Test of Flash Storage}
The STM32 processor of the "Blue Pill" board has no EEPROM (unlike AVR devices), but parts of the program flash memory can be reused to emulate EEPROM storage. The project tests writing and retrieving standard and custom data types to and from the flash memory. It uses the library from \href{https://github.com/khoih-prog/FlashStorage_STM32}{Github Library FlashStorage\_STM32}.
The project's software directory is \verb!TestBluePillFlashStorage!.
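To illustrate how the emulated EEPROM is typically used, a minimal sketch is shown below. It is only a sketch for illustration and not the exact project code from \verb!TestBluePillFlashStorage!; it assumes the EEPROM-compatible \verb!get!/\verb!put! API documented by the FlashStorage\_STM32 library, and the stored struct is a made-up example.
\begin{lstlisting}[language=C++]
#include <Arduino.h>
#include <FlashStorage_STM32.h>  // emulated EEPROM in program flash

// Hypothetical custom data type to store in the emulated EEPROM.
struct Settings {
  int   counter;
  float scale;
};

void setup() {
  Serial1.begin(9600);

  Settings s = {42, 1.5f};
  EEPROM.put(0, s);              // write the whole struct at address 0
  // Depending on the library configuration an explicit EEPROM.commit()
  // may be needed to actually write the data to flash.

  Settings readBack;
  EEPROM.get(0, readBack);       // read it back from address 0
  Serial1.println(readBack.counter);
  Serial1.println(readBack.scale);
}

void loop() {}
\end{lstlisting}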
\section{Project 7: Test of FreeRTOS (first example)} Installation of STM32FreeRTOS for the PlatformIO environment is described here:\\ \href{https://platformio.org/lib/show/2093/STM32duino\%20FreeRTOS/installation}{https://platformio.org/lib/show/2093/STM32duino\%20FreeRTOS/installation}\\ Example sketches using STM32FreeRTOS are here:\\ \href{https://github.com/stm32duino/STM32FreeRTOS/tree/master/examples}{https://github.com/stm32duino/STM32FreeRTOS/tree/master/examples}\\ A first example project in PlatformIO that uses the STM32FreeRTOS real time operating system (RTOS) is created. It has two tasks. The first task is blinking the on-board LED, the second task is reading and digitizing via the ADC the voltage on port A0 and writing the result to the serial port.\\ The project's software directory is \verb!TestFreeRTOS1!. \section{Project 8: Test of FreeRTOS C++ wrapper} A first example project in PlatformIO that uses the STM32FreeRTOS with a C++ wrapper for the task definition is created. It has two tasks. The first task is blinking the on-board LED, the second task is reading and digitizing via the ADC the voltage on port A0 and writing the result to the serial port.\\ The project's software directory is \verb!TestFreeRTOSCPP1!. \newpage \appendix \section{Appendix A} \subsection{Pin-out of the "Blue Pill" board} Figure \ref{fig:BluePillPinout} shows the pin-out of the "Blue Pill" board. \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Figures/STM32-Blue-Pill-Development-Board-Pinout.jpg} \caption{Pin-out of the "Blue Pill" board} \label{fig:BluePillPinout} \end{figure} \subsection{Helpful Links} \href{https://platformio.org/platformio-ide}{PlatformIO IDE}\\ \href{https://docs.platformio.org/en/latest//boards/ststm32/bluepill_f103c8.html}{"Blue Pill F103C8" in PlatformIO}\\ \href{https://www.youtube.com/watch?v=cmHQxd_qGl8}{Installing PlatformIO and creating a sample program for STM32 Blue Pill}\\ \href{https://www.arduino.cc/en/Tutorial/HomePage}{Arduino Getting Started and Tutorials} \end{document}
% Change the title of the chapter as per your convenience.
\chapter{Some Review and Background}
\lettrine[lines=1]{T}{his} chapter gives the basic or detailed literature review of the topic and formulates the question by the end.

\section{My section}
Some equation prototypes:
\begin{equation}
    \label{lag-derv}
    \dfrac{DA}{Dt} = \dfrac{\partial A}{\partial t} + \vec{u} \cdot \vec{\nabla} A
\end{equation}
Referencing the equations is just as easy as \autoref{lag-derv}. The following is how to write the same equation without any numbering.
\begin{equation*}
    \dfrac{DA}{Dt} = \dfrac{\partial A}{\partial t} + \vec{u} \cdot \vec{\nabla} A
\end{equation*}

\section{Some other section}
Another section to add useful stuff. This is an aligned equation, for when you need to show some derivation.
\begin{align}
    & dm \, \dfrac{D \vec{u}}{Dt} = dm \vec{g} - 2 \, dm \, \vec{\Omega} \times \vec{u} + \vec{\nabla} \cdot \overleftrightarrow{\sigma} \, d^3x \nonumber \\ % \nonumber prevents it from adding any equation numbering to the line.
    \implies & \dfrac{D \vec{u}}{Dt} = \vec{g} - 2 \, \vec{\Omega} \times \vec{u} + \dfrac{1}{\rho} \, \vec{\nabla} \cdot \overleftrightarrow{\sigma} \\ % no \nonumber = equation is numbered
    \implies & \dfrac{D \vec{u}}{Dt} = \vec{g} - 2 \, \vec{\Omega} \times \vec{u} - \dfrac{1}{\rho} \, \vec{\nabla} p + \nu \left[ \nabla^2 \vec{u} + \dfrac{1}{3} \, \vec{\nabla} (\vec{\nabla} \cdot \vec{u}) \right] \label{nse-2-eq} % \label labels a numbered equation.
\end{align}
And you can reference the labelled equation as \autoref{nse-2-eq}.

\subsection{Another subsection}
You can also write long derivations with no equations being numbered by doing:
\begin{align*}
    & dm \, \dfrac{D \vec{u}}{Dt} = dm \vec{g} - 2 \, dm \, \vec{\Omega} \times \vec{u} + \vec{\nabla} \cdot \overleftrightarrow{\sigma} \, d^3x \\
    \implies & \dfrac{D \vec{u}}{Dt} = \vec{g} - 2 \, \vec{\Omega} \times \vec{u} + \dfrac{1}{\rho} \, \vec{\nabla} \cdot \overleftrightarrow{\sigma} \\
    \implies & \dfrac{D \vec{u}}{Dt} = \vec{g} - 2 \, \vec{\Omega} \times \vec{u} - \dfrac{1}{\rho} \, \vec{\nabla} p + \nu \left[ \nabla^2 \vec{u} + \dfrac{1}{3} \, \vec{\nabla} (\vec{\nabla} \cdot \vec{u}) \right]
\end{align*}
Finally, you can write a boxed result by doing:
\begin{equation}
    \therefore \quad \boxed{\dfrac{\partial \rho}{\partial t} + \vec{\nabla} \cdot \vec{J} = 0}
\end{equation}
%\documentclass[aspectratio=169,classification=confidential]{lh-presentation} \documentclass[inverse,aspectratio=169,classification=confidential]{lh-presentation} \usepackage[utf8]{inputenc} \title{A Beamer-theme for LogicalHacking.com} \subtitle{Example presentation} \institute[The University of Sheffield] {Department of Computer Science, The University of Sheffield, Sheffield, UK} \author[A.D. Brucker] {% \href{http://www.brucker.uk/}{Achim D. Brucker}\\ \texttt{\footnotesize\href{mailto:"Achim D. Brucker" <[email protected]>}{[email protected]} \hspace{.6cm} \url{http://www.brucker.uk/}} } \titlevisual{visuals/lh-title-visual-code-dark} \contactauthor{Dr. Achim D. Brucker} \contactemail{[email protected]} \contacttwitter{adbrucker} \contactlinkedin{https://de.linkedin.com/in/adbrucker/} \contactwww{https://www.brucker.ch/} \contactblog{https://logicalhacking.com/blog/} \DeclareTextCommand{\nobreakspace}{T1}{\leavevmode\nobreak\ } \begin{document} \begin{frame}[plain] \maketitle \end{frame} \AgendaFrame \section{Introduction} \subsection{A few example slides} \frame{\sectionpage} %\begin{frame}[classification={public-cc-by}] \begin{frame}[classification={confidential}] \frametitle{A standard slide} \framesubtitle{With a frame subtitle} \begin{itemize} \item Only first word of slide title \ldots \begin{itemize} \item And a second level \begin{itemize} \item Officially, a third level should be avoided \end{itemize} \end{itemize} \end{itemize} \[ x = \sum_{i=0}^{-\infty}\sqrt{-i}\] \begin{itemize} \item rm: {\rmfamily The quick {\mdseries brown fox} \emph{jumps over} \textbf{the lazy's doc back}} \item sf: {\sffamily The quick {\mdseries brown fox} \emph{jumps over} \textbf{the lazy's doc back}} \item tt: {\ttfamily The quick {\mdseries brown fox} \emph{jumps over} \textbf{the lazy's doc back}} \item {\Huge Huge} {\huge huge} {\Large Large} {\large large} {\normalsize normal} {\small small} {\footnotesize footnote} {\scriptsize scriptsize} {\tiny tiny} \end{itemize} \end{frame} \begin{frame} \frametitle{Block Examples} \begin{block}{Regular Block} The \alert{quick} \textbf{brown} \emph{fox} \ldots \end{block} \begin{alertblock}{Alert Block} The \alert{quick} \textbf{brown} \emph{fox} \ldots \end{alertblock} \begin{exampleblock}{Example Block} The \alert{quick} \textbf{brown} \emph{fox} \ldots \end{exampleblock} \begin{quotebox} The \alert{quick} \textbf{brown} \emph{fox} \ldots \end{quotebox} \end{frame} \begin{frame} A Frame without Title \ldots asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af asdf asdf af \end{frame} \section{Conclusion} \subsection{Concluding 
remarks} \frame{\sectionpage} \begin{frame}[plain,classification=confidential] \frametitle{A plain slide} \framesubtitle{With a frame subtitle} \begin{itemize} \item Only first word of slide title \ldots \begin{itemize} \item And a second level \begin{itemize} \item Officially, a third level should be avoided \end{itemize} \end{itemize} \end{itemize} \[ x = \sum_{i=0}^{-\infty}\sqrt{-i}\] \end{frame} \ThanksFrame \CopyrightFrame \PartFrame[white][lhMagentaDark]{New Chapter} \section{Another Section} \frame{\sectionpage} \begin{frame} \frametitle{Conclusion} \end{frame} \end{document}
\documentclass[a4paper,11pt]{ltxdoc} \usepackage[utf8]{inputenc} \usepackage{geometry} \usepackage{amsmath} \usepackage{color} \usepackage{subfigure} \usepackage{tabularx} \usepackage{graphicx} \usepackage{rotating} \usepackage{natbib} \usepackage{listings} \definecolor{mygray}{rgb}{0.9,0.9,0.9} \lstset{frame=single,breaklines=true,backgroundcolor=\color{mygray}} \title{Freva -- BUG -- Basic User Guide (ALPHA VERSION)} \begin{document} \maketitle \section{Start the Evaluation System via Freva} \subsection*{... in the shell} To log in to the miklip system you may use ssh from any linux/unix system: \begin{lstlisting} ssh -X [email protected] \end{lstlisting} The -X will allow you to connect to the remote X server (e.g. to display images). user-account should be your account you usally login with, if you still don't have one please ask the admins. Start setting up the environment by loading the proper module (You may copy the following line as is into your shell): \begin{lstlisting} module load freva \end{lstlisting} This activates the system for your current session. You might notice some other modules have been loaded. \subsection*{... in the web} To log in to the research group system you may use a browser: \begin{lstlisting} research.group.web.domain \end{lstlisting} The domain could be different to the shell domain, depending on how the admins set up the system. Use your normal user account. \section{Work with the Evaluation System via Freva} Freva is the all in one framework with the main features: freva --help \begin{lstlisting} Freva Available commands: --plugin : Applies some analysis to the given data. --history : provides access to the configuration history (use --help for more help) --databrowser : Find data in the system --crawl_my_data: Use this command to update your projectdata. --esgf : Browse ESGF data and create wget script This is the main tool for the evaluation system. Usage: freva --COMMAND [OPTIONS] To get help for the individual commands use freva --COMMAND --help \end{lstlisting} The core applications are the plugin, history and databrowser --- same in web and shell. \subsection*{Example Table} %\begin{table}[] %\centering %\caption{Freva main commands} %\label{commands} \begin{tabular}{lllll} \cline{1-1} \multicolumn{1}{|l|}{\textbf{Freva Option} } & \multicolumn{1}{|l|}{\textbf{Description}} & \multicolumn{1}{|l|}{\textbf{Example} } \\ --plugin & apply some analysis tool & freva --plugin pca input=myfile.nc outputdir=/tmp variable=tas \\ --history & browse your history & freva --history or freva --history --plugin movieplotter \\ --databrowser & Search the research groups data & freva --databrowser project=cmip5 variable=tas time\_frequency=mon ensemble=r[1-5]i1p1 experiment=*196[0-5] \\ --crawl\_my\_data & how2put your data into database & freva --crawl\_my\_data --path=/PATH/2/USERDATA/user-account \\ --esgf & Contact the esgf & freva --esgf --download-script /tmp/download.wget variable=tas time\_frequency=mon ensemble=r1i1p1 experiment=decadal1965 \end{tabular} %\end{table} All commands have a -h or -\-help flag to display the commands help. Basic commands for Freva: \begin{tabular}{lllll} \cline{1-1} \multicolumn{1}{|l|}{\textbf{To ... 
} } & \multicolumn{1}{|l|}{\textbf{Shell}} & \multicolumn{1}{|l|}{\textbf{Web}} \\
\multicolumn{1}{|l|}{To get the main help } & \multicolumn{1}{|l|}{freva --help} & \multicolumn{1}{|l|}{Click on Help} \\
\multicolumn{1}{|l|}{To list the plugins } & \multicolumn{1}{|l|}{freva --plugin} & \multicolumn{1}{|l|}{Click on Plugins} \\
\multicolumn{1}{|l|}{To list the plugin help } & \multicolumn{1}{|l|}{freva --plugin sometool --help} & \multicolumn{1}{|l|}{Click on a specific plugin} \\
\multicolumn{1}{|l|}{To see the history } & \multicolumn{1}{|l|}{freva --history} & \multicolumn{1}{|l|}{Click on History} \\
\multicolumn{1}{|l|}{To see the history help } & \multicolumn{1}{|l|}{freva --history --help} & \multicolumn{1}{|l|}{Read the buttons} \\
\end{tabular}

\subsection{Plugins --plugin}

This is the main access to all installed analysis tools and the history. The tools are implemented by providing plug-ins to the system. For more information on how to create a plugin, check the Basic Developer Guide (BDG).

\subsubsection*{Basic Usage}

To get the help:
\begin{lstlisting}
$ freva --plugin --help
freva --plugin [opt] query
opt:
[...]
\end{lstlisting}
To list all available analysis tools:
\begin{lstlisting}
$ freva --plugin
PCA: Principal Component Analysis
...
\end{lstlisting}
An overview of the tools in the framework is available at research.group.web/plugins.

To select a particular tool:
\begin{lstlisting}
$ freva --plugin pca
Missing required configuration for: input, variable
\end{lstlisting}
You see here that the PCA tool is complaining because of an incomplete configuration.

To get the help of a particular tool:
\begin{lstlisting}
$ freva --plugin pca --help
PCA (v3.1.0): Principal Component Analysis
Options:
areaweight (default: False)
    Whether or not you want to have your data area weighted. This is done per
    latitude with sqrt(cos(latitude)).
boots (default: 100)
    Number of bootstraps.
[...]
input (default: None) [mandatory]
    An arbitrary NetCDF file. There are only two restrictions to your NetCDF
    file: a) Time has to be the very first dimension in the variable you like
    to analyze. b) All dimensions in your variable need to be defined as
    variables themselves with equal names. Both, a) and b), are usually true.
[...]
\end{lstlisting}
Here you see each configuration parameter, its default value (None means no value is set), whether the parameter is mandatory (marked with [mandatory] next to the default value), and an explanation of the parameter.

To pass values to the tool you just need to use the key=value construct like this:
\begin{lstlisting}
$ freva --plugin pca input=myfile.nc outputdir=/tmp eofs=3
[...]
\end{lstlisting}
You may even define variables in terms of other variables, like the projection name above. While doing so from the shell, please remember that you need to escape the \$ sign with a backslash or set the value in single quotes (no, double quotes don't work). For example:
\begin{lstlisting}
$ freva --plugin pca input=myfile_\${eofs}.nc outputdir=/tmp eofs=3
#or
$ freva --plugin pca 'input=myfile_${eofs}.nc' outputdir=/tmp eofs=3
\end{lstlisting}
If you want to know more about this bash feature, have a look at the bash documentation on variable expansion and quoting. Quoting is very important in any shell, so if you use quotes, be sure you know how they work. It may help you avoid losing data!

\subsubsection*{Configuring the plugins}

All configurations are saved in the history (--history), where they can be viewed, saved, turned back into a command, and restarted!
You may want to save the configuration of the tool:
\begin{lstlisting}
$ freva --plugin pca --save-config=/home/<user_account>/evaluation_system/config/pca/pca.conf variable=tas input=myfile.nc outputdir=/tmp eofs=3
INFO:__main__:Configuration file saved in /home/<user_account>/evaluation_system/config/pca/pca.conf
\end{lstlisting}
Note that this starts the tool. To just save the configuration without starting the tool, use the -n or --dry-run flag. Also note that this stores the configuration in a special directory structure so the system can find it again. You can save the configuration somewhere else:
\begin{lstlisting}
$ freva --plugin pca --save-config=/home/<user_account>/evaluation_system/config/pca/pca.conf --dry-run --tool pca variable=tas input=myfile.nc outputdir=/tmp eofs=3
INFO:__main__:Configuration file saved in pca.conf
\end{lstlisting}
The stored configuration will be used to overwrite the default one. Here is a possible use case:

Change the defaults to suit your general needs:
\begin{lstlisting}
$ freva --plugin pca --save-config=XXX --dry-run outputdir=/my_output_dir shiftlats=false
\end{lstlisting}
Prepare some configurations you'll be using recurrently:
\begin{lstlisting}
$ freva --plugin pca --save-config=XXX --dry-run --config-file pca.tas.conf --tool pca variable=tas
$ freva --plugin pca --save-config=XXX --dry-run --config-file pca.uas.conf --tool pca variable=uas
\end{lstlisting}

\subsubsection*{Scheduling}

Instead of running your job directly in the terminal, you can involve the SLURM scheduler. To run the tool murcss analyzing the variable tas, the command is
\begin{lstlisting}
$ freva --plugin murcss variable=tas ...
\end{lstlisting}
The execution takes a certain time (here: roughly 1 minute) and prints
\begin{lstlisting}
Searching Files
Remapping Files
Calculating ensemble mean
Calculating crossvalidated mean
Calculating Anomalies
Analyzing year 2 to 9
Analyzing year 1 to 1
Analyzing year 2 to 5
Analyzing year 6 to 9
Finished. Calculation took 63.4807469845 seconds
\end{lstlisting}
To schedule the same task you would use
\begin{lstlisting}
$ freva --plugin murcss variable=tas ... --batchmode=true
\end{lstlisting}
instead. The output changes to
\begin{lstlisting}
Scheduled job with history id 414
You can view the job's status with the command squeue
Your job's progress will be shown with the command
tail -f /home/zmaw/u290038/evaluation_system/slurm/murcss/slurm-1437.out
\end{lstlisting}
The last line shows you the command to view the output, which is created by the tool. In this example you would type
\begin{lstlisting}
$ tail -f /home/zmaw/u290038/evaluation_system/slurm/murcss/slurm-1437.out
\end{lstlisting}
For jobs with a long run time, or for large numbers of jobs, you should consider scheduling them and using the batch mode!

\subsubsection*{--help}
\begin{lstlisting}
$ freva --plugin --help
Applies some analysis to the given data. See research group wiki for more
information. The "query" part is a key=value list used for configuring the
tool. It's tool dependent so check that tool help.

For Example:
freva --plugin pca eofs=4 bias=False input=myfile.nc outputdir=/tmp/test

Usage: freva --plugin [options]

Options:
  -d, --debug         turn on debugging info and show stack trace on
                      exceptions.
  -h, --help          show this help message and exit
  --repos-version     show the version number from the repository
  --caption=CAPTION   sets a caption for the results
  --save              saves the configuration locally for this user.
  --save-config=FILE  saves the configuration at the given file path
  --show-config       shows the resulting configuration (implies dry-run).
  --scheduled-id=ID   Runs a scheduled job from database
  --dry-run           dry-run, perform no computation. This is used for
                      viewing and handling the configuration.
  --batchmode=BOOL    creates a SLURM job
\end{lstlisting}

\subsection{History --history}

To get the history in the web, just click on 'History' and browse around. In the shell:
\begin{lstlisting}
$ freva --history
24) pca [2013-01-14 10:46:44.575529] <THIS MUST BE DEFINED!>.pca.<THIS MUST BE DEFINED!>.nc {u'normalize...
23) pca [2013-01-14 10:46:01.322760] None.pca.None.nc {u'normalize': u'true', u'testorthog': u'true', u'...
22) nclplot [2013-01-11 14:51:40.910996] first_plot.eps {u'plot_name': u'first_plot', u'file_path': u'tas_Am...
21) nclplot [2013-01-11 14:44:15.297102] first_plot.eps {u'plot_name': u'first_plot', u'file_path': u'tas_Am...
20) nclplot [2013-01-11 14:43:37.748200] first_plot.eps {u'plot_name': u'first_plot', u'file_path': u'tas_Am...
[...]
\end{lstlisting}
It shows just the 10 latest entries, i.e. the 10 latest analyses that were performed. To create more complex queries, check the help:
\begin{lstlisting}
$ freva --history --help
Displays the last 10 entries with a one-line compact description.
The first number you see is the entry id, which you might use to select
single entries.

DATE FORMAT
Dates can be given in "YYYY-MM-DD HH:mm:ss.n" or any less accurate subset of
it. These are all valid: "2012-02-01 10:08:32.1233431", "2012-02-01 10:08:32",
"2012-02-01 10:08", "2012-02-01 10", "2012-02-01", "2012-02", "2012".
These are *NOT*: "01/01/2010", "10:34", "2012-20-01"
Missing values are assumed to be the minimal allowed value. For example:
"2012" == "2012-01-01 00:00:00.0"
Please note that in the shell you need to escape spaces. All these are valid
examples (at least for the bash shell):
freva --history --since=2012-10-1\ 10:35
freva --history --since=2012-10-1" "10:35'

Usage: freva --history [options]

Options:
  -d, --debug       turn on debugging info and show stack trace on exceptions.
  -h, --help        show this help message and exit
  --full_text       If present shows the complete configuration stored
  --return_command  Show freva commands belonging to the history entries
                    instead of the entries themself.
  --limit=N         n is the number of entries to be displayed
  --plugin=NAME     Display only entries from plugin "name"
  --since=DATE      Retrieve entries older than date (see DATE FORMAT)
  --until=DATE      Retrieve entries newer than date (see DATE FORMAT)
  --entry_ids=IDs   Select entries whose ids are in "ids"
                    (e.g. entry_ids=1,2 or entry_ids=5)
\end{lstlisting}
You can view the configuration used at any time and the status of the created files (i.e. whether the files are still there or have been modified):
\begin{lstlisting}
$ freva --history --plugin=pca --limit=1 --full_text
26) pca v3.1.0 [2013-01-14 10:51:26.244553]
Configuration: areaweight=false boots=100 bootstrap=false eigvalscale=false eofs=-1 input=test.nc latname=lat missingvalue=1e+38 normalize=false outputdir=/home/user/evaluation_system/output/pca pcafile=test.nc.pca.tas.nc principals=true projection=test.nc.pro.tas.nc session=1 shiftlats=false testorthog=false threads=7 variable=tas
Output: /home/user/evaluation_system/output/pca/test.nc.pca.tas.nc (deleted)
\end{lstlisting}
The history offers a more direct way to re-run tools. The option return\_command shows the plugin command belonging to the configuration.
Here an example for the tool movieplotter: \begin{lstlisting} freva --history --plugin=movieplotter --limit=1 --return_command \end{lstlisting} It returns: \begin{lstlisting} freva --plugin movieplotter latlon='None' polar='None' work='/home/user/evaluation_system/cache/movieplotter/1387364295586' reverse='False' range_min='None' collage='False' range_max='None' earthball='False' level='0' ntasks='24' input=''/database/data4researchgroup/projectdata/project/product/institute/model/experiment/time_frequency/realm/variable/ensemble/variable_CMORtable_model_experiment_ensemble_startdate-enddate.nc'' loops='0' colortable='ncl_default' animate='True' cacheclear='True' resolution='800' outputdir='/home/user/evaluation_system/output/movieplotter' secperpic='1.0' \end{lstlisting} This is not an handy expression, but very useful. A re-run of the tool in batch shell could be easily performed by \begin{lstlisting} $(freva --history --plugin=movieplotter --limit=1 --return_command) \end{lstlisting} \subsection{Data-Browser --databrowser} All files available on the MiKlip server are scanned and indexed in a special server (SOLR). This allows us to query the server which responds almost immediately. Because of the miklip configuration the first time you call the tool it might take up to a couple of seconds to start. After that normally you should see results within a second. \subsubsection{Help} \begin{lstlisting} freva --databrowser --help The query is of the form key=value. <value> might use *, ? as wildcards or any regular expression encclosed in forward slashes. Depending on your shell and the symbols used, remeber to escape the sequences properly. The safest would be to enclosed those in single quotes. For Example: %s project=baseline1 model=MPI-ESM-LR experiment=/decadal200[0-3]/ time_frequency=*hr variable='/ta|tas|vu/' Usage: freva --databrowser [options] Options: -d, --debug turn on debugging info and show stack trace on exceptions. -h, --help show this help message and exit --multiversion select not only the latest version but all of them --relevant-only show only facets that filter results (i.e. >1 possible values) --batch-size=N Number of files to retrieve --count-facet-values Show the number of files for each values in each facet --attributes retrieve all possible attributes for the current search instead of the files --all-facets retrieve all facets (attributes & values) instead of the files --facet=FACET retrieve these facets (attributes & values) instead of the files \end{lstlisting} \subsubsection{Usage} The databrowser expects a list of attribute=value (or key=value) pairs. There are a few differences and many more options (explained next).\\ Most important is that you don't need to split the search according to the type of data you are searching for. You might as well search for files both on observations, reanalysis and model data all at the same time.\\ Also important is that all searches are made case insensitive (so don't worry about upper or lower casing)\\ You can also search for attributes themselves instead of file paths. For example you can search for the list of variables available that satisfies a certain constraint (e.g. sampled 6hr, from a certain model, etc). 
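As a quick illustration of such an attribute search (the model name below is only a placeholder; use any value known to your data browser), a query of this form lists the variables available at 6-hourly resolution for one model:
\begin{lstlisting}
$ freva --databrowser --facet variable time_frequency=6hr model=mpi-esm-lr
\end{lstlisting}
The general key=value search syntax and the --facet flag are explained in detail below.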
\textbf{Defining the search}
\begin{lstlisting}
freva --databrowser project=baseline1 variable=tas time_frequency=mon
\end{lstlisting}

\textbf{Defining the possible values}

There are many more options for defining a value for a given attribute:
\begin{lstlisting}
Attribute syntax                      Meaning
attribute=value                       Search for files containing exactly that attribute
attribute=val*                        Search for files containing a value for attribute
                                      that starts with the prefix val
attribute=*lue                        Search for files containing a value for attribute
                                      that ends with the suffix lue
attribute=*alu*                       Search for files containing a value for attribute
                                      that has alu somewhere
attribute=/.*alu.*/                   Search for files containing a value for attribute
                                      that matches the given regular expression (yes! you
                                      might use any regular expression to find what you
                                      want. Check the table after this one)
attribute=value1 attribute=value2     Search for files containing either value1 OR value2
                                      for the given attribute (note that's the same
                                      attribute twice!)
attribute1=value1 attribute2=value2   Search for files containing value1 for attribute1
                                      AND value2 for attribute2
attribute_not_=value                  Search for files NOT containing value
attribute_not_=value1 attribute_not_=value2
                                      Search for files containing NEITHER value1 NOR value2
\end{lstlisting}
NOTE: When using *, remember that your shell might give it a different meaning (normally it will try to match files with that name); to turn that off you can escape it with a backslash (\textbackslash) in most shells.

Regular expressions must be given within forward slashes (/) and are matched against the whole value, not just some part of it. Here's a summary (there might be more... check it!):
\begin{lstlisting}
Syntax                Meaning
==Characters==
<any_non_special_character>
                      that character
.                     Any one character
[<any_character>]     Any one character between brackets
[<any_character>-<any_other_character>]
                      Any one character between those characters
                      (e.g. [a-e] is like [abcde])
==Repetitions==
*                     0 or more times
+                     1 or more times
{n}                   exactly n times
{n,}                  at least n times
{n,m}                 from n to m times
RegExA|RegExB         Either RegExA or RegExB
==Some examples==
abc                   exactly "abc"
[abc]                 either "a", "b" or "c"
[abc]{3}              three characters from those given. E.g. "aaa", "bab" or "cab"
[abc]{2,4}            two to four characters from those given. E.g. "aaa", "ab" or "cccc"
[a-z]+[0-9]*          One or more characters followed by zero or more numbers,
                      e.g. "a", "tas", "cfaddbze94"
\end{lstlisting}

\textbf{Searching for metadata}

You might as well want to know about the possible values that an attribute can take after a certain search is done. For this you use the --facet flag (facets are the possible attributes that partition the result set). For example, to see the time frequencies (time resolutions) in which reanalysis data are available, you might issue the following query:
\begin{lstlisting}
$ freva --databrowser --facet time_frequency project=reanalysis
time_frequency: 6hr,day,mon
\end{lstlisting}
You might also ask for more than one single facet by defining the --facet flag multiple times. For example, let's also see a list of variables:
\begin{lstlisting}
$ freva --databrowser --facet time_frequency --facet variable project=reanalysis
variable:cl,clt,evspsbl,hfls,hfss,hur,hus,pr,prc,prsn,prw,ps,psl,rlds,rldscs,rlus,rlut,rlutcs,rsds,rsdscs,rsdt,rsut,rsutcs,sfcwind,ta,tas,tauu,tauv,tro3,ts,ua,uas,va,vas,wap,zg
time_frequency: 6hr,day,mon
\end{lstlisting}
Please note that those are not related, i.e. the values of the time\_frequency facet do not correspond to any particular variable. It is like issuing two different queries.
Also note that you can further define this as usual with a given query. For example check which files are at 6hr frequency: \begin{lstlisting} $ freva --databrowser --facet variable project=reanalysis time_frequency=6hr variable: psl,sfcwind,tas,zg \end{lstlisting} If you want to see how many files would return if you further select that variable (drill down query) you may add the --count-facet-values flag (simply --count will also do): \begin{lstlisting} $ freva --databrowser --count-facet-values --facet variable project=reanalysis time_frequency=6hr variable: psl (7991),sfcwind (33),tas (33),zg (131) \end{lstlisting} This means that there are 7991 files containing the variable psl, 33 for sfcwind, and so on. If you want to check all facets at once you may use the --all-facets flag (don't worry this is still very fast) \begin{lstlisting} $ freva --databrowser --all-facets project=reanalysis time_frequency=6hrcmor_table: 6hrplev realm: atmos data_type: reanalysis institute: ecmwf,jma-criepi,nasa-gmao,ncep-ncar,noaa-cires project: time_frequency: 6hr experiment: 20cr,cfsr,eraint,jra-25,merra,merra_testarea,ncep1,ncep2 variable: psl,sfcwind,tas,zg model: cdas,cfs,geos-5,ifs,jcdas,nomads data_structure: ensemble: r10i1p1,r11i1p1,r12i1p1,r13i1p1,r14i1p1,r15i1p1,r16i1p1,r17i1p1,r18i1p1,r19i1p1,r1i1p1,r20i1p1,r21i1p1,r22i1p1,r23i1p1,r24i1p1,r25i1p1,r26i1p1,r27i1p1,r28i1p1,r29i1p1,r2i1p1,r30i1p1,r31i1p1,r32i1p1,r33i1p1,r34i1p1,r35i1p1,r36i1p1,r37i1p1,r38i1p1,r39i1p1,r3i1p1,r40i1p1,r41i1p1,r42i1p1,r43i1p1,r44i1p1,r45i1p1,r46i1p1,r47i1p1,r48i1p1,r49i1p1,r4i1p1,r50i1p1,r51i1p1,r52i1p1,r53i1p1,r54i1p1,r55i1p1,r56i1p1,r5i1p1,r6i1p1,r7i1p1,r8i1p1,r9i1p1 \end{lstlisting} And again you can also have the --count flag: \begin{lstlisting} $ freva --databrowser --all-facets --count project=reanalysis time_frequency=6hr cmor_table: 6hrplev (8188) realm: atmos (8188) data_type: reanalysis (8188) institute: ecmwf (132),jma-criepi (66),nasa-gmao (99),ncep-ncar (163),noaa-cires (7728) project: time_frequency: 6hr (8188) experiment: 20cr (7728),cfsr (64),eraint (132),jra-25 (66),merra (66),merra_testarea (33),ncep1 (65),ncep2 (34)variable: psl (7991),sfcwind (33),tas (33),zg (131)model: cdas (99),cfs (64),geos-5 (99),ifs (132),jcdas (66),nomads (7728) data_structure: ensemble: r10i1p1 (138),r11i1p1 (138),r12i1p1 (138),r13i1p1 (138),r14i1p1 (138),r15i1p1 (138),r16i1p1 (138),r17i1p1 (138),r18i1p1 (138),r19i1p1 (138),r1i1p1 (598),r20i1p1(138), r21i1p1 (138),r22i1p1 (138),r23i1p1 (138),r24i1p1 (138),r25i1p1 (138),r26i1p1 (138),r27i1p1 (138),r28i1p1 (138),r29i1p1 (138),r2i1p1 (138),r30i1p1 (138),r31i1p1 (138),r32i1p1 (138),r33i1p1 (138),r34i1p1 (138),r35i1p1 (138),r36i1p1 (138),r37i1p1 (138),r38i1p1 (138),r39i1p1 (138),r3i1p1 (138),r40i1p1 (138),r41i1p1 (138),r42i1p1 (138),r43i1p1 (138),r44i1p1 (138),r45i1p1 (138),r46i1p1 (138),r47i1p1 (138),r48i1p1 (138),r49i1p1 (138),r4i1p1 (138),r50i1p1 (138),r51i1p1 (138),r52i1p1 (138),r53i1p1 (138),r54i1p1 (138),r55i1p1 (138),r56i1p1 (138),r5i1p1 (138),r6i1p1 (138),r7i1p1 (138),r8i1p1 (138),r9i1p1 (138) \end{lstlisting} You might have also seen that some facets are not relevant at all as they are not partitioning the resulting data (e.g. see cmor\_table or data\_type). 
You can leave them out by adding the --relevant-only flag:
\begin{lstlisting}
$ freva --databrowser --all-facets --count --relevant-only project=reanalysis time_frequency=6hr
institute: ecmwf (132),jma-criepi (66),nasa-gmao (99),ncep-ncar (163),noaa-cires (7728)
experiment: 20cr (7728),cfsr (64),eraint (132),jra-25 (66),merra (66),merra_testarea (33),ncep1 (65),ncep2 (34)
variable: psl (7991),sfcwind (33),tas (33),zg (131)
model: cdas (99),cfs (64),geos-5 (99),ifs (132),jcdas (66),nomads (7728)
ensemble: r10i1p1 (138),r11i1p1 (138),r12i1p1 (138),r13i1p1 (138),r14i1p1 (138),r15i1p1 (138),r16i1p1 (138),r17i1p1 (138),r18i1p1 (138),r19i1p1 (138),r1i1p1 (598),r20i1p1 (138),r21i1p1 (138),r22i1p1 (138),r23i1p1 (138),r24i1p1 (138),r25i1p1 (138),r26i1p1 (138),r27i1p1 (138),r28i1p1 (138),r29i1p1 (138),r2i1p1 (138),r30i1p1 (138),r31i1p1 (138),r32i1p1 (138),r33i1p1 (138),r34i1p1 (138),r35i1p1 (138),r36i1p1 (138),r37i1p1 (138),r38i1p1 (138),r39i1p1 (138),r3i1p1 (138),r40i1p1 (138),r41i1p1 (138),r42i1p1 (138),r43i1p1 (138),r44i1p1 (138),r45i1p1 (138),r46i1p1 (138),r47i1p1 (138),r48i1p1 (138),r49i1p1 (138),r4i1p1 (138),r50i1p1 (138),r51i1p1 (138),r52i1p1 (138),r53i1p1 (138),r54i1p1 (138),r55i1p1 (138),r56i1p1 (138),r5i1p1 (138),r6i1p1 (138),r7i1p1 (138),r8i1p1 (138),r9i1p1 (138)
\end{lstlisting}
If you try to retrieve all variables stored (remember there are over 2,100,000 files!) you'll notice an ellipsis (...) at the end of the list:
\begin{lstlisting}
$ freva --databrowser --facet variable
variable: abs550aer,ageice,agessc,albisccp,arag,areacella,areacello,bacc,baresoilfrac,basin,bddtalk,bddtdic,bddtdife,bddtdin,bddtdip,bddtdisi,bfe,bmelt,bsi,burntarea,c3pftfrac,c4pftfrac,calc,ccb,cct,ccwd,cdnc,cfad2lidarsr532,cfaddbze94,cfadlidarsr532,cfc11,cfc113global,cfc11global,cfc12global,ch4,ch4global,chl,chlcalc,chldiat,chldiaz,chlmisc,chlpico,ci,cl,clc,clcalipso,clcalipso2,clccalipso,cldnci,cldncl,cldnvi,cleaf,clhcalipso,cli,clic,clis,clisccp,clitter,clitterabove,clitterbelow,clivi,cllcalipso,clmcalipso,clrcalipso,cls,clt,cltc,cltcalipso,cltisccp,cltnobs,cltstddev,clw,clwc,clws,clwvi,cmisc,co2,co2mass,co3,co3satarag,co3satcalc,concaerh2o,concbb,concbc,conccn,concdms,concdust,concnh4,concno3,concoa,concpoa,concso2,concso4,concsoa,concss,cproduct,croot,cropfrac,csoil,csoilfast...
\end{lstlisting}
This means there are more results than those being shown here. We limit the results to 100 for usability's sake. If you still think this is a bug instead of a terrific feature, then you might use a special search word, facet.limit, to change this. That's the number of results that will be retrieved. Setting it to -1 retrieves just everything... be aware that this may cause some problems if you don't know what you are doing (well, sometimes it might also cause problems if you do... so use with discretion).
\begin{lstlisting}
$ freva --databrowser --facet variable facet.limit=-1
variable: alot!
\end{lstlisting}
By the way, do you want to count them? Those are 619 variables!
\begin{lstlisting}
$ freva --databrowser --facet variable facet.limit=-1 | tr ',' '\n' | wc -l
619
\end{lstlisting}

\textbf{Bash auto-completion}

And if that's not awesome enough (I know it never is), then try the bash auto-completion. If you are using bash, everything is already set up when you issued the 'module load freva' command. Whenever you hit tab, the word will be completed to the longest unique string that matches your previous input. A second tab will bring up a list of all possible completions after that.
For example (<TAB> denotes pressing the tab key):
\begin{lstlisting}
freva --databrowser project=base<TAB>
\end{lstlisting}
results in
\begin{lstlisting}
freva --databrowser project=baseline
\end{lstlisting}
Now pressing <TAB> again will show all other possibilities:
\begin{lstlisting}
$ freva --databrowser project=baseline<TAB>
baseline0  baseline1
\end{lstlisting}
But flags are not the only thing being completed; it also works on attributes:
\begin{lstlisting}
$ freva --databrowser <TAB><TAB>
cmor_table=  ensemble=    institute=  project=  time_frequency=
data_type=   experiment=  model=      realm=    variable=
\end{lstlisting}
... and of course values:
\begin{lstlisting}
$ freva --databrowser institute=m<TAB><TAB>
miroc  mohc  mpi-m  mri
\end{lstlisting}
And (yes! That wasn't all) this is also query aware:
\begin{lstlisting}
$ freva --databrowser institute=<TAB><TAB>
bcc  csiro-bom  and so on
\end{lstlisting}
\begin{lstlisting}
$ freva --databrowser project=reanalysis institute=<TAB><TAB>
ecmwf  jma-criepi  nasa-gmao  ncep-ncar  noaa-cires
\end{lstlisting}
Note that if you mix flags this might not work as intended (or not at all).

\subsection{--crawl\_my\_data}

By default it crawls the user's ``projectdata'' directory: \\
/research/database/data4project/projectdata/user-account

\subsubsection*{Help}
\begin{lstlisting}
freva --crawl_my_data --help
Use this command to update your projectdata.

Usage: freva --crawl_my_data [options]

Options:
  -d, --debug  turn on debugging info and show stack trace on exceptions.
  -h, --help   show this help message and exit
  --path=PATH  crawl the given directory
\end{lstlisting}

\subsubsection*{Usage}
\begin{lstlisting}
freva --crawl_my_data
\end{lstlisting}
would crawl all data you have in /research/database/data4project/projectdata/user-account. When you have a lot of data in your directory, it can be worth crawling just a sub-directory. This is much faster when there is less data.

EXAMPLE: You've put in a new decadal experiment and just want to add this:
\begin{lstlisting}
freva --crawl_my_data --path /research/database/data4project/projectdata/user-account/output/MPI-M/MPI-ESM-LR/dec08o2000/
\end{lstlisting}

\subsection{--esgf}

The search syntax is defined here: http://www.esgf.org/wiki/ESGF\_Search\_REST\_API. It has been simplified to be used from the command line and to resemble freva --databrowser as closely as possible. But the two commands rely on different backends, which have different query possibilities.

\subsubsection*{Help}
\begin{lstlisting}
The query is of the form key=value. the key might be repeated and/or negated
with the '_not_' suffix (e.g. model_not_=MPI-ESM-LR experiment=decadal2000
experiment=decadal2001)

Simple query:
freva --esgf model=MPI-ESM-LR experiment=decadal2001 variable=tas distrib=False

The search API is described here: http://www.esgf.org/wiki/ESGF_Search_REST_API
Some special query keys:
distrib: (*true*, false) search globally or only at DKRZ (MPI data and replicas)
latest : (true, false, *unset*) search for the latest version, older ones or all.
replica: (true, false, *unset*) search only for replicas, non-replicas, or all.

Usage: freva --esgf [options]

Options:
  -d, --debug          turn on debugging info and show stack trace on
                       exceptions.
  -h, --help           show this help message and exit
  --datasets           List the name of the datasets instead of showing the
                       urls.
  --show-facet=FACET   <list> List all values for the given facet (might be
                       defined multiple times).
The results show the possible values of the selected facet according to the given constraints and the number of *datasets* (not files) that selecting such value as a constraint will result (faceted search) --opendap List the name of the datasets instead of showing the urls. --gridftp Show Opendap endpoints instead of the http default ones (or skip them if none found) --download-script=FILE <file> Download wget_script for getting the files instead of displaying anything (only http) --query=QUERY <list> Display results from <list> queried fields \end{lstlisting} \subsubsection*{Usage} If you need some files: first check if they are there and how many they are: \begin{lstlisting} $ freva --esgf project=CMIP5 experiment=decadal{1960..1965} variable=tas distrib=false latest=true | wc -l 278 \end{lstlisting} You can check those urls by just not piping the result to wc (word count) \begin{lstlisting} $ freva --esgf project=CMIP5 experiment=decadal{1960..1965} variable=tas distrib=false latest=true http://cmip3.dkrz.de/thredds/fileServer/cmip5/output1/CCCma/CanCM4/decadal1965/day/atmos/day/r10i1p1/v20120531/tas/tas_day_CanCM4_decadal1965_r10i1p1_19660101-19751231.nc http://cmip3.dkrz.de/thredds/fileServer/cmip5/output1/CCCma/CanCM4/decadal1965/day/atmos/day/r10i2p1/v20120531/tas/tas_day_CanCM4_decadal1965_r10i2p1_19660101-19751231.nc ... \end{lstlisting} And you can get the wget script, a bash script written around wget to simplify data download using this: \begin{lstlisting} $ freva --esgf --download-script /tmp/scrip.wget project=CMIP5 experiment=decadal{1960..1965} variable=tas distrib=false latest=true Download script successfully saved to /tmp/scrip.wget \end{lstlisting} By the way, the search looked for all files stored locally at DKRZ (distrib=false) holding the latest version (latest=true) of the variable tas (variable=tas) for the experiments decadal1960 to decadal1965 (this is a bash construct and not part of the search api!) \end{document}
{ "alphanum_fraction": 0.7404938779, "avg_line_length": 50.0102790015, "ext": "tex", "hexsha": "d7803a82fef40921a3e0094067983b89b6dce922", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "53c6d0951a8dcfe985c8f33cbb3fbac7e8a3db04", "max_forks_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_forks_repo_name": "FREVA-CLINT/Freva", "max_forks_repo_path": "docu/guides/bug.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "53c6d0951a8dcfe985c8f33cbb3fbac7e8a3db04", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_issues_repo_name": "FREVA-CLINT/Freva", "max_issues_repo_path": "docu/guides/bug.tex", "max_line_length": 795, "max_stars_count": 2, "max_stars_repo_head_hexsha": "53c6d0951a8dcfe985c8f33cbb3fbac7e8a3db04", "max_stars_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_stars_repo_name": "FREVA-CLINT/Freva", "max_stars_repo_path": "docu/guides/bug.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-18T03:35:08.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-12T18:18:48.000Z", "num_tokens": 10204, "size": 34057 }
\documentclass[12pt]{article} % 12-point font \usepackage[margin=1in]{geometry} % set page to 1-inch margins \usepackage{bm,bbm} % for math \usepackage{amsmath} % for math \usepackage{amssymb} % like \Rightarrow \setlength\parindent{0pt} % Suppresses the indentation of new paragraphs. % Big display \newcommand{\ds}{ \displaystyle } % Parenthesis \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\p}[1]{\left(#1\right)} \newcommand{\bk}[1]{\left[#1\right]} \newcommand{\bc}[1]{ \left\{#1\right\} } \newcommand{\abs}[1]{ \left|#1\right| } % Derivatives \newcommand{\df}[2]{ \frac{d#1}{d#2} } \newcommand{\ddf}[2]{ \frac{d^2#1}{d{#2}^2} } \newcommand{\pd}[2]{ \frac{\partial#1}{\partial#2} } \newcommand{\pdd}[2]{\frac{\partial^2#1}{\partial{#2}^2} } % Distributions \newcommand{\Normal}{\text{Normal}} \newcommand{\Beta}{\text{Beta}} \newcommand{\G}{\text{Gamma}} \newcommand{\InvGamma}{\text{Inv-Gamma}} \newcommand{\Uniform}{\text{Uniform}} \newcommand{\Dirichlet}{\text{Dirichlet}} \newcommand{\LogNormal}{\text{LogNormal}} % Statistics \newcommand{\E}{ \text{E} } \newcommand{\iid}{\overset{iid}{\sim}} \newcommand{\ind}{\overset{ind}{\sim}} \newcommand{\true}{\text{TRUE}} \usepackage{color} \newcommand{\alert}[1]{\textcolor{red}{#1}} % Graphics \usepackage{graphicx} % for figures \usepackage{float} % Put figure exactly where I want [H] % Uncomment if using bibliography % Bibliography % \usepackage{natbib} % \bibliographystyle{plainnat} % Adds settings for hyperlinks. (Mainly for table of contents.) \usepackage{hyperref} \hypersetup{ pdfborder={0 0 0} % removes red box from links } % Title Settings \title{Simulation Study 1} \author{Arthur Lui} \date{\today} % \date{} to set date to empty % MAIN % \begin{document} \maketitle \section{Simulation Setup}\label{sec:sim-setup} We assessed our model through the following simulation studies. We first generated four data sets (I, II, III, IV) according to our model. In each four scenarios, % the true mixture locations were $\bm{\mu}^\true=(-1, 1, 3)$, the true mixture scales were $\bm{\sigma}^\true=(0.7, 0.7, 0.7)$, the true mixture degrees of freedom were $\bm{\nu}^\true=(7, 5, 10)$, and the true mixture skews were $\bm{\phi}^\true=(-5, -3, 0)$. % In scenario I, $\gamma_C^\true=0.3$, $\gamma_T^\true=0.2$, $\bm\eta_C^\true=(0.5, 0.5, 0)$, and $\bm\eta_T^\true=(0.5,0.4,0.1)$. Implicitly, $\beta^\true=1$. In scenario II, $\gamma_C^\true=0.3$, $\gamma_T^\true=0.3$, $\bm\eta_C^\true=(0.5, 0.5, 0)$, and $\bm\eta_T^\true=(0.5,0.4,0.1)$. Implicitly, $\beta^\true=1$. In scenario III, $\gamma_C^\true=0.3$, $\gamma_T^\true=0.3$, $\bm\eta_C^\true=(0.5, 0.5, 0)$, and $\bm\eta_T^\true=(0.5,0.45,0.05)$. Implicitly, $\beta^\true=1$. In scenario IV, $\gamma_C^\true=0.3$, $\gamma_T^\true=0.3$, $\bm\eta_C^\true=(0.5, 0.5, 0)$, and $\bm\eta_T^\true=(0.5,0.5,0)$. Implicitly, $\beta^\true=0$. % In each scenario, $N_i=1000$. Table~\ref{tab:sim-truth} summarizes the simulation truth for the model parameters. 
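Concretely, the data-generating mechanism implied by these parameters can be sketched as follows (this display is only a schematic summary of the quantities listed above; in particular, the exact skew-$t$ parametrization and the role of $\beta$ in linking the two samples are left to the model description):
\[
y_{i,n} =
\begin{cases}
0 & \text{with probability } \gamma_i^\true,\\
v_{i,n} & \text{otherwise,}
\end{cases}
\qquad
v_{i,n} \sim \sum_{k=1}^{K} \eta_{i,k}^\true \,
\text{skew-}t\p{\mu_k^\true, \sigma_k^\true, \nu_k^\true, \phi_k^\true},
\]
for samples $i \in \{C, T\}$, observations $n = 1, \ldots, N_i$, and $K = 3$ mixture components.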
\begin{table} \centering \begin{tabular}{|c|ccccccc|} \hline Scenario & $\gamma_C^\true$ & $\gamma_T^\true$ & $\bm\eta_C^\true$ & $\bm\eta_T^\true$ & $\beta^\true$ & $\hat\beta$ & KS p-value \\ \hline I & 0.3 & 0.2 & (0.5, 0.5, 0) & (0.5, 0.40, 0.10) & 1 & 1.000 & $0.00003$ \\ II & 0.3 & 0.3 & (0.5, 0.5, 0) & (0.5, 0.40, 0.10) & 1 & 1.000 & $0.01123$ \\ III & 0.3 & 0.3 & (0.5, 0.5, 0) & (0.5, 0.45, 0.05) & 1 & 1.000 & $0.28775$ \\ IV & 0.3 & 0.3 & (0.5, 0.5, 0) & (0.5, 0.50, 0.00) & 0 & 0.251 & $0.64755$ \\ \hline \end{tabular} \caption{Simulation truth under various scenarios. Posterior mean of $\beta$ is included before the right-most column. The right-most column is the p-value of under the two-sample Kolmogorov-Smirnov test.} \label{tab:sim-truth} \end{table} \section{Simulation Results}\label{sec:sim-results} The following priors were used in this analysis. First, we set $K=3$ and $p=0.5$. Then $\gamma_i\sim\Beta(1, 1)$, $\bm\eta_i\sim\Dirichlet_K(1/K)$, $\mu_k\sim\Normal(\bar{\mu}, s_\mu^2)$, $\omega_k\sim\InvGamma(0.1, 0.1)$, $\nu_k\sim\LogNormal(1.6, 0.4)$, $\psi_k\sim\Normal(-1, 1)$, where, respectively, $\bar{\mu}$ and $s_\mu$ are the empirical mean and standard deviation of the data for which $y_{i,n} > 0$. Posterior inference was made via Gibbs sampling. The initial 2000 MCMC samples were discarded as burn-in, and the subsequent 5000 samples were kept for posterior inference. % % In addition, updating the parameters $\bm\zeta, \bm v, \bm\mu, \bm\omega, % \bm\nu$, and $\bm\psi$ multiple times for each update of the other parameters % helped with mixing. Thus, 10 updates for those parameters were done during % each iteration of the MCMC. % The inference speed was approximately 11 iterations per second. Figure~\ref{fig:sim-postdens-data-kde} summarizes the posterior densities for the positive values of $y_{i,n}$. The dashed lines are kernel density estimates of the data, and the shaded regions are the 95\% credible intervals for the densities. Note that the intervals match the data closely. % Also, in scenario III, where $\beta^\true=0$, the posterior mean of $\beta$ % was also 0. Thus, $\gamma_T$ and $\bm\eta_T$ were simply samples from the % prior. Thus, the posterior density for sample T was not included here. % Since the KDE is only an approximation for the density of the observed data, we have included Figure~\ref{fig:sim-postdens-data-true-den}, which replaces the KDE of the observed portion of the simulated data with the actual pdf of the data-generating mechanism. The graphs more clearly show that the simulation truth is well captured by this model. % Figure~\ref{fig:sim-gammas} shows box plots of the posterior distribution of $\gamma_i$ for the different scenarios. The circles represent the proportion of zeros in the simulated data. The posterior distributions easily capture the true values of $\gamma_i$. % Note that Table~\ref{tab:sim-truth} also includes the posterior mean of $\beta$, denoted $\hat\beta$. Note that when $\beta^\true=0$, $\hat\beta=0.5$. As the distributions of the two samples become increasingly different, $\hat\beta$ increases. \begin{figure}[t!] 
\centering
\begin{tabular}{cc}
\includegraphics[scale=.45]{results/scenario1/img/postpred.pdf} &
\includegraphics[scale=.45]{results/scenario2/img/postpred.pdf} \\
(a) Scenario I & (b) Scenario II \\
\includegraphics[scale=.45]{results/scenario3/img/postpred.pdf} &
\includegraphics[scale=.45]{results/scenario4/img/postpred.pdf} \\
(c) Scenario III & (d) Scenario IV
\end{tabular}
\caption{Posterior density in each simulation scenario for observed data
($y_{i,n}>0$). Dashed lines are the kernel density estimates of the simulated
data. The shaded regions are 95\% credible intervals of the density.}
\label{fig:sim-postdens-data-kde}
\end{figure}

\begin{figure}[t!]
\centering
\begin{tabular}{cc}
\includegraphics[scale=.45]{results/scenario1/img/postpred-true-data-density.pdf} &
\includegraphics[scale=.45]{results/scenario2/img/postpred-true-data-density.pdf} \\
(a) Scenario I & (b) Scenario II \\
\includegraphics[scale=.45]{results/scenario3/img/postpred-true-data-density.pdf} &
\includegraphics[scale=.45]{results/scenario4/img/postpred-true-data-density.pdf} \\
(c) Scenario III & (d) Scenario IV
\end{tabular}
\caption{Posterior density in each simulation scenario for observed data
($y_{i,n}>0$). Dashed lines are the true densities of the data-generating
mechanism (rather than kernel density estimates). The shaded regions are 95\%
credible intervals of the density.}
\label{fig:sim-postdens-data-true-den}
\end{figure}

\begin{figure}[t!]
\centering
\begin{tabular}{cc}
(a) Scenario I & (b) Scenario II \\
\includegraphics[scale=.45]{results/scenario1/img/gammas.pdf} &
\includegraphics[scale=.45]{results/scenario2/img/gammas.pdf} \\
(c) Scenario III & (d) Scenario IV \\
\includegraphics[scale=.45]{results/scenario3/img/gammas.pdf} &
\includegraphics[scale=.45]{results/scenario4/img/gammas.pdf}
\end{tabular}
\caption{Box plots of the posterior distribution of $\gamma_C$ and
$\gamma_T\mid\beta=1$ for the different simulated datasets. Circles represent
the proportion of zeros in each dataset. Blue for sample C, and red for
sample T.}
\label{fig:sim-gammas}
\end{figure}

% Uncomment if using bibliography:
% \bibliography{bib}

\end{document}
{ "alphanum_fraction": 0.6936265815, "avg_line_length": 41.2536585366, "ext": "tex", "hexsha": "b190f14897addf0a3fe8dc962f0fa9a1618a34ca", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "luiarthur/CytofDensityEstimation", "max_forks_repo_path": "runs/simstudy/sim3/tex/simstudy.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_issues_repo_issues_event_max_datetime": "2020-12-07T07:05:00.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-12T18:10:36.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "luiarthur/CytofDensityEstimation", "max_issues_repo_path": "runs/simstudy/sim3/tex/simstudy.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "luiarthur/CytofDensityEstimation", "max_stars_repo_path": "runs/simstudy/sim3/tex/simstudy.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2822, "size": 8457 }
\documentclass[a4paper]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{imakeidx} \usepackage[hidelinks]{hyperref} %% A screen friendly geometry: \usepackage[paper=a5paper,scale=0.9]{geometry} %% PPL Setup \newcommand{\assign}{\mathrel{\mathop:}=} \newcommand{\concat}{\mathrel{+\!+}} \newcommand{\f}[1]{\mathsf{#1}} \newcommand{\true}{\top} \newcommand{\false}{\bot} \newcommand{\imp}{\rightarrow} \newcommand{\revimp}{\leftarrow} \newcommand{\equi}{\leftrightarrow} \newcommand{\entails}{\models} \newcommand{\eqdef}{\; \raisebox{-0.1ex}[0mm]{$ \stackrel{\raisebox{-0.2ex}{\tiny \textnormal{def}}}{=} $}\; } \newcommand{\iffdef}{\n{iff}_{\mbox{\scriptsize \textnormal{def}}}} \newcommand{\pplmacro}[1]{\mathit{#1}} \newcommand{\ppldefmacro}[1]{\mathit{#1}} \newcommand{\pplparam}[1]{\mathit{#1}} \newcommand{\pplparamidx}[2]{\mathit{#1}_{#2}} \newcommand{\pplparamplain}[1]{#1} \newcommand{\pplparamplainidx}[2]{#1_{#2}} \newcommand{\pplparamsup}[2]{\mathit{#1}^{#2}} \newcommand{\pplparamsupidx}[3]{\mathit{#1}^{#2}_{#3}} \newcommand{\pplparamplainsup}[2]{#1^{#2}} \newcommand{\pplparamplainsupidx}[3]{#1^{#2}_{#3}} \newcommand{\pplparamnum}[1]{\mathit{X}_{#1}} %% %% We use @startsection just to obtain reduced vertical spacing above %% macro headers which are immediately after other headers, e.g. of sections %% \makeatletter% \newcounter{entry}% \newcommand{\entrymark}[1]{}% \newcommand\entryhead{% \@startsection{entry}{10}{\z@}{12pt plus 2pt minus 2pt}{0pt}{}}% \makeatother \newcommand{\pplkbBefore} {\entryhead*{}% \setlength{\arraycolsep}{0pt}% \pagebreak[0]% \begin{samepage}% \noindent% \rule[0.5pt]{\textwidth}{2pt}\\% \noindent} % \newcommand{\pplkbDefType}[1]{\hspace{\fill}{{[}#1{]}\\}} \newcommand{\pplkbBetween} {\setlength{\arraycolsep}{3pt}% \\\rule[3pt]{\textwidth}{1pt}% \par\nopagebreak\noindent Defined as\begin{center}} \newcommand{\pplkbAfter}{\end{center}\end{samepage}\noindent} \newcommand{\pplkbBodyBefore}{\par\noindent where\begin{center}} \newcommand{\pplkbBodyAfter}{\end{center}} \newcommand{\pplkbFreePredicates}[1]{\f{free\_predicates}(#1)} % \newcommand{\pplkbRenameFreeOccurrences}[3]{\f{rename\_free\_occurrences}(#1,#2,#3)} \newcommand{\pplIsValid}[1]{\noindent This formula is valid: $#1$\par} \newcommand{\pplIsNotValid}[1]{\noindent This formula is not valid: $#1$\par} \newcommand{\pplFailedToValidate}[1]{\noindent Failed to validate this formula: $#1$\par} \newcounter{def} \makeindex \begin{document} % % Doc at position 0 % \title{Conservative and Definitional Extensions} \date{Revision: May 10, 2016; Rendered: \today} \maketitle \noindent Conservative and definitional extension (see, e.g., \cite{hodges:shorter}). Actually, a generalization of definitional extension is presented here. Formalized with the \href{http://cs.christophwernhard.com/pie/}{\textit{PIE}} system. % % Doc at position 470 % \section{Conservative Extension} Formula $G$ is a \textit{conservative extension} of formula $F$ if and only if the following biconditional is valid. The right-to-left direction can be expressed as first-order validity, since second-order quantification is only in the antecedent and only existential there. 
The left-to-right direction in % % Statement at position 860 % \pplkbBefore \index{conservative_extension(F,G)@$\ppldefmacro{conservative\_extension}(\pplparamplain{F},\pplparamplain{G})$}$\begin{array}{lllll} \ppldefmacro{conservative\_extension}(\pplparamplain{F},\pplparamplain{G}) \end{array} $\pplkbBetween $\begin{array}{lllll} \pplparamplain{F} \equi \pplmacro{proj}(\pplparamplain{S},\pplparamplain{G}), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplain{S} \assign \pplkbFreePredicates{\pplparamplain{F}}. \end{array}$\pplkbBodyAfter % % Doc at position 947 % \subsection{Examples for Conservative Extensions} % % Statement at position 1005 % \pplkbBefore \index{f1@$\ppldefmacro{f_{1}}$}$\begin{array}{lllll} \ppldefmacro{f_{1}} \end{array} $\pplkbBetween $\begin{array}{lllll} \mathsf{a} \imp \mathsf{b}. \end{array} $\pplkbAfter % % Statement at position 1027 % \pplkbBefore \index{ex_ce_1@$\ppldefmacro{ex\_ce_{1}}$}$\begin{array}{lllll} \ppldefmacro{ex\_ce_{1}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{conservative\_extension}(\pplmacro{f_{1}},(\pplmacro{f_{1}} \land (\mathsf{p} \equi \mathsf{a}))). \end{array} $\pplkbAfter \pplIsValid{\pplmacro{ex\_ce_{1}}.} % % Statement at position 1168 % \pplkbBefore \index{ex_ce_2@$\ppldefmacro{ex\_ce_{2}}$}$\begin{array}{lllll} \ppldefmacro{ex\_ce_{2}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{conservative\_extension}(\pplmacro{f_{1}},(\pplmacro{f_{1}} \land (\mathsf{p} \revimp \mathsf{a}))). \end{array} $\pplkbAfter \pplIsValid{\pplmacro{ex\_ce_{2}}.} % % Doc at position 1308 % \subsection{Counterexamples for Conservative Extensions} % % Statement at position 1373 % \pplkbBefore \index{ex_ce_3@$\ppldefmacro{ex\_ce_{3}}$}$\begin{array}{lllll} \ppldefmacro{ex\_ce_{3}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{conservative\_extension}(\pplmacro{f_{1}},(\pplmacro{f_{1}} \land (\mathsf{b} \imp \mathsf{p}) \land (\mathsf{p} \imp \mathsf{a}))). \end{array} $\pplkbAfter \pplFailedToValidate{\pplmacro{ex\_ce_{3}}.} % % Statement at position 1523 % \pplkbBefore \index{def_extx(F,G)@$\ppldefmacro{def\_extx}(\pplparamplain{F},\pplparamplain{G})$}$\begin{array}{lllll} \ppldefmacro{def\_extx}(\pplparamplain{F},\pplparamplain{G}) \end{array} $\pplkbBetween $\begin{array}{lllll} \mathsf{predicate\_definiens}(\pplparamplain{P},(\pplparamplain{F},\pplparamplain{G})), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplainidx{S}{F} \assign \pplkbFreePredicates{\pplparamplain{F}},\\ \pplparamplainidx{S}{G} \assign \pplkbFreePredicates{\pplparamplain{G}},\\ \pplparamplainidx{S}{X} \assign \pplparamplainidx{S}{G} \setminus \pplparamplainidx{S}{F},\\ \mathrm{singleton\_to\_member(S\_X,P)}. \end{array}$\pplkbBodyAfter % % Doc at position 1788 % \section{Implicit Definitional Extensions} We define the following concept: Formula $G$ is an \textit{implicit definitional extension} of formula $F$ by unary predicate $p$ iff \begin{enumerate} \item $p$ does not occur in $F$. \item There exists a formula $Dx$ with no occurrences of $p$ and with no bound occurrences of $x$ such that $G \entails \forall x\, px \equi Dx$. \item \label{item-ide-ce} $F \equiv \exists p\, G$. \end{enumerate} That property can be verified with just first-order reasoning: $Dx$ is a definiens of $p$ that can be computed by interpolation. The right-to left direction of the conservative extension property, condition~(\ref{item-ide-ce}), can generally be expressed as first-order validity. 
Also the left-to-right condition can be expressed as first-order validity, as shown by the following equivalences: \[ \begin{array}{r@{\hspace{1em}}l} & F \entails \exists p\, G[p]\\ \mathrm{ iff } & F \entails \exists p\, G[p] \land \forall x\, px \equi Dx\\ \mathrm{ iff } & F \entails \exists p\, G[D] \land \forall x\, px \equi Dx\\ \mathrm{ iff } & F \entails G[D], \end{array} \] where $G[p] = G$ and $G[D]$ stands for $G$ with all occurrences of $p$ replaced by $Dx$ with $x$ instantiated to the argument of $p$ at the respective occurrence. The follwing entailment is another equivvalent way to express the above entailments. It is first-order expressible and might be more convenient since the replacement of the occurrences of $p$ does not have to be explicitly performed: \[F \entails \forall p\, \lnot (\forall x\, px \equi Dx) \lor G[p].\] % % Statement at position 3378 % \pplkbBefore \index{predicate_definiens_xyz(P,F)@$\ppldefmacro{predicate\_definiens\_xyz}(\pplparamplain{P},\pplparamplain{F})$}$\begin{array}{lllll} \ppldefmacro{predicate\_definiens\_xyz}(\pplparamplain{P},\pplparamplain{F}) \end{array} $\pplkbBetween $\begin{array}{lllll} \exists \pplparamplain{P} \, (\pplparamplain{F} \land \pplparamplainidx{P}{X}) \imp \lnot \exists \pplparamplain{P} \, (\pplparamplain{F} \land \lnot \pplparamplainidx{P}{X}), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplain{N} \assign \mathrm{arity\ of }\; \pplparamplain{P}\; \mathrm{ in }\; \pplparamplain{F},\\ \pplparamplain{X} \assign x_1,\ldots,x_{\pplparamplain{N}},\\ \pplparamplainidx{P}{X} \assign \pplparamplain{P}(\pplparamplain{X}). \end{array}$\pplkbBodyAfter % % Doc at position 3534 % A version of predicate\_definiens with fixed arguments (as obtained by mac\_make\_args). It is assumed that these are not used as constants elsewhere. % % Doc at position 3693 % \bigskip The following predicate implements the sketched method for verifying the implicit definitional extension property by means of first-order reasoning. The formula arguments are passed to the ppl\_ predicates that perform macro expansion. The predicate succeeds iff the property holds and returns a definiens for the argument predicate as binding of $D$. {\small \begin{verbatim} implicit_definitional_extension(F, G, P, D) :- ppl_valid((ex2(P, G)->F), [prover=cm, printing=false]), last_ppl_result(true), ppl_ipol(predicate_definiens_xyz(P, G), [prover=cm, printing=false]), last_ppl_result(D), mac_get_arity(P, G, N), mac_make_args(N, X), mac_make_atom(P, X, P_X), ppl_valid( (F -> all2(p, (all(X, (P_X<->D))->G))), [prover=cm, printing=false]), last_ppl_result(true). \end{verbatim}} % % Statement at position 4531 % \pplkbBefore \index{f2@$\ppldefmacro{f_{2}}$}$\begin{array}{lllll} \ppldefmacro{f_{2}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{f_{1}} \land (\mathsf{p} \imp \mathsf{a}) \land (\mathsf{a} \land \mathsf{b} \imp \mathsf{p}). \end{array} $\pplkbAfter % % Statement at position 4570 % \pplkbBefore \index{f3@$\ppldefmacro{f_{3}}$}$\begin{array}{lllll} \ppldefmacro{f_{3}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{f_{1}} \land (\mathsf{p} \imp \mathsf{a}). 
\end{array} $\pplkbAfter % % Doc at position 4598 % Both formulas $f_2$ and $f_3$ are conservative extensions of $f_1$: \pplIsValid{\pplmacro{conservative\_extension}(\pplmacro{f_{1}},\pplmacro{f_{2}}).} % % Doc at position 4784 % \pplIsValid{\pplmacro{conservative\_extension}(\pplmacro{f_{1}},\pplmacro{f_{3}}).} % % Doc at position 4900 % \bigskip We can test the predicate \texttt{implicit\_definitional\_extension} with these calls: \begin{verbatim} ?- implicit_definitional_extension(f1, f2, p, D). % succeeds ?- implicit_definitional_extension(f1, f3, p, D). % fails \end{verbatim} Only formula $f_2$ but not $f_3$ is an implicit definitional extension of $f_1$. The following formula is a computed definiens $D$ for \begin{center} \texttt{implicit\_definitional\_extension(f1, f2, p, D)}: \end{center} \[\begin{array}{lllll} \mathsf{a} \land \mathsf{b}. \end{array} \] % % Doc at position 5465 % \bibliographystyle{alpha} \bibliography{bibscratch03} \printindex \end{document}
{ "alphanum_fraction": 0.7157401568, "avg_line_length": 31.7114285714, "ext": "tex", "hexsha": "2b8cea3b90143d07768594d2cac5735a5f9de0fa", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2017-10-08T15:13:50.000Z", "max_forks_repo_forks_event_min_datetime": "2017-10-08T15:13:50.000Z", "max_forks_repo_head_hexsha": "679789ecda03c586f02f642b38e614a2f925720d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "logicmoo/logicmoo_base", "max_forks_repo_path": "prolog/logicmoo/circ/pie/scratch/scratch_extension_out.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "679789ecda03c586f02f642b38e614a2f925720d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "logicmoo/logicmoo_base", "max_issues_repo_path": "prolog/logicmoo/circ/pie/scratch/scratch_extension_out.tex", "max_line_length": 180, "max_stars_count": 13, "max_stars_repo_head_hexsha": "679789ecda03c586f02f642b38e614a2f925720d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TeamSPoon/logicmoo_base", "max_stars_repo_path": "prolog/logicmoo/circ/pie/scratch/scratch_extension_out.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-18T01:17:10.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-03T03:18:53.000Z", "num_tokens": 3937, "size": 11099 }
%! Author = ahmedhassanien
%! Date = 5/2/20

% Preamble
\documentclass[aspectratio=169]{beamer}
\usetheme[width=2cm]{Hannover}

% Packages
\usepackage[]{theme/beamercolorthemeowl}
\usepackage{fontawesome}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric}
\usetikzlibrary{arrows.meta,arrows}
\tikzset{
    invisible/.style = {opacity=0},
    visible on/.style = {alt={#1{}{invisible}}},
    alt/.code args = {<#1>#2#3}{\alt<#1>{\pgfkeysalso{#2}}{\pgfkeysalso{#3}}} % \pgfkeysalso doesn't change the path.\input{common/front_page}
}

\title[Scala Sandbox]{Scala Sandbox}
\subtitle{\textit{Scala SDLC and template}}

% Document
\begin{document}
    \maketitle
    %----------------------------------------------------------------------------------------------------------------------%
    \section{Introduction}\label{sec:introduction}
    %----------------------------------------------------------------------------------------------------------------------%
    \subsection{Goals}\label{subsec:goals}
    \begin{frame}{Scala Sandbox Goals}
        \begin{itemize}[<+- | alert@+>]
            \item Create a Scala SDLC.
            \item Simplify Scala project bootstrapping.
            \item Define a releasing strategy.
        \end{itemize}
    \end{frame}

    \subsection{Notes}\label{subsec:notes}
    \begin{frame}{Notes}
        \begin{itemize}[<+- | alert@+>]
            \item Maven is an example of a build management tool. A similar SDLC is achievable by using other means.
            \item Scala is an example, and we don't expect anyone to be a Scala expert to follow this session. We are writing simple functions for the demo.
            \item We introduce bugs on purpose to show you the detailed steps to build the project, but we will fix them together.
            \item This video focuses on local settings. In the next videos, we will show how to configure and release this project using an automated CI/CD process with different tools.
        \end{itemize}
    \end{frame}

    \subsection{Final Takeaways}\label{subsec:final-takeaways}
    \begin{frame}{Final Takeaways}
        \begin{itemize}[<+- | alert@+>]
            \item Get your laptop ready with us, pause, and continue following every line.
            \item You need Java (8) and Maven (3.6.x) installed locally.
            \item Check our GitHub repo and articles for more details.
            \item Ask and share your questions on our website.
            \item Please share with us your future expectations about this series.
\end{itemize} \end{frame} %----------------------------------------------------------------------------------------------------------------------% \section{Outline}\label{sec:outline} %----------------------------------------------------------------------------------------------------------------------% \begin{frame} \begin{figure} \tikzstyle{block} = [rectangle, draw, fill=yellow!90, text width=3cm,minimum height=1em, inner sep=0pt, text centered, minimum height=1cm,font=\bfseries] \begin{tikzpicture}[node distance = 4cm, auto,>=stealth'] \node [block, visible on=<1->] (b) {Create Scala mvn Proj}; \node [block, right of=b, visible on=<2->] (c) {Add static analysis tools}; \node [block, right of=c, visible on=<3->] (d) {Add mvn site reporting}; \node [block, below of=d, visible on=<4->] (e) {Eliminate boilerplate}; \node [block, left of=e, visible on=<5->] (f) {Mvn release strategy}; \node [block, left of=f, visible on=<6->] (g) {GitHub actions}; \draw [->, visible on=<2->] (b) -- (c); \draw [->, visible on=<3->] (c) -- (d); \draw [->, visible on=<4->] (d.south) -- (e.north); \draw [->, visible on=<5->] (e) -- (f); \draw [->, visible on=<6->] (f) -- (g); \end{tikzpicture} \caption{Outline} \label{fig:M1} \end{figure} \end{frame} %----------------------------------------------------------------------------------------------------------------------% %\begin{frame}{Outline} % \begin{itemize}[<+- | alert@+>] % \item Create Scala maven project. % \item Add Scala static analysis tools. % \item Add maven site reporting for visualizing your checks. % \item Eliminate maven boilerplate. % \item Maven release strategy. % \item Enable GitHub Actions for Scala maven repository. % \end{itemize} %\end{frame} %----------------------------------------------------------------------------------------------------------------------% %\begin{frame} % \begin{figure} % \tikzstyle{block} = [rectangle, draw, fill=green!60!yellow, text width=3cm, % minimum height=1em, % inner sep=0pt, % text centered, minimum size=1cm,font=\bfseries] % % \begin{tikzpicture}[node distance = 4cm, auto,>=stealth'] % % \node [block] (b) {Create project skeleton}; % \node [block, right of=b] (c) {Scala static analysis tools}; % \node [block, right of=c] (d) {Compile Scala using mvn}; % \node [block, below of=d] (e) {Add sample Scala Test}; % \node [block, left of=e] (f) {Configure mvn to run Scala test}; % \node [block, left of=f] (g) {Disable compile Java src/tests}; % % \draw [->] (b) -- (c); % \draw [->] (c) -- (d); % \draw [->] (d.south) -- (e.north); % \draw [->] (e) -- (f); % \draw [->] (f) -- (g); % % \end{tikzpicture} % \caption{Scala maven project} \label{fig:M1} % \end{figure} %\end{frame} %----------------------------------------------------------------------------------------------------------------------% \subsection{Scala maven project}\label{subsec:scala-maven-project} \begin{frame}{Create Scala maven project} \begin{itemize}[<+- | alert@+>] \item Create project skeleton. \item Add sample Scala Class. \item Configure maven to compile Scala sources. \item Add sample Scala Test. \item Configure maven to run Scala test. \item Configure maven to avoid compiling java classes and test classes. \end{itemize} \end{frame} \subsection{Static analysis tools}\label{subsec:static-analysis-tools} \begin{frame}{Scala static analysis tools} \begin{itemize}[<+- | alert@+>] \item Add Scala code coverage tool. \item Solve the test running twice issue. \item Add Scala style tool. \item Add FindBugs tool. 
\end{itemize} \end{frame} \subsection{Maven Site Reporting}\label{subsec:maven-site-reporting} \begin{frame}{Maven Site Reporting} \begin{itemize}[<+- | alert@+>] \item Add Scala Coverage report. \item Add FindBugs report. \item Add Scala docs report. \end{itemize} \end{frame} \subsection{Eliminate boilerplate}\label{subsec:eliminate-boilerplate} \begin{frame}{Eliminate maven boilerplate} \begin{itemize}[<+- | alert@+>] \item Clean the Scala maven project and use a maven BOM and a Scala profile. \item Add an SCM tag to maven (Scala maven project). \end{itemize} \end{frame} \subsection{Maven release strategy}\label{subsec:maven-release-strategy} \begin{frame}{Maven release strategy} \begin{itemize}[<+- | alert@+>] \item Add a distribution management tag (Scala maven project). \item Create settings.xml to set repository server credentials (GitHub repository/package). \item Add the maven release plugin. \item Release our first version. \end{itemize} \end{frame} \subsection{GitHub Actions}\label{subsec:github-actions} \begin{frame}{GitHub Actions} \begin{itemize}[<+- | alert@+>] \item Add a new GitHub workflow (build.yml) for Build, Test, and Report. \item Add deployment with each develop push. \item Add release with each master push (merge request from develop to master). \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.5353971682, "avg_line_length": 42.3045685279, "ext": "tex", "hexsha": "829d1ea61ebcc3810be8f37510e75948a2c2e063", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1a97ec4f810b65c6cb7da0168c0488bf40db13b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "garage-education/uncovered-code", "max_forks_repo_path": "src/ScalaSandBox.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a1a97ec4f810b65c6cb7da0168c0488bf40db13b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "garage-education/uncovered-code", "max_issues_repo_path": "src/ScalaSandBox.tex", "max_line_length": 142, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1a97ec4f810b65c6cb7da0168c0488bf40db13b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "garage-education/uncovered-code", "max_stars_repo_path": "src/ScalaSandBox.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2103, "size": 8334 }
\documentclass[t,usenames,dvipsnames]{beamer} \usetheme{Copenhagen} \setbeamertemplate{headline}{} % remove toc from headers \beamertemplatenavigationsymbolsempty \usepackage{amsmath, tikz, xcolor, bm, pgfplots, array} \pgfplotsset{compat = newest} \usetikzlibrary{arrows.meta, calc, decorations.pathreplacing, patterns, decorations.markings} \tikzset{>=stealth} \title{Parametric Equations} \author{} \date{} \AtBeginSection[] { \begin{frame} \frametitle{Objectives} \tableofcontents[currentsection] \end{frame} } \begin{document} \begin{frame} \titlepage \end{frame} \section{Sketch a parametric curve.} \begin{frame}{Intro} Up until now, we have looked at functions that define $y$ in terms of $x$ (or $x$ in terms of $y$ for inverse functions). \newline\\ \pause In this section, we will look at parametric functions: ones in which $x$ and $y$ are defined by a \alert{parameter}, such as $t$. \end{frame} \begin{frame}{Intro} For instance, the plot below could show the path a bug might take (starting at $O$) while walking on a table: \newline\\ \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = middle, xmin = -1, xmax = 7.5, ymin = -1, ymax = 6, xtick = {1,2,3,4,5}, ytick = {1,2,3,4,5}, xlabel = $x$, ylabel = $y$, ] \addplot [mark = *, mark size=2pt] coordinates {(5,1)} node [right] {$O$}; \addplot [domain = -2:2, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.25 with {\arrow{>};}, mark = at position 0.5 with {\arrow{>};}, mark = at position 0.75 with {\arrow{>};}, mark = at position 1 with {\arrow{>};} }} ] ({t^2+1}, {t^3-3*t+3}) node [left, above] {$P(x,y) = (f(t),g(t))$}; \end{axis} \end{tikzpicture} \end{center} \end{frame} \begin{frame}{Intro} The independent variable ($t$ in this case) is called a \alert{parameter}. \newline\\ \pause The system of equations \[ \begin{cases} x &= f(t) \\ y &= g(t) \end{cases} \] is called a \alert{parametrization} of the curve. \newline\\ \pause \emph{Note}: The curve itself is a set of points and is devoid of any orientation.
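\newline\\ \pause For instance, the bug's path shown on the previous slide is traced out by the parametrization \[ x = t^2 + 1, \qquad y = t^3 - 3t + 3, \qquad -2 \leq t \leq 2, \] so the starting point $O = (5,1)$ corresponds to $t = -2$, and at $t = 1$ the bug is at the point $(2,1)$.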
\end{frame} \begin{frame}{Example 1} Sketch the curve described by \[ \begin{cases} x &= t^2 - 3 \\ y &= 2t - 1 \end{cases} \quad \text{ for } t \geq -2 \] \pause \begin{minipage}{0.5\textwidth} \setlength{\extrarowheight}{3pt} \begin{tabular}{cccc} $t$ & $x(t)$ & $y(t)$ & $(x(t), y(t))$ \\ \hline \onslide<2->{$-2$ & 1 & $-5$ & $(1,-5)$ \\[3pt]} \onslide<3->{$-1$ & $-2$ & $-3$ & $(-2,-3)$ \\[3pt]} \onslide<4->{0 & $-3$ & $-1$ & $(-3,-1)$ \\[3pt]} \onslide<5->{1 & $-2$ & 1 & $(-2,1)$ \\[3pt]} \onslide<6->{2 & 1 & 3 & $(1,3)$ \\[3pt]} \onslide<7->{3 & 6 & 5 & $(6, 5)$ \\} \end{tabular} \end{minipage} \hspace{-0.25cm} \begin{minipage}{0.5\textwidth} \onslide<8->{ \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -3, xmax = 6, ymin = -5, ymax = 5, xtick = {-3,-2,...,6}, ytick = {-5,-4,...,5}, xlabel = $x$, ylabel = $y$, ] \addplot [mark = *, mark size=2pt, only marks] coordinates {(1,-5) (-2,-3) (-3,-1) (-2,1) (1,3) (6,5)}; \addplot [domain = -2:3, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({t*t-3},{2*t-1}); \end{axis} \end{tikzpicture}} \end{minipage} \end{frame} \section{Rewrite an equation by eliminating the parameter.} \begin{frame}{Eliminating the Parameter} We can eliminate the parameter $t$ by solving one of the equations for $t$ and substituting it into the other. \end{frame} \begin{frame}{Example 2} Eliminate the parameter in Example 1 and write the equation using only $x$ and $y$. \[ \begin{cases} x &= t^2 - 3 \\ y &= 2t - 1 \end{cases} \quad \text{ for } t \geq -2 \] \begin{align*} \onslide<2->{y &= 2t - 1} \\[8pt] \onslide<3->{y+1 &= 2t} \\[8pt] \onslide<4->{t &= \frac{y+1}{2}} \\[8pt] \onslide<5->{x &= t^2 - 3} \\ \end{align*} \end{frame} \begin{frame}{Example 2} \begin{align*} x &= t^2 - 3 \\[6pt] \onslide<2->{x &= \left(\frac{y+1}{2}\right)^2 - 3} \\[6pt] \onslide<3->{x+3 &= \frac{(y+1)^2}{4}} \\[6pt] \onslide<4->{4(x+3) &= (y+1)^2} \\[6pt] \onslide<5->{t &= \frac{y+1}{2}} \\[6pt] \onslide<6->{\frac{y+1}{2} &\geq -2} \\[6pt] \onslide<7->{y &\geq -5} \\ \end{align*} \end{frame} \begin{frame}{Example 2} \[ 4(x+3) = (y+1)^2, \quad y \geq -5 \] \end{frame} \begin{frame}{Example 3a} Sketch each of the following curves. 
\newline\\ (a) \quad $\begin{cases} x &= t^3 \\ y &= 2t^2 \\ \end{cases}$ \quad for $-1 \leq t \leq 1$ \newline\\ \pause \begin{minipage}{0.6\textwidth} \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -2, xmax = 2, ymin = -1, ymax = 3, xtick = {-2,-1,...,2}, ytick = {-1,0,...,3}, xlabel = $x$, ylabel = $y$ ] \addplot [mark = *, mark size = 2pt, only marks] coordinates {(-1,2) (1,2)}; \addplot [domain = -1:1, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({t*t*t},{2*t*t}); \end{axis} \end{tikzpicture} \end{minipage} \hspace{-0.5cm} \begin{minipage}{0.4\textwidth} \begin{align*} \onslide<3->{x &= t^3} \\[8pt] \onslide<4->{t &= \sqrt[3]{x}} \\[8pt] \onslide<5->{y &= 2\left(\sqrt[3]{x}\right)^2} \\[8pt] \onslide<6->{y &= 2\cdot \sqrt[3]{x^2}} \\[8pt] \onslide<7->{y &= 2x^{2/3}} \\ \end{align*} \end{minipage} \end{frame} \begin{frame}{Example 3b} (b) \quad $\begin{cases} x &= 2e^{-t} \\ y &= e^{-2t} \\ \end{cases}$ \quad for $t \geq 0$ \newline\\ \pause \begin{minipage}{0.6\textwidth} \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -1, xmax = 3, ymin = -1, ymax = 2, xtick = {-1,0,...,3}, ytick = {-1,0,...,2}, xlabel = $x$, ylabel = $y$ ] \addplot [mark = *, mark size = 2pt, only marks] coordinates {(2,1)}; \addplot [domain = 0:5, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({2*e^(-1*t)},{e^(-2*t}); \draw [fill=white] (axis cs: 0,0) circle (2pt); \end{axis} \end{tikzpicture} \end{minipage} \hspace{-0.5cm} \begin{minipage}{0.4\textwidth} \begin{align*} \onslide<3->{x &= 2e^{-t}} \\[6pt] \onslide<4->{e^{-t} &= \frac{x}{2}} \\[6pt] \onslide<5->{y &= \left(e^{-t}\right)^2} \\[6pt] \onslide<6->{y &= \left(\frac{x}{2}\right)^2} \\[6pt] \onslide<7->{y &= \frac{x^2}{4}} \\ \end{align*} \end{minipage} \end{frame} \begin{frame}{Example 3c} (c) \quad $\begin{cases} x &= \sin t \\ y &= \csc t \\ \end{cases}$ \quad for $0 < t < \pi$ \newline\\ \pause \begin{minipage}{0.6\textwidth} \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -1, xmax = 2, ymin = -1, ymax = 4, xtick = {-1,0,1,2}, ytick = {-1,0,...,4}, xlabel = $x$, ylabel = $y$ ] \addplot [mark = *, mark size = 2pt, only marks] coordinates {(1,1)}; \addplot [domain = 0.25:3.0, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({sin(deg(t))},{1/(sin(deg(t)))}); \end{axis} \end{tikzpicture} \end{minipage} \hspace{-0.25cm} \begin{minipage}{0.3\textwidth} \begin{align*} \onslide<3->{x &= \sin t} \\[6pt] \onslide<4->{y &= \csc t} \\[10pt] \onslide<5->{y &= \frac{1}{\sin t}} \\[10pt] \onslide<6->{y &= \frac{1}{x}} \\ \end{align*} \end{minipage} \end{frame} \begin{frame}{Example 3d} (d) \quad $\begin{cases} x &= 1 + 3\cos t \\ y &= 2\sin t \\ \end{cases}$ \quad for 
$0 \leq t \leq \frac{3\pi}{2}$ \newline\\ \pause \begin{minipage}{0.5\textwidth} \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -3, xmax = 5, ymin = -3, ymax = 3, xtick = {-3,-2,...,5}, ytick = {-3,-2,...,3}, xlabel = $x$, ylabel = $y$ ] \addplot [mark = *, mark size = 2pt, only marks] coordinates {(4,0) (1,2) (-2,0) (1,-2)}; \addplot [domain = 0:4.71, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({1+3*cos(deg(t))},{2*sin(deg(t))}); \end{axis} \end{tikzpicture} \end{minipage} \hspace{-0.5cm} \begin{minipage}{0.5\textwidth} \begin{align*} \onslide<3->{x &= 1 + 3\cos t} \\[10pt] \onslide<4->{\frac{x-1}{3} &= \cos t} \\[10pt] \onslide<5->{y &= 2 \sin t} \\[10pt] \onslide<6->{\frac{y}{2} &= \sin t} \\ \end{align*} \end{minipage} \end{frame} \begin{frame}{Example 3d} \begin{minipage}{0.5\textwidth} \begin{tikzpicture}[scale=0.7] \begin{axis}[ axis lines = middle, grid, xmin = -3, xmax = 5, ymin = -3, ymax = 3, xtick = {-3,-2,...,5}, ytick = {-3,-2,...,3}, xlabel = $x$, ylabel = $y$ ] \addplot [mark = *, mark size = 2pt, only marks] coordinates {(4,0) (1,2) (-2,0) (1,-2)}; \addplot [domain = 0:4.71, samples = 100, variable = \t, postaction={decorate, decoration={markings, mark = at position 0.15 with {\arrow[ultra thick, red]{>};}, mark = at position 0.4 with {\arrow[ultra thick, red]{>};}, mark = at position 0.65 with {\arrow[ultra thick, red]{>};}, mark = at position 0.85 with {\arrow[ultra thick, red]{>};} }}] ({1+3*cos(deg(t))},{2*sin(deg(t))}); \end{axis} \end{tikzpicture} \end{minipage} \hspace{-0.5cm} \begin{minipage}{0.5\textwidth} \begin{align*} \cos^2 t + \sin^2 t &= 1 \\[10pt] \onslide<2->{\left(\frac{x-1}{3}\right)^2 + \left(\frac{y}{2}\right)^2 &= 1} \\[10pt] \onslide<3->{\frac{(x-1)^2}{9} + \frac{y^2}{4} &= 1} \\ \end{align*} \end{minipage} \end{frame} \begin{frame}{Parametrizations of Common Curves} \begin{itemize} \item For $y = f(x)$, as $x$ runs through some interval $I$, let $x=t$ and $y = f(t)$ and let $t$ run through $I$. \newline\\ \pause \item For $x = g(y)$, as $y$ runs through some interval $I$, let $x=g(t)$ and $y = t$ and let $t$ run through $I$. \newline\\ \pause \item For a directed line segment with initial point $(x_0,y_0)$ and terminal point $(x_1,y_1)$, let $x = x_0 + (x_1-x_0)t$ and let $y = y_0 + (y_1-y_0)t$ for $0 \leq t \leq 1$. \newline\\ \pause \item For an ellipse in the form $\frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2}=1$, let $x = h + a\cos t$ and $y = k + b\sin t$ for $0 \leq t < 2\pi$. \end{itemize} \end{frame} \begin{frame}{Example 4} Find a parametrization for each of the following.
\newline\\ (a) \quad $y = x^2$ from $x = -3$ to $x = 2$ \onslide<2->{\[ x = t \quad \text{and} \quad y = t^2 \quad \text{for } -3 \leq t \leq 2 \]} \onslide<3->{(b) \quad $y = f^{-1}(x)$ where $f(x) = x^5 + 2x + 1$} \begin{align*} \onslide<4->{y &= x^5 + 2x + 1} \\[8pt] \onslide<5->{x &= y^5 + 2y + 1} \\[8pt] \onslide<6->{y = t \quad & \quad x = t^5 + 2t + 1 \quad \text{for } -\infty < t < \infty} \end{align*} \end{frame} \begin{frame}{Example 4c} (c) \quad The line segment which starts at $(2,-3)$ and ends at $(1,5)$ \begin{align*} \onslide<2->{x_1 - x_0 &= 1 - 2} \\ \onslide<3->{&= -1} \\ \onslide<4->{x &= x_0 + (x_1 - x_0)t} \\ \onslide<5->{x &= 2 + (-1)t} \\ \onslide<6->{{\color{red}x} &{\color{red}= 2 - t} \\ y_1 - y_0 &= 5 - (-3)} \\ \onslide<7->{&= 8} \\ \onslide<8->{y &= y_0 + (y_1 - y_0)t} \\ \onslide<9->{{\color{red}y} &{\color{red}= -3 + 8t}} \\ \onslide<10->{& \text{for } 0 \leq t \leq 1} \end{align*} \end{frame} \begin{frame}{Example 4} (d) \quad The circle $x^2+2x+y^2-4y=4$ \begin{align*} \onslide<2->{\frac{(x+1)^2}{9} + \dfrac{(y-2)^2}{9} &= 1} \\[10pt] \onslide<3->{x = -1 + 3\cos t \quad & \quad y = 2 + 3\sin t \quad \text{for } 0 \leq t < 2\pi} \\ \end{align*} \end{frame} \begin{frame}{Example 4} (e) \quad The left half of the ellipse $\dfrac{x^2}{4} + \frac{y^2}{9} = 1$ \begin{align*} \onslide<2->{x &= 2\cos t} \\[8pt] \onslide<3->{y &= 3\sin t} \\[8pt] \onslide<4->{& \text{for } \frac{\pi}{2} \leq t \leq \frac{3\pi}{2}} \end{align*} \end{frame} \end{document}
{ "alphanum_fraction": 0.5551260129, "avg_line_length": 32.3341346154, "ext": "tex", "hexsha": "d455f9ab1717ff22bc0af49446b83fa586190b54", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_path": "Parametric_Equations(BEAMER).tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_path": "Parametric_Equations(BEAMER).tex", "max_line_length": 200, "max_stars_count": null, "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_path": "Parametric_Equations(BEAMER).tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5416, "size": 13451 }
\documentclass[a4paper]{article} \def\npart{III} \def\ntitle{3 Manifolds} \def\nlecturer{S.\ Rasmussen} \def\nterm{Lent} \def\nyear{2019} \input{header} \renewcommand{\boundary}{\partial} \renewcommand{\b}{\boundary} \newcommand{\interior}{\ocirc} \renewcommand{\P}{{\mathbb P}} \newcommand{\immerse}{\looparrowright} \DeclareMathOperator{\grad}{grad} \begin{document} \input{titlepage} \tableofcontents \setcounter{section}{-1} \section{Why 3?} \subsection{Motivation} \paragraph{Poincare conjecture (1904)} Question: how can we distinguish \(S^3\) fom other 3-manifolds? The strategy is to find an invariant that distinguishes \(S^3\). The frst guess is homology but \begin{theorem}[Poincare] There exists a closed oriented 3-manifold \(P\) with \(H_*(P) \simeq H_*(S^3)\) but with \(P \ncong S^3\). \end{theorem} \begin{notation} We use \(\cong\) to denote homeomorphism and \(\simeq\) to denote isomorphism. \end{notation} This is proven in the following way: first invent the fundamental group \(\pi_1\), then construct \(P\), which is now known as (-1)-Dehn surgery on left-handed trefoil knot \(K_T \subseteq S^3\). Finally show that \(|\pi_1(P)| = 120, |\pi_1(S^3)| = 1\) and \(H_*(P) \simeq H_*(S^3)\). \subsection{Homotopy} \paragraph{Review of homotopy theory} homotopy, fundamental groups and higher homotopy groups, homotopy equivalence, weak homotopy equivalence \paragraph{Homotopy vs.\ homology} Let \(X\) and \(Y\) be path-connected topological spaces. \begin{theorem}[Hurewicz]\leavevmode \begin{enumerate} \item \(H_1(X, \Z) \simeq \pi_1(X)/[\pi_1(X), \pi_1(X)]\). \item If \(\pi_i(X) = 1\) for \(i = \{1, \dots, n\}\) then \begin{align*} H_i(X) &= 0 \text{ for } i \leq n, i \neq 0 \\ H_{n + 1} &\simeq \pi_{n + 1}(X) \end{align*} \end{enumerate} \end{theorem} \begin{theorem}[Whitehead] If \(X, Y\) are CW complexes. Then a weak homotopy equivalence of \(X\) and \(Y\) is also a homotopy equivalence. \end{theorem} \begin{theorem}[Whitehead-homology variant] Suppose \(X, Y\) are simply-connected CW complexes. If the induced homomorphisms \(f_*: H_k(X; \Z) \to H_k(Y; \Z)\) are isomorphisms for all \(k \leq \dim X\) then \(f: X \to Y \) is a homotopy equivalence. \end{theorem} \begin{theorem} Any homotopy equivalence \(f: X \to Y\) induces isomorphisms on homology, cohomology, cohomology ring structure (for any coefficients). \end{theorem} \subsection{*Simplifications in higher dimension} Let \(\mathcal C\) be the smooth category when \(n \geq 5\) and topological category \(n \geq 4\). \begin{theorem}[Whitney trick] Suppose \(\dim X = n\) where \(n \geq 4\) and \(P, Q \subseteq X\) are \(\mathcal \C\)-embedded submanifolds and \(\dim P + \dim Q = \dim X\). Then \(P, Q\) can be locally \(\mathcal C\)-isotoped so that the geometric intersection number equal to the absolute value of algebraic intersection of \(P, Q\). Note that algebraic intersection number is signed while teh geometric counterpart is not. \end{theorem} \begin{convention} When we say topological embeddings we always mean locally flat embeddings, which will be defined later in the course. \end{convention} \begin{definition}[\(h\)-cobordism] Let \(W\) with \(\boundary W = X_1 \amalg X_2\) be a cobordism from \(X_1\) to \(X_2\). \(W\) is an \emph{\(h\)-cobordism} if the embeddings \(X_i \embed W\) are homotopy equivalences. \end{definition} \begin{convention} All manifolds are compact connected and oriented unless otherwise stated. 
\end{convention} \begin{theorem}[\(h\)-cobordism] Suppose \(\dim X_i = n\), \(\dim W = n + 1\), and \(W\) is an \(h\)-cobordism from \(X_1\) to \(X_2\). If \(\pi_1(X_i) = \pi_1(W) = 1\) and \(n \geq 4\), then \(W\) is \(\mathcal C\)-isomorphic to \(X_1 \times [0, 1]\). \end{theorem} \subsection{Generalised Poincare conjecture} Poincare conjecture: if \(S\) is a compact oriented \(3\)-manifold homotopy equivalent to \(S^3\), then is \(S \cong S^3\)? Generalised Poincare conjecture: if \(S\) is a compact oriented \(n\)-manifold homotopy equivalent to \(S^n\), then is \(S \cong S^n\)? It turns out that for \(n \geq 4\), the generalised Poincare conjecture is a corollary of the \(h\)-cobordism theorem. Sketch of proof for \(n \geq 5\): suppose \(S\) is homotopy equivalent to \(S^n\). Then \(\pi_*(S) \simeq \pi_*(S^n), H_*(S) \simeq H_*(S^n)\). Delete two balls from \(S\) to obtain \(W \cong S \setminus (\interior B_1^n \amalg \interior B_2^n)\). Claim that \(W\) is an \(h\)-cobordism: apply Mayer-Vietoris with \(A = W, B = B_1^n \amalg B_2^n\). Then \(A \cap B = S^{n - 1} \amalg S^{n - 1}\), \(A \cup B = S\), and \(A \amalg B =_{\text{htp}} W \amalg \{0, 1\}\). \[ \begin{tikzcd} H_n(S^{n - 1} \amalg S^{n - 1}) \ar[r] & H_n(W \amalg \{0, 1\}) \ar[r] & H_n(S) \ar[dll, out=0, in=180] \\ H_{n - 1}(S^{n - 1} \amalg S^{n - 1}) \ar[r] & H_{n - 1}(W \amalg \{0, 1\}) \ar[r] & H_{n - 1}(S) \end{tikzcd} \] The first term vanishes because of dimension, the second term vanishes because \(W\) is not closed. By homotopy equivalence we get \[ \begin{tikzcd} 0 \ar[r] & \Z \ar[r] & \Z \oplus \Z \ar[r] & H_{n - 1}(W \amalg \{0, 1\}) \ar[r] & 0 \end{tikzcd} \] We can compute that \(H_{n - 1}(W \amalg \{0, 1\}) \simeq \Z\). It is an exercise to show that there is an induced isomorphism on homology \(H_k(S^{n - 1}_i) \to H_k(W)\) for each \(k\). Moreover \(\pi_1(W) = 1\), so the inclusions \(S^{n - 1}_i \to W\) are homotopy equivalences. Therefore \(W \cong S^{n - 1} \times [0, 1]\), so \(S \cong B_1^n \cup W \cup B_2^n\). By the Alexander trick, a homeomorphism of \(S^{n - 1}\) can be extended \emph{topologically} to a homeomorphism of \(B^n\) with \(\boundary B^n = S^{n - 1}\). Extend this homeomorphism over the two balls. Note that this only applies to the topological category; the smooth generalised Poincare conjecture is still open in dimension \(4\). \subsection{Why not higher than 5?} Moral: homotopy-theoretic techniques can be used to answer most/many questions about topology or smooth structures in dimension \(\geq 5\). \section{Lecture 2: Why 3-manifolds? + Embeddings/Knots} \subsection*{Active research areas} \begin{enumerate} \item Interactions with 4-dimensional manifolds (smooth/symplectic/complex structures) \begin{enumerate} \item Dimension reduction reduces 4-dimensional invariants to 3-dimensional ones (that are fancier, ``categorified'') and maps induced by cobordisms. \item symplectic form \(\omega\) on \(X^4\) \(\implies\) \emph{contact structure} \(\xi\) on \(Y = \b X\). \item Stein structure (complex/symplectic structure) on \(X\) \(\implies\) Stein-fillable contact structure. \item Normal complex surface singularities \((X, 0)\): \(X\) is a real cone over \(Y = \operatorname{Link}(X, 0)\). \end{enumerate} \item Geometric group theory: fundamental groups, especially of 3-manifolds: prime, atoroidal, non-lens-space 3-manifolds \(\iff\) fundamental groups of such 3-manifolds. \item 2-dimensional structure \begin{enumerate} \item contact structure: \(\xi\) an everywhere nonintegrable \(2\)-plane field; ``tight'' contact structure classification \item minimal genus representatives of embedded surfaces, or knot genus.
This is better understood. Thurston norm. The 4-dimensional analogue is still open. \item Foliations. Taut folations classification. Seifert fibered \end{enumerate} \item 1-dimensional structure: knots and links \begin{enumerate} \item embedddings \(\amalg_i S^1_i \embed S^3\). Every 3-manifold can be realised as \emph{Dehn surgery} on a link \(L \embed S^3\). Thus the theory of knot theory is richer that of 3-manifold. We study 3-manifolds via knot invariants (WIlten-Reshetikhin-Turaev invariant). \item Relations to other areas \begin{enumerate} \item Chern-Simons knot invarints: \(K \subseteq S^3\) \(\iff\) Gromov-Witten invariants on \(O(-1) \underbrace{\oplus}_{\C\P^1} O(-1)\). \item Homfly homology of \(n\)str braids \(\iff\) DC sheaves on \(\operatorname{HIlb}^n(\C)\). \item Khovanov homology of links in \(S^3\) \(\iff\) DC sheaves on other spaces. \end{enumerate} \end{enumerate} \end{enumerate} \subsection{Course themes} \begin{enumerate} \item Decompositions/Constructions of 3-manifolds. \begin{enumerate} \item surface decompositions/constructures \begin{enumerate} \item prime decomposition --- cut along essential \(S^2\) \item JSJ decomposition --- cut along essential \(T\). \item Mapping tori \(\iff\) surface fibrations. \end{enumerate} \item quotient spaces \begin{enumerate} \item Hyperbolic quotients \item quotients of \(S^7\). Seifert fibration \item Morse theoretic \begin{enumerate} \item handle decomposition \item Heegaard splittings/diagrams \end{enumerate} \item Dehn surgery on links \end{enumerate} \end{enumerate} \item Structure + Invariants for 3-manifolds \begin{enumerate} \item Knots \& links \begin{enumerate} \item complement \(S^3 \setminus K\) \item \(\pi_1(S^3 \setminus K)\) \item Alexander polynomials + Turaev torsion \end{enumerate} \item Essential/incompressible embedded surfaces, Thurston norm \item Foliations \end{enumerate} \end{enumerate} \section{Embeddings} \begin{definition}[link]\index{link} A \emph{link} is an embedding \(L = \amalg_i S^1_i \embed S^3\) considered up to isotopy. This embedding is either smooth or topoogical and locally flat. These two notions are equivalent. \end{definition} Let \(X\) and \(Y\) be topological manifolds. \begin{definition}[topological embedding]\index{topological embedding} A \emph{topological embedding} \(X \embed Y\) is a map \(X \embed Y\) which is a homeomorphism onto its image. \end{definition} \begin{definition}[immersion]\index{immersion} If \(X\) and \(Y\) are also smooth then a map \(f: X \to Y\) is an \emph{immersion} if \(d_xf: T_xX \to T_{f(x)}Y\) is injective for all \(x \in X\). \end{definition} As a consequence of inverse function theorem, any immersion is locally an embedding. \begin{definition}[smooth embedding]\index{smooth embedding} A \emph{smooth embedding} is a topological embedding that is also an immersion. \end{definition} \begin{corollary} If \(X, Y\) are smooth compact then any bijective immersion is an embedding. \end{corollary} \begin{theorem}[Moise]\index{Moise theorem} There is a canonical correpondence between topological structures and smooth structures on 3-manifolds. \end{theorem} Thus 3-manifolds up to homeomorphism bijects to 3-manifolds up to diffeomorphism. \begin{definition}[local flatness]\index{local flatness} A topologically embedded submanifold \(X \subseteq Y\) is \emph{locally flat} at \(x \in X\) if \(x\) has a neighbourhood \(x \in U \subseteq Y\) with homeomorphisms \((U \cap X, U) \cong (\R^{\dim X}, \R^{\dim Y})\). 
A \emph{locally flat embedding} is locally flat everywhere. \end{definition} \begin{convention} From now on any embedding is smooth or locally flat. \end{convention} \begin{definition}[regular neighbourhood]\index{regular manifolds} A \emph{regular neighbourhood} of an embedded submanifold \(X \subseteq Y\) is a tubular/collar neighbourhood if the embedding is smooth/topologically flat. \end{definition} In 3-dimensions normal bundles are trivial so a regular neighbourhood \(\nu(X)\) is just \(D^2 \times X \embed Y\) if \(\dim X = 1\) and \(D^1 \times X \embed Y\) if \(\dim Y = 2\). In particular, neighbourhood of a not \(K \embed S^3\) is just a solid torus \(D^2 \times S^1 \embed S^3\). \section{Lecture 3: Link diagrams \& Alexander Skein relations} \begin{eg} Wild knot: not locally flat embedding \end{eg} \begin{definition}[isotopy]\index{isotopy} An \emph{isotopy} in category \(\mathcal C\) from \(f_1\) to \(f_2: X \to Y\) is a homotopy through maps of type \(\mathcal C\). \end{definition} The point is, all knots (including wild knot) are isotopic through non-locally flat embeddings to an unknot, and all knots are homotopic to an unknot so we want to exclude the ``bad'' homotopies where a knot can cross itself. \subsection{Knot and link diagrams} \begin{definition}[link]\index{link} A \emph{link} is an (oriented) embedding \(\iota: \coprod_i S_i^1 \embed S^3\) of (oriented circles), considered up to isotopy. \end{definition} \begin{definition}[link projection] A \emph{link projection} is an immersion \(L \immerse \Gamma \embed \R^2\), induced by \[ \begin{tikzcd} L \ar[r, hook] \ar[d, "p|_L"] & S^3 \setminus \{x_0\} \ar[r, "\cong"] & \R^3 \ar[r, "\cong"] & \R^2 \times \R \ar[d, "p"] \\ \Gamma \ar[rrr] & & & \R^2 \end{tikzcd} \] such that \(x_0 \notin L\) and \(p|_L\) is an embedding except at double point singularities. \end{definition} This aweful looking definition is just a formalisation of a familiar concept that facilitates the study of knots: \begin{definition}[link diagram]\index{link diagram} A \emph{link diagram} \(D = (\Gamma, \text{crossing} (D))\) of a link \(L \subseteq S^3\) is an embedded graph \(\Gamma \embed \R^2\) from a link projection of \(D\), together with decorations at double points to label crossings. We draw a gap in the lower strand. \end{definition} \begin{theorem}[Reidemeister moves]\index{Reidemeister moves} Let \(D_1\) and \(D_2\) be link diagrams for respective links \(L_1, L_2 \subseteq S^3\). Then \(L_1\) and \(L_2\) are isotopic if and only if \(D_1\) and \(D_2\) are related by some combination of the fuollowing moves: \end{theorem} It is more important to know that such moves exist than what they actually are. \subsection{Alexander Skein relation} To compute the alexander polynomial, you first choose an orientation for the link \(L \subseteq S^3\). However, the resulting polynomial is independent of choice of orientation for knots. \begin{theorem}[Alexander]\index{Alexander polynomial} The \emph{Alexander polynomial} \[ \Delta: \{\text{link diagram}\} \to \Z[t^{-1/2}, t^{1/2}] \] is specified by 2 conditions: \begin{enumerate} \item normalisation: \(\Delta(u) = 1\) where \(u\) is the unknot. \item Skein relation: \(\Delta(negative crossing) - \Delta(positive crossing) = \Delta(oriented resolution) (t^{-1/2} - t^{1/2})\) for all \(c \in \text{crossing}(D)\). \end{enumerate} \(\Delta(D_1) = \Delta(D_2)\) if \(D_1\) and \(D_2\) are diagrams for isotopic links. 
\end{theorem} \begin{theorem}[equivalence of Alexander polynomial] Later we will define an Alexander polynomial for 3-manifolds with \(b_1 > 0\). With respect to this definition, \[ \Delta_{\text{link}}(L) = \Delta_{\text{3-manifold}}(S^3 \setminus L) \] for any link \(L \subseteq S^3\). \end{theorem} \section{Handle decompositions from Morse Singularities} Handles: index \(k\)-handles are tubular neighbourhoods of the \(k\)-cells of a CW complex, and are also neighbourhoods of Morse critical points. \subsection{Morse functions} Let \(f: X \to \R\) be a smooth function on a smooth manifold \(X\). \begin{definition}[Hessian, critical point]\index{Hessian}\index{critical point} \(\textrm{Hess}_p f\) is the Hessian of \(f\) at \(p\), which in local coordinates is \[ \left( \frac{\partial^2 f}{\partial x_i \partial x_j}|_{x = p} \right)_{ij}. \] \(\textrm{crit} f\) is the set of critical points of \(f\), i.e.\ \(\{p \in X: \frac{\partial f}{\partial x_i} = 0 \text{ for all } i\}\), or more invariantly, \(df = 0\). \end{definition} \begin{definition}[Morse function]\index{Morse function} A smooth function \(f: X \to \R\) on an \(n\)-manifold \(X\) is \emph{Morse} if \begin{enumerate} \item every critical point of \(f\) is isolated (if \(X\) is compact, this implies that there are only finitely many critical points); \item \(\textrm{Hess}_p f\) is nondegenerate at each \(p \in \textrm{crit} f\), which holds if and only if \(\det \textrm{Hess}_p f \neq 0\), if and only if \(\textrm{Hess}_p f\) has all nonzero eigenvalues. \end{enumerate} \end{definition} \subsection{Morse singularities} A list of descriptions of Morse functions: \begin{enumerate} \item If \(f: X \to \R\) is Morse, then a Taylor series expansion around a critical point \(p \in \textrm{crit} f\) looks like \[ f(x) = f(p) + \frac{1}{2} \sum x_i x_j \frac{\partial^2 f}{\partial x_i \partial x_j}\Bigg|_p + \text{ higher order terms} \] \item That \(\textrm{Hess}_p f\) is nondegenerate means that we can rescale coordinates so that all eigenvalues are \(\pm 1\). \item Since partial derivatives commute, \(\textrm{Hess}_p f\) is symmetric. Thus by linear algebra it is diagonalisable and we can write \[ f(x) = f(p) - \sum_{i = 1}^k x_i^2 + \sum_{i = k + 1}^n x_i^2 + \text{ higher order terms}. \] \end{enumerate} \begin{lemma}[Morse lemma] Let \(X\) be a smooth manifold and \(f: X \to \R\) Morse. One can choose coordinates \(x\) centred at \(p \in \textrm{crit} f\) such that \[ f(x) = f(p) - \sum_{i = 1}^k x_i^2 + \sum_{i = k + 1}^n x_i^2. \] \end{lemma} \begin{proof} Use the implicit function theorem. \end{proof} \begin{definition}[index] The \emph{index} \(\operatorname{ind}_p f\) of a Morse function \(f: X \to \R\) at a critical point \(p\) is \[ \operatorname{ind}_p f = \# \text{ negative eigenvalues of } \textrm{Hess}_p f, \] which is the \(k\) above. \end{definition} Thus the Morse lemma says that the index is the (only) local invariant of a Morse critical point. Moral: there is a standard local model for each index \(k\) Morse critical point. See the printed notes. \begin{definition}[\(k\)-handle] An index \(k\)-handle, or just \(k\)-handle, or \(n\)-dimensional \(k\)-handle, is the closure of a tubular neighbourhood of an index \(k\) critical point: \(H_k^n \cong \overline{\nu(x)} \cong D^k \times D^{n - k} \supseteq \interior B^k\).
\end{definition} Note that the corners in \(D^k\) and \(D^{n - k}\) are different \section{Lecture 5: Handles from cells, Heegard diagrams} Cell ecomplex interpretation \begin{definition}[handle, core, cocore]\index{handle}\index{core}\index{cocore} An \emph{\(n\)-dimension \(k\)-handle} \(H^n_k\) or \emph{index \(k\)-handle} is a product decomposition \[ H^n_k \cong D^k \times D^{n - k} \cong B^n \] of the closed \(n\)-ball into a \(k\)-dimensional \(k\)-cell \emph{core} \(D^k\) and \emph{cocore} \(D^{n - k}\). \end{definition} If you choose a metric, the core \(D^k\) is fat and the cocore \(D^{n - k}\) is thin. \begin{definition}[attaching region, belt region]\index{attaching region}\index{belt region} The boundary \(\b H^n_k\) of an \(n\)-dimensional \(k\)-handle decomposes as \begin{align*} \b H^n_k &\cong \b(\text{core} \times \text{cocore}) \\ &\cong \b(\text{core}) \times \text{cocore} \cup \text{core} \times \b(\text{cocore}) \\ &\cong \underbrace{\b D^k \times D^{n - k}}_{\text{attaching region}} \cup \underbrace{D^k \times \p D^{n - k}}_{\text{belt region}} \end{align*} \end{definition} In a cell complex, we attach a \(k\)-cell \(D^k\) by gluing its boundary \(\b D^k \cong S^{k - 1}\) to the cell-complex we have built so far. \[ \text{attaching region} (H^n_k) \cong \overline{\nu(\p D^k)} \cong \overline{\nu S^{k - 1}} \cong \p D^k \times D^{n - k} \] \begin{definition}[handle attachment]\index{handle attachment} The attachment of a \(k\)-handle \(H^n_k\) to an \(n\)-manifold \(X\) to product an \(n\)-manifold \(X'\) is induced by a \emph{\(k\)-handle attachment} cobordism \(Z\) from \(- \b X\) to \(\b X'\). \[ Z = (\b X \times I) \cup_{\text{a.r.}} H^n_r. \] We have \[ \p X' \cong (\p X \setminus \text{a.r.} (H^n_k)) \cup (\text{b.r.} (H^n_k)). \] \end{definition} As \(\p X \times I\) deformation retracts to \(\p X\), we have \[ X' \cong X \cup Z \cong X \cup H^n_k. \] \begin{convention} We usually say that we attach a handle along the \emph{core} of the attaching region. \end{convention} \begin{definition}[attaching/belt sphere] \begin{align*} \text{attaching region} (H^n_k) &\cong \text{core}(\text{attaching region}(H^n_k)) \\ &\cong \text{core}(\p D^k \times D^{n - k}) \\ &\cong \p D^k \end{align*} \begin{align*} \text{belt region} (H^n_k) &\cong \text{core}(\text{belt region}(H^n_k)) \\ &\cong \p D^{n - k} \end{align*} \end{definition} Convention reexpressed: to attach a \(k\)-handle, we specify where the \emph{attaching sphere} will be glued. Morse interpretation, revisited \begin{definition}[gradient] Choose a Riemannian metric \(g\) on a smooth \(n\)-manifold \(X\). Let \(f: X \to \R\) be a smooth function. the gradient \(\grad f \in \Gamma(TX)\) of \(f\) is the vector field satisfying \[ g(\grad f, V) = df(V) \] for \(V \in \text{Vect} X = \Gamma(TX)\). Locally, \[ g_x((\grad f)_x, V_x) = df_x(V_x). \] In local coordinates, \[ \grad f = \sum g^{ik} \frac{\partial f}{\partial x^k}e_i \] where \(e_i = \frac{\partial }{\partial x^i}\). \end{definition} Idea: invariant object from partials of \(f\), \(df = \sum \frac{\partial f}{\partial x^i} \w dx^i\). To get a vector field, need a bilinear form to dualise \(df\). \(\grad f\) comes from bilinear form (metric), and Hamiltonian vector field \(f\) comes from symplectic form. Moral (Morse theorey intepretation) Q: In what sense \(H^n_k \cong \overline{\nu(x)}\), \(x \in \text{crit} f, \text{ind}_x f = k\)? 
A: Gradient flow at the boundary of \(H^n_k\): \(\grad f\) flows into attaching region \(H^n_k\), into \(x \in \text{crit} f\) in \(k\) directions. \(\grad f\) flows out of belt region \(H^n_k\), out of \(x\) in \(n - k\) directions. For example, for \[ f = - \sum_{i = 1}^k x_i^2 + \sum_{j = k + 1}^n x_j^2 \] \printindex \end{document} % https://www.dpmms.cam.ac.uk/~sr727/2019_3manifolds
{ "alphanum_fraction": 0.6743906387, "avg_line_length": 42.228515625, "ext": "tex", "hexsha": "f50c53e67cca704a129156a20382d0a1daed7801", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "geniusKuang/tripos", "max_forks_repo_path": "III/3-manifolds.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "geniusKuang/tripos", "max_issues_repo_path": "III/3-manifolds.tex", "max_line_length": 567, "max_stars_count": 27, "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_path": "III/3-manifolds.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "num_tokens": 7170, "size": 21621 }
\documentclass[a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{hyperref} \usepackage{tabularx} \usepackage{longtable} \usepackage{calc} \usepackage{graphicx} \hypersetup{ colorlinks=true, linkcolor=blue, urlcolor=blue, % citecolor=darkorange } % \usepackage{showframe} \setlength{\parindent}{0em} \setlength{\parskip}{1em} \setlength{\marginparwidth}{0em} \setlength{\textwidth}{\paperwidth} \addtolength{\textwidth}{-2in} \setlength{\textheight}{\paperheight} \addtolength{\textheight}{-2in} \setlength{\topmargin}{0em} \setlength{\headheight}{0em} \setlength{\headsep}{0em} \setlength{\oddsidemargin}{0em} \setlength{\marginparsep}{0em} \newlength{\colauthlen} \setlength{\colauthlen}{4cm} \newlength{\colcontriblen} \setlength{\colcontriblen}{5.5cm} \newlength{\minorskip} \setlength{\minorskip}{0.5cm} \newlength{\majorskip} % \setlength{\majorskip}{0cm} \setlength{\majorskip}{1.7\parskip} % marginparwidth \begin{document} % \maketitle \section*{Authors Contributions Statement} \thispagestyle{empty} This document lists the authors contributions to the research item described below. The contributions are described using \href{https://www.casrai.org/credit.html}{CRediT}, a standardized taxonomy developed and supported by \href{https://www.casrai.org/}{CASRAI}. CRediT (Contributor Roles Taxonomy) is high-level taxonomy, including 14 roles, that can be used to represent the roles typically played by contributors to scientific scholarly output. The roles describe each contributor’s specific contribution to the scholarly output. \vspace{\majorskip} \input{../files/meta.txt} % \vspace{\minorskip} \input{../files/contrib.txt} \vspace{\majorskip} \textit{All authors agree with the above declaration of contributions.} \\[0.3\minorskip] \today % \begin{longtable}[l]{@{}p{\colauthlen}@{\qquad}l@{}} % \today & \textit{All authors agree with the above declaration of contributions.} % \end{longtable} \end{document}
{ "alphanum_fraction": 0.7637195122, "avg_line_length": 26.5945945946, "ext": "tex", "hexsha": "bf0efa43c1974b521f93c27c38edee677c6feade", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ae8ec1dfed4d7e43050b6a3c81744f175819b8b9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "romain-jacob/credit-report", "max_forks_repo_path": "src/credit/CRediT.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ae8ec1dfed4d7e43050b6a3c81744f175819b8b9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "romain-jacob/credit-report", "max_issues_repo_path": "src/credit/CRediT.tex", "max_line_length": 269, "max_stars_count": null, "max_stars_repo_head_hexsha": "ae8ec1dfed4d7e43050b6a3c81744f175819b8b9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "romain-jacob/credit-report", "max_stars_repo_path": "src/credit/CRediT.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 588, "size": 1968 }
\subsubsection{\stid{6.03} Sandia ATDM Software Ecosystem and Delivery-- OS/On-Node Runtime} \paragraph{Overview} This project is part of the NNSA/ASC program and is primarily focused on operating system and runtime system (OS/R) technology development and evaluation. The project focuses on the design, implementation, and evaluation of OS/R interfaces, mechanisms, and policies supporting the efficient execution of the ATDM application codes on next-generation ASC platforms. Priorities in this area include the development of lightweight tasking techniques that integrate network communication, interfaces between the runtime and OS for management of critical resources (including multi-level memory, non-volatile memory, and network interfaces), portable interfaces for managing power and energy, and resource isolation strategies at the operating system level that maintain scalability and performance while providing a more full-featured set of system services. The OS/R technologies developed by this project will be evaluated in the context of ATDM application codes running at large-scale on ASC platforms. Through close collaboration with vendors and the broader community, the intention is to drive the technologies developed by this project into vendor-supported system software stacks and gain wide adoption throughout the HPC community. \paragraph{Key Challenges} Key challenges for this project include: \begin{itemize} \item {\bf Developing best practices for the use of containers and virtualization technology to support ATDM applications and workloads} Containers are gaining popularity as a way to package applications and virtualize the underlying OS to allow a set of executables built for one platform to be run unmodified on a different platform. There are several different approaches to building and deploying containers, each with differing sets of capabilities and features. \item {\bf Characterizing applications use of MPI and sensivity to system noise} Understanding how applications use MPI and its associated network resources requires both application- and hardware-level information that must be coordinated on time scales of less than a microsecond. It is also extremely difficult to isolate the sources of system noise and characterize the non-local side effects of unplanned detours that interrupt application execution flow. \item {\bf Contributing to the OpenMP specification and MPI standard} To prepare for Exascale and to ensure that our ASC mission applications are well supported by these industry-standard programming models, we represent Sandia on the OpenMP Langauge Committee and MPI Forum. We also work with vendors to ensure quality implementations of these standards that meet the needs of ASC. \end{itemize} \paragraph{Solution Strategy} The strategy for containers and virtualization is to evaluate the different technology options using ATDM applications and workflows and compare the results against a set of evaluation criteria. In order to characterize applications use of MPI and sensitivity to system noise, this project has developed a simulation environment that can be used to track MPI and network resource usage. This project is also using lightweight operating systems, which are virtually devoid of system noise, help understand how applications, especially those employing an ATM programming model, are impacted by OS noise. \paragraph{Recent Progress} The team recently completed successful containerization of the NALU computational fluid dynamics application. 
NALU is a good proxy for Sandia's production applications since it uses similar dependencies and components such as Trilinos and Kokkos but is not a restricted code. NALU also serves as a basis for the ExaWind ECP application. The container was developed on a desktop and deployed on one of Sandia's Commodity Technology Systems (CTS). A journal article with analyses of MPI queue behavior observed during executions of Sandia mini-apps, as well as LAMMPS and CTH, was published in {\it Parallel Computing}~\cite{Ferreira:Characterizing:2018}. The techniques were also applied to SPARC, one of Sandia's ATDM applications, and an expanded tech report version including the SPARC results was prepared for the SPARC team. \paragraph{Next Steps} This year, a full Sandia ASC mission application will be containerized. We continue to participate in the OpenMP Language Committee and the MPI Forum. Additionally, the team is participating in an ECP working group on container technology and using the results of the evaluation to guide future activities in this area.
{ "alphanum_fraction": 0.7936276551, "avg_line_length": 48.02, "ext": "tex", "hexsha": "162d9d9d72fd5cab9d1c4a53cadf568d0ffd0121", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9563f23b335c3cda19a239a1a8e9086bb682a2c3", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Franckcappello/ECP-ST-CAR-PUBLIC", "max_forks_repo_path": "projects/2.3.6-NNSA/2.3.6.03-SNL-ATDM/2.3.6.03b-SNL-ATDM-Ecosystem.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9563f23b335c3cda19a239a1a8e9086bb682a2c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Franckcappello/ECP-ST-CAR-PUBLIC", "max_issues_repo_path": "projects/2.3.6-NNSA/2.3.6.03-SNL-ATDM/2.3.6.03b-SNL-ATDM-Ecosystem.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "9563f23b335c3cda19a239a1a8e9086bb682a2c3", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Franckcappello/ECP-ST-CAR-PUBLIC", "max_stars_repo_path": "projects/2.3.6-NNSA/2.3.6.03-SNL-ATDM/2.3.6.03b-SNL-ATDM-Ecosystem.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 951, "size": 4802 }
\section{Solution Overview} \label{sec:solution_overview} Figure~\ref{figure:workflow} illustrates the workflow of our error detection system ED2. The system takes a dirty dataset $D$ as input~\ding{182}. Based on this dirty dataset, the \emph{Feature Extractor} generates content and metadata features $\rho$ for each data cell. The \emph{Feature Extractor} combines information of all cells of the corresponding tuple and the corresponding column to one feature vector for each data cell~\ding{183}. The feature representation is discussed in detail in Section~\ref{sec:features}. With the help of the \emph{Initializer}~\ding{184}, the user provides an initial set of labeled cells for both classes, erroneous and correct, for all columns. We train one classifier for each column. This method is described in Section~\ref{sec:init}. Based on the labeled data provided by the user~\ding{185}, ED2 uses cross-validation to find the optimal set of hyperparameters~\ding{186}. ED2 uses these hyperparameters to train an error detection classifier on all available labeled data cells for each column. Afterward, ED2 applies this model to all data cells of the corresponding column and estimates the probability of a cell to be erroneous~\ding{187}. ED2 leverages these predictions $P$ to augment the feature matrix~\ding{188}. This feature augmentation allows the models of different columns to share their knowledge with each other, as we explain in Section~\ref{sec:error_correlation_features}. When all models are initialized~\ding{187}, the actual active learning process starts. Our two-dimensional active learning policy is implemented via the \emph{Column Selector} component and the \emph{Batch Generator} component. As we train one classifier per column, the \emph{Column Selector} has to choose the column that should be labeled next~\ding{189}. In Section~\ref{sec:order}, we describe how the \emph{Column Selector} leverages the results of the models to make this decision. Then, the \emph{Batch Generator} selects the most promising cells for the given column~\ding{190} as described in Section~\ref{sec:uncertaintysampling}. For each data cell in the batch, its corresponding tuple is presented one by one to the user~\ding{185}. Based on the complete tuple, the user decides whether or not the marked cell value is erroneous. After the batch of cells is labeled, the new labels are added to the training set of the corresponding column, hyperparameters are optimized~\ding{186}, and the classifier is retrained on the new data~\ding{187}. From this point on, the process continues and repeats the steps from~\ding{185} to~\ding{190} in a loop. During the entire active learning process, the \emph{Status Report} provides the user with a summary of the current state of the dataset. The \emph{Status Report} contains information that correlate with the convergence of the models, such as the certainty distribution for each column and current predictions of the least certain cells as described in Section~\ref{sec:stopAL}. The active learning loop continues until the user is satisfied with the \emph{Status Report}. Then, the system applies the latest classification models for each column, marks the errors, and returns the result to the user. In the following Section, we explain the classification algorithm in more detail, describing the features that we use to model the data and the active learning strategy that we employ to converge to a satisfactory state quickly.
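Before turning to those details, the following sketch shows one round of the two-dimensional active learning loop described above. It is only an illustration of the idea, not ED2's actual implementation: the data structures and names are ours, we assume scikit-learn-style classifiers with \texttt{fit} and \texttt{predict\_proba}, we use a simple distance-to-the-decision-boundary measure of uncertainty, and we omit the per-round hyperparameter optimization and the \emph{Status Report}.
\begin{verbatim}
import numpy as np

def active_learning_round(models, base_features, prev_probs, labeled, batch_size):
    # Feature augmentation: append every column's previous error-probability
    # estimates to each column's own feature matrix (knowledge sharing).
    shared = np.column_stack([prev_probs[c] for c in sorted(models)])
    augmented = {c: np.hstack([base_features[c], shared]) for c in models}

    # Retrain each column's classifier on its labeled cells and re-estimate
    # the probability that each cell of the column is erroneous.
    probs = {}
    for c, clf in models.items():
        rows, y = labeled[c]              # cell indices and labels from the user
        clf.fit(augmented[c][rows], y)
        probs[c] = clf.predict_proba(augmented[c])[:, 1]   # P(label 1 = error)

    # Column selection: pick the column whose predictions are least certain.
    def mean_uncertainty(p):
        return float(np.mean(1.0 - 2.0 * np.abs(p - 0.5)))
    col = max(models, key=lambda c: mean_uncertainty(probs[c]))

    # Batch generation: the least certain, not yet labeled cells of that column.
    ranked = np.argsort(np.abs(probs[col] - 0.5))
    already = set(labeled[col][0])
    batch = [int(i) for i in ranked if i not in already][:batch_size]
    return col, batch, probs              # the batch is shown to the user for labels
\end{verbatim}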
{ "alphanum_fraction": 0.7980604678, "avg_line_length": 116.8666666667, "ext": "tex", "hexsha": "600f62aa6bbd03a2c5139d2b0e4ad24bcebffb34", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-25T15:16:16.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-25T15:16:16.000Z", "max_forks_repo_head_hexsha": "7d8949fc8fb00b6285c7c220dbda7451dc152e44", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "BigDaMa/error-generator", "max_forks_repo_path": "documents/sections/solution_overview.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "7d8949fc8fb00b6285c7c220dbda7451dc152e44", "max_issues_repo_issues_event_max_datetime": "2018-11-21T13:18:01.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-20T15:08:23.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "BigDaMa/error-generator", "max_issues_repo_path": "documents/sections/solution_overview.tex", "max_line_length": 378, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7d8949fc8fb00b6285c7c220dbda7451dc152e44", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "BigDaMa/error-generator", "max_stars_repo_path": "documents/sections/solution_overview.tex", "max_stars_repo_stars_event_max_datetime": "2019-06-19T05:44:55.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-11T07:52:51.000Z", "num_tokens": 782, "size": 3506 }
\documentclass[10pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \title{Soil Carbon Models} \date{11/21/2016} \begin{document} \maketitle \section*{Century Model} The flow diagram for the century model proposed by Parton et al. (1988) is shown in Figure 1. In this document, we re-write the model in terms of a set of differential equations. In this model, we have 5 pools, each with a different turnover time: \begin{itemize} \item[pool 1:] \makebox[2.5cm]{Structural C,\hfill} $\kappa_1 \approx 1/3$, \item[pool 2:] \makebox[2.5cm]{Metabolic C,\hfill} $\kappa_2 \approx 1/0.5$, \item[pool 3:] \makebox[2.5cm]{Active Soil C,\hfill} $\kappa_3 \approx 1/1.5$, \item[pool 4:] \makebox[2.5cm]{Slow Soil C,\hfill} $\kappa_4 \approx 1/25$, \item[pool 5:] \makebox[2.5cm]{Passive Soil C,\hfill} $\kappa_5 \approx 1/1000$, \end{itemize} where $\kappa$ denotes the decay rate, which is defined as 1 over the turnover time. We denote the transfer rate from pool $j$ to pool $i$ by $r_{ij}$. The transfer rates are parameterized as a ratio of the decay rate: $r_{ij} = \alpha_{ij} \kappa_j$. From Figure 1, we have: \begin{align*} \alpha_{31} & = (1-\text{A})(1-0.45 \times \text{SL} - 0.55 \times \text{BL}), \\ \alpha_{41} & = 0.7 \times \text{A}, \\ \alpha_{32} & = 0.45, \\ \alpha_{43} & = 1 - F(\text{T}) - 0.004, \\ \alpha_{53} & = 0.004, \\ \alpha_{34} & = 0.42, \\ \alpha_{54} & = 0.03, \\ \alpha_{35} & = 0.45, \end{align*} where `SL' is the surface litter, `BL' is the soil litter, `A' is the Lignin fraction, `T' is the soil silt + clay content, and $F(\text{T}) = 0.85 - 0.68 \times \text{T}$. The rest of the transfer coefficients are zero. For each pool, we can write the following differential equation: \begin{equation*} \frac{d C_i(t)}{dt} = I_i(t) -\kappa_i C_i(t) + \sum_{j\neq i} \alpha_{ij} \kappa_j C_j(t), \end{equation*} where $I_i(t)$ is the external input flow to pool $i$. As far as I understand, in the century model, the input flows are due to the plant residues and only enter the first two pools. Denoting the total flow due to plant residue by $I$, we have: \begin{align*} I_1(t) & = (1-\text{L/N})\times I, \\ I_2(t) & = \text{L/N}\times I, \end{align*} where L/N denotes the Lignin to Nitrogen ratio. Combining all these differential equations into a single formula, we get: \begin{equation*} \frac{dC(t)}{dt} = \left( {\begin{array}{c} (1-\text{L/N})\times I \\ \text{L/N}\times I \\ 0 \\ 0 \\ 0 \end{array} } \right) + \left( {\begin{array}{ccccc} -\kappa_1 & 0 & 0 & 0 & 0 \\ 0 & -\kappa_2 & 0 & 0 & 0 \\ \alpha_{31}\kappa_1 & \alpha_{32}\kappa_2 & -\kappa_3 & \alpha_{34}\kappa_4 & \alpha_{35}\kappa_5 \\ \alpha_{41}\kappa_1 & 0 & \alpha_{43}\kappa_3 & -\kappa_4& 0 \\ 0 & 0 & \alpha_{53}\kappa_3 & \alpha_{54}\kappa_4 & -\kappa_5 \end{array} } \right) C(t). \end{equation*} \begin{figure}[!h] \centering \includegraphics[scale=0.85]{century} \caption{Flow diagram of century model} \end{figure} \section*{CN Model} The flow diagram of the CN model, developed by Thornton et al., is shown in Figure 2.
There are six pools: \begin{itemize} \item[pool 1:] \makebox[1.5cm]{Lit1,\hfill} $\kappa_1 \approx 0.7$, \item[pool 2:] \makebox[1.5cm]{Lit2,\hfill} $\kappa_2 \approx 0.07$, \item[pool 3:] \makebox[1.5cm]{Lit3,\hfill} $\kappa_3 \approx 0.014$, \item[pool 4:] \makebox[1.5cm]{SOM1,\hfill} $\kappa_4 \approx 0.07$, \item[pool 5:] \makebox[1.5cm]{SOM2,\hfill} $\kappa_5 \approx 0.014$, \item[pool 6:] \makebox[1.5cm]{SOM3,\hfill} $\kappa_6 \approx 0.0005$. \end{itemize} The transfer rate coefficients are: \begin{equation*} \alpha_{41} = 0.61, \quad \alpha_{52} = 0.45, \quad \alpha_{63} = 0.71, \quad \alpha_{54} = 0.72, \quad \alpha_{65} = 0.56. \end{equation*} So, \begin{equation*} \frac{dC(t)}{dt} = \left( {\begin{array}{c} I_1 \\ I_2 \\ I_3 \\ 0 \\ 0 \\ 0 \end{array} } \right) + \left( {\begin{array}{cccccc} -\kappa_1 & 0 & 0 & 0 & 0 & 0\\ 0 & -\kappa_2 & 0 & 0 & 0 & 0\\ 0 & 0 & -\kappa_3 & 0 & 0 & 0 \\ \alpha_{41}\kappa_1 & 0 & 0 & -\kappa_4 & 0 & 0 \\ 0 & \alpha_{52}\kappa_2 & 0 & \alpha_{54}\kappa_4 & -\kappa_5 & 0 \\ 0 & 0 & \alpha_{63}\kappa_3 & 0 & \alpha_{65}\kappa_5 & -\kappa_6 \end{array} } \right) C(t). \end{equation*} \begin{figure}[!h] \centering \includegraphics[scale=0.9]{cn} \caption{Flow diagram of CN model} \end{figure} \end{document}
\section{Anomaly Detection using Particle Level EFPs} \label{sec:anomaly} \textcolor{yellow}{ Write about the following here: \begin{itemize} \item How EFPs preserve the inner rotational symmetry and therefore do not encode useless information. \item The sampling technique used to escape the computational complexity. \item An example of how this adds to the resilience against clustering errors. \end{itemize} } \textcolor{red}{Energy Flow Polynomials will be used as inputs to auto-encoders, which are easier to construct}~\cite{tuhin_autoencoder}. \textcolor{red}{The proof of concept will be shown using a toy model similar to the one proposed in the abstract of the LDA paper, which performs similar Beyond-Standard-Model searches (like the toy vector-scalar boson model)}~\cite{lda_jets}.
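A minimal sketch of the intended anomaly-detection step, assuming the EFPs for each jet have already been computed into a fixed-length feature vector (the feature dimension, layer sizes, and training settings below are illustrative placeholders rather than final choices):
\begin{verbatim}
import torch
import torch.nn as nn

N_EFP = 102  # placeholder: number of EFP features per jet

class EFPAutoencoder(nn.Module):
    """Small dense autoencoder over per-jet EFP feature vectors."""
    def __init__(self, n_in=N_EFP, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, x):
    """Per-jet reconstruction error; large values flag anomalous jets."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Train on background (Standard-Model-like) jets only.
model = EFPAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
background = torch.randn(1024, N_EFP)  # stand-in for real EFP values
for epoch in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(background), background)
    loss.backward()
    optimizer.step()
\end{verbatim}
Jets whose reconstruction error falls well above the background distribution would then be flagged as anomalous.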
% ---------------------------------------------------------------------------- \typeout{--------------- The SYNTAX of soar programs ------------------------} \chapter{The Syntax of Soar Programs} %\label{performance} \label{SYNTAX} \index{syntax!productions|see{production!syntax}} \index{syntax!working memory elements|see{working memory element!syntax}} \index{syntax!preferences|see{preference!syntax}} This chapter describes in detail the syntax of elements in working memory, preference memory, and production memory, and how impasses and I/O are represented in working memory and in productions. Working memory elements and preferences are created as Soar runs, while productions are created by the user or through chunking. The bulk of this chapter explains the syntax for writing productions. The first section of this chapter describes the structure of working memory elements in Soar; the second section describes the structure of preferences; and the third section describes the structure of productions. The fourth section describes the structure of impasses. An overview of how input and output appear in working memory is presented in the fifth section; the full discussion of Soar I/O can be found in the \textit{SML Quick Start Guide}. This chapter assumes that you understand the operating principles of Soar, as presented in Chapter \ref{ARCH}. % ---------------------------------------------------------------------------- \section{Working Memory} \label{SYNTAX-wm} \index{working memory!syntax} \index{working memory element!syntax} Working memory contains \emph{working memory elements} (WME's). As described in Section \ref{ARCH-wm}, WME's can be created by the actions of productions, the evaluation of preferences, the Soar architecture, and via the input/output system. \index{identifier} \index{attribute} \index{value} \index{^ (carat symbol)} A WME is a list consisting of three symbols: an {\em identifier}, an \emph{attribute}, and a \emph{value}, where the entire WME is enclosed in parentheses and the attribute is preceded by an up-arrow (\carat ). A template for a working memory element is: \begin{verbatim} (identifier ^attribute value) \end{verbatim} The identifier is an internal symbol, generated by the Soar architecture as it runs. The attribute and value can be either identifiers or constants; if they are identifiers, there are other working memory elements that have that identifier in their first position. As the previous sentences demonstrate, identifier is used to refer both to the first position of a working memory element, as well as to the symbols that occupy that position. % ---------------------------------------------------------------------------- \subsection{Symbols} \label{SYNTAX-wm-symbols} Soar distinguishes between two types of working memory symbols: \emph{identifiers} and {\em constants}. \index{symbol} \index{identifier} \textbf{Identifiers: } An identifier is a unique symbol, created at runtime when a new object is added to working memory. The names of identifiers are created by Soar, and consist of a single uppercase letter followed by a string of digits, such as \soar{G37} or \soar{O22}. (The Soar user interface will also allow users to specify identifiers using lowercase letters, for example, when using the \texttt{print} command. But internally, they are actually uppercase letters.) 
\index{constant} \textbf{Constants: } There are three types of constants: integers, floating-point, and symbolic constants:\vspace{-10pt} \index{constant} \begin{itemize} \index{integer} \item Integer constants (numbers). The range of values depends on the machine and implementation you're using, but it is at least $[$-2 billion..2 billion$]$.\vspace{-8pt} \index{floating-point constants} \item Floating-point constants (numbers). The range depends on the machine and implementation you're using.\vspace{-8pt} \item Symbolic constants. These are symbols with arbitrary names. A constant can use any combination of letters, digits, or \verb.$%&*+-/:<=>?_. Other characters (such as blank spaces) can be included by surrounding the complete constant name with vertical bars: \soar{|This is a constant|}. (The vertical bars aren't part of the name; they're just notation.) A vertical bar can be included by prefacing it with a backslash inside surrounding vertical bars: \verb.|Odd-symbol\|name|.\vspace{-8pt} \end{itemize} \index{attribute} \index{value} \index{constant} \index{symbolic constant} Identifiers should not be confused with constants, although they may ``look the same''; identifiers are generated (by the Soar architecture) at runtime and will not necessarily be the same for repeated runs of the same program. Constants are specified in the Soar program and will be the same for repeated runs. Even when a constant ``looks like'' an identifier, it will not act like an identifier in terms of matching. A constant is printed surrounded by vertical bars whenever there is a possibility of confusing it with an identifier: \soar{|G37|} is a constant while \soar{G37} is an identifier. To avoid possible confusion, you should not use letter-number combinations as constants or for production names. \subsection{Objects} Recall from Section \ref{ARCH-wm} that all WME's that share an identifier are collectively called an \textit{object} in working memory. The individual working memory elements that make up an object are often called \emph{augmentations}, because they augment the object. A template for an object in working memory is: \begin{verbatim} (identifier ^attribute-1 value-1 ^attribute-2 value-2 ^attribute-3 value-3... ^attribute-n value-n) \end{verbatim} For example, if you run Soar with the example blocks-world program described in Appendix \ref{BLOCKSCODE}, after one elaboration cycle, you can look at the top-level state by using the \soar{print} command: \label{example:prints1} \begin{verbatim} soar> print s1 (S1 ^io I1 ^ontop O2 ^ontop O3 ^ontop O1 ^problem-space blocks ^superstate nil ^thing B3 ^thing T1 ^thing B1 ^thing B2 ^type state) \end{verbatim} \vspace{12pt} The attributes of an object are printed in alphabetical order to make it easier to find a specific attribute. \index{attribute!multi-valued attribute} \index{multi-attributes|see{attribute!multi-valued attribute}} Working memory is a set, so that at any time, there are never duplicate versions of working memory elements. However, it is possible for several working memory elements to share the same identifier and attribute but have different values. Such attributes are called multi-valued attributes or \emph{multi-attributes}. For example, state \soar{S1}, above, has two attributes that are multi-valued: \soar{thing} and \soar{ontop}. 
% ---------------------------------------------------------------------------- \subsection{Timetags} \index{timetag} \index{working memory element!timetag|see{timetag}} When a working memory element is created, Soar assigns it a unique integer \textit{timetag}. The timetag is a part of the working memory element, and therefore, WME's are actually quadruples, rather than triples. However, the timetags are not represented in working memory and cannot be matched by productions. The timetags are used to distinguish between multiple occurrences of the same WME. As preferences change and elements are added to and deleted from working memory, it is possible for a WME to be created, removed, and created again. The second creation of the WME --- which bears the same identifier, attribute, and value as the first WME --- is \textit{different}, and therefore is assigned a different timetag. This is important because a production will fire only once for a given instantiation, and the instantiation is determined by the timetags that match the production and not by the identifier-attribute-value triples. To look at the timetags of WMEs, the \soar{wmes} command can be used: \begin{verbatim} soar> wmes s1 (3: S1 ^io I1) (10: S1 ^ontop O2) (9: S1 ^ontop O3) (11: S1 ^ontop O1) (4: S1 ^problem-space blocks) (2: S1 ^superstate nil) (6: S1 ^thing B3) (5: S1 ^thing T1) (8: S1 ^thing B1) (7: S1 ^thing B2) (1: S1 ^type state) \end{verbatim} \vspace{12pt} This shows all the individual augmentations of \soar{S1}, each preceded by an integer \textit{timetag}. % ---------------------------------------------------------------------------- \subsection{Acceptable preferences in working memory} \label{SYNTAX-wm-preferences} \index{working memory!acceptable preference} \index{preference!acceptable} The acceptable preferences for the operator augmentations of states appear in working memory as identifier-attribute-value-preference quadruples. No other preferences appear in working memory. A template for an acceptable preference in working memory is: \begin{verbatim} (identifier ^operator value +) \end{verbatim} \vspace{12pt} For example, if you run Soar with the example blocks-world program described in Appendix \ref{BLOCKSCODE}, after the first operator has been selected, you can again look at the top-level state using the \soar{wmes} command: \begin{verbatim} soar> wmes s1 (3: S1 ^io I1) (9: S1 ^ontop O3) (10: S1 ^ontop O2) (11: S1 ^ontop O1) (48: S1 ^operator O4 +) (49: S1 ^operator O5 +) (50: S1 ^operator O6 +) (51: S1 ^operator O7 +) (54: S1 ^operator O7) (52: S1 ^operator O8 +) (53: S1 ^operator O9 +) (4: S1 ^problem-space blocks) (2: S1 ^superstate nil) (5: S1 ^thing T1) (8: S1 ^thing B1) (6: S1 ^thing B3) (7: S1 ^thing B2) (1: S1 ^type state) \end{verbatim} \vspace{12pt} The state \soar{S1} has six augmentations of acceptable preferences for different operators (\soar{O4} through \soar{O9}). These have plus signs following the value to denote that they are acceptable preferences. The state has exactly one operator, \soar{O7}. This state corresponds to the illustration of working memory in Figure \ref{fig:ab-wmem2}. \index{preference} \index{object} % ---------------------------------------------------------------------------- \subsection{Working Memory as a Graph} \index{link} \index{identifier} \index{object} Not only is working memory a set, it is also a graph structure where the identifiers are nodes, attributes are links, and constants are terminal nodes.
Working memory is not an arbitrary graph, but a graph rooted in the states. Therefore, all WMEs are \emph{linked} either directly or indirectly to a state. The impact of this constraint is that all WMEs created by actions are linked to WMEs tested in the conditions. The link is one-way, from the identifier to the value. Less commonly, the attribute of a WME may be an identifier. \begin{figure} \insertfigure{o43net}{4in} \insertcaption{A semantic net illustration of four objects in working memory.} \label{fig:o43net} \end{figure} Figure \ref{fig:o43net} illustrates four objects in working memory; the object with identifier \soar{X44} has been linked to the object with identifier \soar{O43}, using the attribute as the link, rather than the value. The objects in working memory illustrated by this figure are: \begin{verbatim} (O43 ^isa apple ^color red ^inside O53 ^size small ^X44 200) (O87 ^isa ball ^color red ^inside O53 ^size big) (O53 ^isa box ^size large ^color orange ^contains O43 O87) (X44 ^unit grams ^property mass) \end{verbatim} \vspace{12pt} In this example, object \soar{O43} and object \soar{O87} are both linked to object \soar{O53} through \soar{(O53 \carat contains O43)} and \soar{(O53 \carat contains O87)}, respectively (the \soar{contains} attribute is a multi-valued attribute). Likewise, object \soar{O53} is linked to object \soar{O43} through \soar{(O43 \carat inside O53)} and linked to object \soar{O87} through \soar{(O87 \carat inside O53)}. Object \soar{X44} is linked to object \soar{O43} through \soar{(O43 \carat X44 200)}. Links are transitive so that \soar{X44} is linked to \soar{O53} (because \soar{O43} is linked to \soar{O53} and \soar{X44} is linked to \soar{O43}). However, since links are not symmetric, \soar{O53} is not linked to \soar{X44}. % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \section{Preference Memory} \label{SYNTAX-prefmem} \index{preference memory!syntax} \index{preference!syntax} Preferences are created by production firings and express the relative or absolute merits for selecting an operator for a state. When preferences express an absolute rating, they are identifier-attribute-value-preference quadruples; when preferences express relative ratings, they are identifier-attribute-value-preference-value quintuples. For example, \begin{verbatim} (S1 ^operator O3 +) \end{verbatim} is a preference that asserts that operator O3 is an acceptable operator for state S1, while \begin{verbatim} (S1 ^operator O3 > O4) \end{verbatim} is a preference that asserts that operator O3 is a better choice for the operator of state S1 than operator O4. The semantics of preferences and how they are processed were described in Section \ref{ARCH-prefmem}, which also described each of the eleven different types of preferences. Multiple production instantiations may create identical preferences. Unlike working memory, preference memory is not a set: Duplicate preferences are allowed in preference memory.
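Note that the same preference symbol can express either an absolute or a relative rating, depending on whether a second value follows it. For example, \begin{verbatim} (S1 ^operator O3 >) \end{verbatim} is a unary best preference for operator O3, whereas the better preference above compares O3 against O4. The full set of preference types, as they may appear in the actions of productions, is listed in Section \ref{SYNTAX-pm-action}.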
% ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \section{Production Memory} \label{SYNTAX-pm} \index{production!syntax} \index{production memory!syntax} \nocomment{XXXX start here with indexing} Production memory contains productions, which can be loaded in by a user (typed in while Soar is running or \soar{source}d from a file) or generated by chunking while Soar is running. Productions (both user-defined productions and chunks) may be examined using the \soar{print} command, described in Section \ref{print} on page \pageref{print}. Each production has three required components: a name, a set of conditions (also called the left-hand side, or LHS), and a set of actions (also called the right-hand side, or RHS). There are also two optional components: a documentation string and a type. Syntactically, each production consists of the symbol \soar{sp}, followed by: an opening curly brace, \soar{\{}; the production's name; the documentation string (optional); the production type (optional); comments (optional); the production's conditions; the symbol \soar{-->} (literally: dash-dash-greaterthan); the production's actions; and a closing curly brace, \soar{\}}. Each element of a production is separated by white space. Indentation and linefeeds are used by convention, but are not necessary. \begin{verbatim} sp {production-name Documentation string :type CONDITIONS --> ACTIONS } \end{verbatim} \vspace{12pt} \begin{figure} \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^thing <thing1> {<> <thing1> <thing2>} ^ontop <ontop>) (<thing1> ^type block ^clear yes) (<thing2> ^clear yes) (<ontop> ^top-block <thing1> ^bottom-block <> <thing2>) --> (<s> ^operator <o> +) (<o> ^name move-block ^moving-block <thing1> ^destination <thing2>)} \end{verbatim} \insertcaption{An example production from the example blocks-world task.} \label{fig:ex-prod} \end{figure} An example production, named ``\soar{blocks-world*propose*move-block}'', is shown in Figure \ref{fig:ex-prod}. This production proposes operators named \soar{move-block} that move blocks from one location to another. The details of this production will be described in the following sections. \subsubsection*{Conventions for indenting productions} Productions in this manual are formatted using conventions designed to improve their readability. These conventions are not part of the required syntax. First, the name of the production immediately follows the first curly bracket after the \soar{sp}. All conditions are aligned with the first letter after the first curly brace, and attributes of an object are all aligned. The arrow is indented to align with the conditions and actions, and the closing curly brace follows the last action. % ---------------------------------------------------------------------------- \subsection{Production Names} The name of the production is an almost arbitrary constant. (See Section \ref{SYNTAX-wm-symbols} for a description of constants.) By convention, the name describes the role of the production, but functionally, the name is just a label primarily for the use of the programmer. A production name should never be a single letter followed by numbers, which is the format of identifiers.
The convention for naming productions is to separate important elements with asterisks; the important elements that tend to appear in the name are:\vspace{-12pt} \begin{enumerate} \item The name of the task or goal (e.g., \texttt{blocks-world}).\vspace{-10pt} \item The name of the architectural function (e.g., \texttt{propose}).\vspace{-10pt} \item The name of the operator (or other object) at issue (e.g., \texttt{move-block}).\vspace{-10pt} \item Any other relevant details. \end{enumerate} This name convention enables one to have a good idea of the function of a production just by examining its name. This can help, for example, when you are watching Soar run and looking at the specific productions that are firing and retracting. Since Soar uses white space to delimit components of a production, if whitespace inadvertently occurs in the production name, Soar will complain that an open parenthesis was expected to start the first condition. \subsection{Documentation string (optional)} A production may contain an optional documentation string. The documentation string is enclosed in double quotes and appears after the name of the production and before the first condition (and may carry over to multiple lines). The documentation string allows the inclusion of internal documentation about the production; it will be printed out when the production is printed using the \soar{print} command. % ---------------------------------------------------------------------------- \subsection{Production type (optional)} A production may also include an optional \emph{production type}, which may specify that the production should be considered a default production (\soar{:default}) or a chunk (\soar{:chunk}), or may specify that a production should be given O-support (\soar{:o-support}) or I-support (\soar{:i-support}). Users are discouraged from using these types. These types are described in Section \ref{sp}, which begins on Page \pageref{sp}. There is one additional flag (\soar{:interrupt}) which can be placed at this location in a production. However, this flag does not specify a production type; rather, it signals that the production should be marked for special debugging capabilities. For more information, see Section \ref{sp} on Page \pageref{sp}. % ---------------------------------------------------------------------------- \subsection{Comments (optional)} \index{comments} Productions may contain comments, which are not stored in Soar when the production is loaded, and are therefore not printed out by the \soar{print} command. A comment is begun with a pound sign character \soar{\#} and ends at the end of the line. Thus, everything following the \soar{\#} is not considered part of the production, and comments that run across multiple lines must each begin with a \soar{\#}. For example: \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^thing <thing1> {<> <thing1> <thing2>} ^ontop <ontop>) (<thing1> ^type block ^clear yes) (<thing2> ^clear yes) # (<ontop> ^top-block <thing1> # ^bottom-block <> <thing2>) --> (<s> ^operator <o> +) (<o> ^name move-block # you can also use in-line comments ^moving-block <thing1> ^destination <thing2>)} \end{verbatim} When commenting out conditions or actions, be sure that all parentheses remain balanced outside the comment. \subsubsection*{External comments} Comments may also appear in a file with Soar productions, outside the curly braces of the \soar{sp} command.
Comments must either start a new line with a \soar{\#} or start with \soar{;\#}. In both cases, the comment runs to the end of the line. \begin{verbatim} # imagine that this is part of a "Soar program" that contains # Soar productions as well as some other code. source blocks.soar ;# this is also a comment \end{verbatim} % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \subsection{The condition side of productions (or LHS)} \label{SYNTAX-pm-conditions} %perf-cond \index{condition side} \index{LHS of production} \index{production!LHS} \index{production!condition} The condition side of a production, also called the left-hand side (or LHS) of the production, is a pattern for matching one or more WMEs. When all of the conditions of a production match elements in working memory, the production is said to be instantiated, and is ready to perform its action. The following subsections describe the condition side of a production, including predicates, disjunctions, conjunctions, negations, acceptable preferences for operators, and a few advanced topics. % A grammar for the % condition side is given in Appendix \ref{GRAMMARS}. % ---------------------------------------------------------------------------- \subsubsection{Conditions} \label{Conditions} \index{Conditions} The condition side of a production consists of a set of conditions. Each condition tests for the existence or absence (explained later in Section \ref{SYNTAX-pm-negated}) of working memory elements. Each condition consists of an open parenthesis, followed by a test for the identifier, and the tests for augmentations of that identifier, in terms of attributes and values. The condition is terminated with a close parenthesis. Thus, a single condition might test properties of a single working memory element, or properties of multiple working memory elements that constitute an object. \begin{verbatim} (identifier-test ^attribute1-test value1-test ^attribute2-test value2-test ^attribute3-test value3-test ...) \end{verbatim} The first condition in a production must match against a state in working memory. Thus, the first condition must begin with the additional symbol ``state''. All other conditions and actions must be \textit{linked} directly or indirectly to this condition. This linkage may be direct to the state, or it may be indirect, through objects specified in the conditions. If the identifiers of the actions are not linked to the state, a warning is printed when the production is parsed, and the production is not stored in production memory. In the actions of the example production shown in Figure \ref{fig:ex-prod}, the operator preference is directly linked to the state and the remaining actions are linked indirectly via the operator preference. Although all of the attribute tests in the template above are followed by value tests, it is possible to test for only the existence of an attribute and not test any specific value by just including the attribute and no value. Another exception to the above template is operator preferences, which have the following structure, where a plus sign follows the value test. \begin{verbatim} (state-identifier-test ^operator value1-test + ...) \end{verbatim} In the remainder of this section, we describe the different tests that can be used for identifiers, attributes, and values.
The simplest of these is a constant, where the constant specified in the attribute or value must match the same constant in a working memory element. % ---------------------------------------------------------------------------- \subsubsection{Variables in productions} \label{SYNTAX-pm-variables} \index{variables} Variables match against constants in working memory elements in the identifier, attribute, or value positions. Variables can be further constrained by additional tests (described in later sections) or by multiple occurrences in conditions. If a variable occurs more than once in the conditions of a production, the production will match only if the variables match the same identifier or constant. However, there is no restriction that prevents different variables from binding to the same identifier or constant. Because identifiers are generated by Soar at run time, it is impossible to include tests for specific identifiers in conditions. Therefore, variables are used in conditions whenever an identifier is to be matched. Variables also provide a mechanism for passing identifiers and constants which match in conditions to the action side of a rule. Syntactically, a variable is a symbol that begins with a left angle-bracket (i.e., \soar{<}), ends with a right angle-bracket (i.e., \soar{>}), and contains at least one alphanumeric symbol in between. In the example production in Figure \ref{fig:ex-prod}, there are five variables: \soar{<s>}, \soar{<thing1>}, \soar{<thing2>}, \soar{<ontop>}, and \soar{<o>}. The following table gives examples of legal and illegal variable names. \begin{tabular}{| l | l |} \hline \bf{Legal variables} & \bf{Illegal variables} \\ \hline \soar{<s>} & \soar{<>} \\ \soar{<1>} & \soar{<1} \\ \soar{<variable1>} & \soar{variable>} \\ \soar{<abc1>} & \soar{<a b>} \\ \hline \end{tabular} \vspace{10pt} % ---------------------------------------------------------------------------- \subsubsection{Predicates for values} \label{SYNTAX-pm-predicates} %perf-pred} \index{predicates} \index{=} \index{<>} \index{<} \index{<=} \index{>=} \index{>} \index{<=>} A test for an identifier, attribute, or value in a condition (whether constant or variable) can be modified by a preceding predicate. There are six predicates that can be used: \soar{<>, <=>, <, <=, >=, >}. \begin{tabular}{| l | l |} \hline \bf{Predicate} & \bf{Semantics of Predicate} \\ \hline \soar{<>} & Not equal. Matches anything except the value immediately \\ & following it. \\ \soar{<=>} & Same type. Matches any symbol that is the same type (identifier, \\ & integer, floating-point, non-numeric constant) as the value \\ & immediately following it. \\ \soar{<} & Numerically less than the value immediately following it. \\ \soar{<=} & Numerically less than or equal to the value immediately \\ & following it. \\ \soar{>=} & Numerically greater than or equal to the value immediately \\ & following it. \\ \soar{>} & Numerically greater than the value immediately following it.
\\ \hline \end{tabular} \vspace{10pt} \index{numeric comparisons} \index{type comparisons} \index{not equal test} The following table shows examples of legal and illegal predicates: \begin{tabular}{| l | l |} \hline \bf{Legal predicates} & \bf{Illegal predicates} \\ \hline \soar{> <valuex>} & \soar{> > <valuey>} \\ \soar{< 1} & \soar{1 >} \\ \soar{<=> <y>} & \soar{= 10} \\ \hline \end{tabular} \vspace{10pt} \subsubsection*{Example Production} \begin{verbatim} sp {propose-operator*to-show-example-predicate (state <s> ^car <c>) (<c> ^style convertible ^color <> rust) --> (<s> ^operator <o> +) (<o> ^name drive-car ^car <c>) } \end{verbatim} In this production, there must be a ``color'' attribute for the working memory object that matches \verb+<c>+, and the value of that attribute must not be ``rust''. % ---------------------------------------------------------------------------- \subsubsection{Disjunctions of values} \label{SYNTAX-pm-disjuncts} %perf-disj \index{disjunction of constants} \index{<< >>} A test for an identifier, attribute, or value may also be for a disjunction of constants. With a disjunction, there will be a match if any one of the constants is found in a working memory element (and the other parts of the working memory element matches). Variables and predicates may not be used within disjunctive tests. Syntactically, a disjunctive test is specified with double angle brackets (i.e., \soar{ <<} and \soar{>>}). There must be spaces separating the brackets from the constants. The following table provides examples of legal and illegal disjunctions: \begin{tabular}{| l | l |} \hline \bf{Legal disjunctions} & \bf{Illegal disjunctions} \\ \hline \soar{<< A B C 45 I17 >>} & \soar{<< <A> A >>} \\ \soar{<< 5 10 >>} & \soar{<< < 5 > 10 >>} \\ \soar{<< good-morning good-evening >>} & \soar{<<A B C >>} \\ \hline \end{tabular} \vspace{10pt} \subsubsection*{Example Production} For example, the third condition of the following production contains a disjunction that restricts the color of the table to \soar{red} or \soar{blue}: \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o> + ^table <t>) (<o> ^name move-block) (<t> ^type table ^color << red blue >> ) --> ... } \end{verbatim} \subsubsection*{Note} Disjunctions of complete conditions are not allowed in Soar. Multiple (similar) productions fulfill this role. % ---------------------------------------------------------------------------- \subsubsection{Conjunctions of values} \label{SYNTAX-pm-conjunctions} %perf-conj} \index{conjunctive!conditions} A test for an identifier, attribute, or value in a condition may include a conjunction of tests, all of which must hold for there to be a match. Syntactically, conjuncts are contained within curly braces (i.e., \soar{\{} and \soar{\}}). The following table shows some examples of legal and illegal conjunctive tests: \begin{tabular}{| l | l |} \hline \bf{Legal conjunctions} & \bf{Illegal conjunctions} \\ \hline \soar{\{ <= <a> >= <b> \}} & \soar{\{ <x> < <a> + <b> \}} \\ \soar{\{ <x> > <y> \}} & \soar{\{ > > <b> \}} \\ \soar{\{ <> <x> <y> \}} & \\ \soar{\{ << A B C >> <x> \}} & \\ \soar{\{ <=> <x> > <y> << 1 2 3 4 >> <z> \}} & \\ \hline \end{tabular} \vspace{10pt} Because those examples are a bit difficult to interpret, let's go over the legal examples one by one to understand what each is doing. In the first example, the value must be less than or equal to the value bound to variable \soar{<a>} and greater than or equal to the value bound to variable \soar{<b>}. 
In the second example, the value is bound to the variable \soar{<x>}, which must also be greater than the value bound to variable \soar{<y>}. In the third example, the value must not be equal to the value bound to variable \soar{<x>} and should be bound to variable \soar{<y>}. Note the importance of order when using conjunctions with predicates: in the second example, the predicate modifies \soar{<y>}, but in the third example, the predicate modifies \soar{<x>}. In the fourth example, the value must be one of \soar{A}, \soar{B}, or \soar{C}, and the second conjunctive test binds the value to variable \soar{<x>}. In the fifth example, there are four conjunctive tests. First, the value must be the same type as the value bound to variable \soar{<x>}. Second, the value must be greater than the value bound to variable \soar{<y>}. Third, the value must be equal to \soar{1}, \soar{2}, \soar{3}, or \soar{4}. Finally, the value should be bound to variable \soar{<z>}. In Figure \ref{fig:ex-prod}, a conjunctive test is used for the \soar{thing} attribute in the first condition. % ---------------------------------------------------------------------------- \subsubsection{Negated conditions} \label{SYNTAX-pm-negated} %perf-nega-cond \index{negated!conditions} \index{-} In addition to the positive tests for elements in working memory, conditions can also test for the absence of patterns. A \emph{negated condition} will be matched only if there does not exist a working memory element consistent with its tests and variable bindings. Thus, it is a test for the \textit{absence} of a working memory element. Syntactically, a negated condition is specified by preceding a condition with a dash (i.e., ``\soar{-}''). For example, the following condition tests for the absence of a \soar{\carat type father} augmentation of the object bound to \soar{<p1>}. \begin{verbatim} -(<p1> ^type father) \end{verbatim} \vspace{12pt} A negation can be used within an object with many attribute-value pairs by having it precede a specific attribute: \begin{verbatim} (<p1> ^name john -^type father ^spouse <p2>) \end{verbatim} \vspace{12pt} In that example, the condition would match if there is a working memory element that matches \soar{(<p1> \carat name john)} and another that matches \soar{(<p1> \carat spouse <p2>)}, but there is no working memory element that matches \soar{(<p1> \carat type father)} (when \soar{<p1>} is bound to the same identifier). On the other hand, the condition: \begin{verbatim} -(<p1> ^name john ^type father ^spouse <p2>) \end{verbatim} would match only if there is no object in working memory that matches all three attribute-value tests. \subsubsection*{Example Production} \begin{verbatim} sp {default*evaluate-object (state <ss> ^operator <so>) (<so> ^type evaluation ^superproblem-space <p>) -(<p> ^default-state-copy no) --> (<so> ^default-state-copy yes) } \end{verbatim} \subsubsection*{Notes} Avoid using a negated condition to test for the absence of the working memory element that the production itself creates with I-support; this would lead to an ``infinite loop'' in your Soar program, as Soar would repeatedly fire and retract the production. % ---------------------------------------------------------------------------- \subsubsection{Negated conjunctions of conditions} \label{SYNTAX-pm-negaconj} %perf-nega-conj} \index{negated!conjunctions} \index{conjunctive!negation} Conditions can be grouped into conjunctive sets by surrounding the set of conditions with \soar{\{} and \soar{\}}.
The production compiler groups the tests in these conditions together. This grouping allows for negated tests of more than one working memory element at a time. In the example below, the state is tested to ensure that it does not have an object on the table. \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state) -{(<s> ^ontop <on>) (<on> ^bottom-object <bo>) (<bo> ^type table)} --> (<s> ^nothing-ontop-table true) } \end{verbatim} When using negated conjunctions of conditions, the production has nested curly braces. One set of curly braces delimits the production, while the other set delimits the conditions to be conjunctively negated. If only the last condition, \soar{(<bo> \carat type table)}, were negated, the production would match only if the state \emph{had} an ontop relation, and the ontop relation had a bottom-object, but the bottom object wasn't a table. Using the negated conjunction, the production will also match when the state has no ontop augmentation or when it has an ontop augmentation that doesn't have a bottom-object augmentation. The semantics of negated conjunctions can be thought of in terms of mathematical logic, where the negation of the conjunction $(A \wedge B \wedge C)$, written $\neg (A \wedge B \wedge C)$, can be rewritten as $(\neg A) \vee (\neg B) \vee (\neg C)$. That is, ``not (A and B and C)'' becomes ``(not A) or (not B) or (not C)''. % ---------------------------------------------------------------------------- \subsubsection{Multi-valued attributes} \label{SYNTAX-pm-multi} \index{multi-valued attribute} An object in working memory may have multiple augmentations that specify the same attribute with different values; these are called multi-valued attributes, or multi-attributes for short. To shorten the specification of a condition, tests for multi-valued attributes can be written so that the value tests appear together. For example, the condition: \begin{verbatim} (<p1> ^type father ^child sally ^child sue) \end{verbatim} could also be written as: \begin{verbatim} (<p1> ^type father ^child sally sue) \end{verbatim} % ---------------------------------------------------------------------------- \subsubsection*{Multi-valued attributes and variables} When variables are used with multi-valued attributes, remember that variable bindings are not unique unless explicitly forced to be so. For example, to test that an object has two values for attribute \soar{child}, the variables in the following condition can bind to the same value. \begin{verbatim} (<p1> ^type father ^child <c1> <c2>) \end{verbatim} \vspace{12pt} To test multi-valued attributes with variables correctly, conjunctive tests must be used, as in: \begin{verbatim} (<p1> ^type father ^child <c1> {<> <c1> <c2>}) \end{verbatim} \vspace{12pt} The conjunctive test \soar{ \{<> <c1> <c2>\} } ensures that \soar{<c2>} will bind to a different value than \soar{<c1>} binds to. % ---------------------------------------------------------------------------- \subsubsection*{Negated conditions and multi-valued attributes} A negation can also precede an attribute with multiple values. In this case, it tests for the absence of the conjunction of the values.
For example, \begin{verbatim} (<p1> ^name john -^child oprah uma) \end{verbatim} is the same as \begin{verbatim} (<p1> ^name john) -{(<p1> ^child oprah) (<p1> ^child uma)} \end{verbatim} and the match is possible if either \soar{(<p1> \carat child oprah)} or \soar{(<p1> \carat child uma)} cannot be found in working memory with the binding for \soar{<p1>} (but not if both are present). % ---------------------------------------------------------------------------- \subsubsection{Acceptable preferences for operators} \label{SYNTAX-pm-acceptable} \index{condition!acceptable preference } \index{preference!acceptable as condition} \index{acceptable preference} \index{+} The only preferences that can appear in working memory are acceptable preferences for operators, and therefore, the only preferences that may appear in the conditions of a production are acceptable preferences for operators. Acceptable preferences for operators can be matched in a condition by testing for a ``\soar{+}'' following the value. This allows a production to test the existence of a candidate operator and its properties, and possibly create a preference for it, before it is selected. In the example below, \soar{\carat operator <o> +} matches the acceptable preference for the operator augmentation of the state. \emph{This does not test that operator} \soar{<o>} \emph{has been selected as the current operator}. \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o> + ^table <t>) (<o> ^name move-block) --> ... } \end{verbatim} In the example below, the production tests the state for acceptable preferences for two different operators (and also tests that these operators move different blocks): \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o1> + <o2> + ^table <t>) (<o1> ^name move-block ^moving-block <m1> ^destination <d1>) (<o2> ^name move-block ^moving-block {<m2> <> <m1>} ^destination <d2>) --> ... } \end{verbatim} \subsubsection{Attribute tests} The previous examples applied all of the different tests to the values of working memory elements. All of the tests that can be used for values can also be used for attributes and identifiers (except those including constants). % ---------------------------------------------------------------------------- \subsubsection*{Variables in attributes} Variables may be used with attributes, as in: \begin{verbatim} sp {blocks*example-production-conditions (state <s> ^operator <o> + ^thing <t> {<> <t> <t2>} ) (operator <o> ^name group ^by-attribute <a> ^moving-block <t> ^destination <t2>) (<t> ^type block ^<a> <x>) (<t2> ^type block ^<a> <x>) --> (<s> ^operator <o> >) } \end{verbatim} This production tests that there is an acceptable operator that is trying to group blocks according to some attribute, \soar{<a>}, and that blocks \soar{<t>} and \soar{<t2>} both have this attribute (whatever it is) and have the same value for the attribute. % ---------------------------------------------------------------------------- \subsubsection*{Predicates in attributes} Predicates may be used with attributes, as in: \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o> + ^table <t>) (<t> ^<> type table) --> ... } \end{verbatim} which tests that the object with its identifier bound to \soar{<t>} must have an attribute whose value is \soar{table}, but the name of this attribute is not \soar{type}.
% ---------------------------------------------------------------------------- \subsubsection*{Disjunctions of attributes} \index{disjunctions of attributes} \index{<< >>} Disjunctions may also be used with attributes, as in: \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o> + ^table <t>) (<t> ^<< type name>> table) --> ... } \end{verbatim} which tests that the object with its identifier bound to \soar{<t>} must have either an attribute \soar{type} whose value is \soar{table} or an attribute \soar{name} whose value is \soar{table}. % ---------------------------------------------------------------------------- \subsubsection*{Conjunctive tests for attributes} Section \ref{SYNTAX-pm-conjunctions} illustrated the use of conjunctions for the values in conditions. Conjunctive tests may also be used with attributes, as in: \begin{verbatim} sp {blocks*example-production-conditions (state ^operator <o> + ^table <t>) (<t> ^{<ta> <> name} table) --> ... } \end{verbatim} which tests that the object with its identifier bound to \soar{<t>} must have an attribute whose value is \soar{table}, and the name of this attribute is not \soar{name}, and the name of this attribute (whatever it is) is bound to the variable \soar{<ta>}. When attribute predicates or attribute disjunctions are used with multi-valued attributes, the production is rewritten internally to use a conjunctive test for the attribute; the conjunctive test includes a variable used to bind to the attribute name. Thus, \begin{verbatim} (<p1> ^type father ^ <> name sue sally) \end{verbatim} is interpreted to mean: \begin{verbatim} (<p1> ^type father ^ {<> name <a*1>} sue ^ <a*1> sally) \end{verbatim} % ---------------------------------------------------------------------------- \subsubsection{Attribute-path notation} \label{SYNTAX-pm-path} \index{dot notation} \index{path notation} \index{.} Often, variables appear in the conditions of productions only to link the value of one attribute with the identifier of another attribute. Attribute-path notation provides a shorthand so that these intermediate variables do not need to be included. Syntactically, path notation lists a sequence of attributes separated by dots (.), after the \carat \ in a condition. For example, using attribute path notation, the production: \begin{verbatim} sp {blocks-world*monitor*move-block (state <s> ^operator <o>) (<o> ^name move-block ^moving-block <block1> ^destination <block2>) (<block1> ^name <block1-name>) (<block2> ^name <block2-name>) --> (write (crlf) |Moving Block: | <block1-name> | to: | <block2-name> ) } \end{verbatim} could be written as: \begin{verbatim} sp {blocks-world*monitor*move-block (state <s> ^operator <o>) (<o> ^name move-block ^moving-block.name <block1-name> ^destination.name <block2-name>) --> (write (crlf) |Moving Block: | <block1-name> | to: | <block2-name> ) } \end{verbatim} Attribute-path notation yields shorter productions that are easier to write, less prone to errors, and easier to understand. When attribute-path notation is used, Soar internally expands the conditions into the multiple Soar objects, creating its own variables as needed. Therefore, when you print a production (using the \soar{print} command), the production will not be represented using attribute-path notation. %---------------------------------------------------------------------------- \subsubsection*{Negations and attribute path notation} \nocomment{can't negations be used with structured values? 
there's no description of this (yes -- bobd)} A negation may be used with attribute path notation, in which case it amounts to a negated conjunction. For example, the production: \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state) -{(<s> ^ontop <on>) (<on> ^bottom-object <bo>) (<bo> ^type table)} --> (<s> ^nothing-ontop-table true) } \end{verbatim} could be rewritten as: \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state -^ontop.bottom-object.type table) --> (<s> ^nothing-ontop-table true) } \end{verbatim} % ---------------------------------------------------------------------------- \subsubsection*{Multi-valued attributes and attribute path notation} \nocomment{can't multi-attributes be used with structured values? there's no description of this (yes -- bobd)} Attribute path notation may also be used with multi-valued attributes, such as: \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^clear.block <block1> { <> <block1> <block2> } ^ontop <ontop>) (<block1> ^type block) (<ontop> ^top-block <block1> ^bottom-block <> <block2>) --> (<s> ^operator <o> +) (<o> ^name move-block + ^moving-block <block1> + ^destination <block2> +) } \end{verbatim} \subsubsection*{Multi-attributes and attribute-path notation} \label{SYNTAX-pm-caveat} \textbf{Note:} It would not be advisable to write the production in Figure \ref{fig:ex-prod} using attribute-path notation as follows: \begin{verbatim} sp {blocks-world*propose*move-block*dont-do-this (state <s> ^problem-space blocks ^clear.block <block1> ^clear.block { <> <block1> <block2> } ^ontop.top-block <block1> ^ontop.bottom-block <> <block2>) (<block1> ^type block) --> ... } \end{verbatim} This is not advisable because it corresponds to a different set of conditions than those in the original production (the \soar{top-block} and \soar{bottom-block} need not correspond to the same \soar{ontop} relation). To check this, we can print this production at the Soar prompt: \begin{verbatim} soar> print blocks-world*propose*move-block*dont-do-this sp {blocks-world*propose*move-block*dont-do-this (state <s> ^problem-space blocks ^thing <thing2> ^thing { <> <thing2> <thing1> } ^ontop <o*1> ^ontop <o*2>) (<thing2> ^clear yes) (<thing1> ^clear yes ^type block) (<o*1> ^top-block <thing1>) (<o*2> ^bottom-block { <> <thing2> <b*1> }) --> (<s> ^operator <o> +) (<o> ^name move-block ^moving-block <thing1> ^destination <thing2>) } \end{verbatim} Soar has expanded the production into the longer form, and created two distinct variables, \soar{$<$o*1$>$} and \soar{$<$o*2$>$}, to represent the \soar{ontop} attribute. These two variables will not necessarily bind to the same identifiers in working memory. % ---------------------------------------------------------------------------- \subsubsection*{Negated multi-valued attributes and attribute-path notation} Negations of multi-valued attributes can be combined with attribute-path notation. However, it is very easy to make mistakes when using negated multi-valued attributes with attribute-path notation. Although it is possible to do it correctly, we strongly discourage its use.
For example, \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state -^ontop.bottom-object.name table A) --> (<s> ^nothing-ontop-A-or-table true) } \end{verbatim} gets expanded to: \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state) -{(<s> ^ontop <o*1>) (<o*1> ^bottom-object <b*1>) (<b*1> ^name A) (<b*1> ^name table)} --> (<s> ^nothing-ontop-A-or-table true) } \end{verbatim} This example does not refer to two different blocks with different names. It tests that there is not an \soar{ontop} relation with a \soar{bottom-block} that is named \soar{A} and named \soar{table}. Thus, this production probably should have been written as: \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state -^ontop.bottom-object.name table -^ontop.bottom-object.name A) --> (<s> ^nothing-ontop-A-or-table true) } \end{verbatim} which expands to: \begin{verbatim} sp {blocks*negated-conjunction-example (state <s> ^name top-state) -{(<s> ^ontop <o*2>) (<o*2> ^bottom-object <b*2>) (<b*2> ^name a)} -{(<s> ^ontop <o*1>) (<o*1> ^bottom-object <b*1>) (<b*1> ^name table)} --> (<s> ^nothing-ontop-a-or-table true +) } \end{verbatim} \subsubsection*{Notes on attribute-path notation}\vspace{-12pt} \begin{itemize} \item Attributes specified in attribute-path notation may not start with a digit. For example, if you type \soar{\carat foo.3.bar}, Soar thinks the \soar{.3} is a floating-point number. (Attributes that don't appear in path notation can begin with a number.) \item Attribute-path notation may be used to any depth. \item Attribute-path notation may be combined with structured values, described in Section \ref{SYNTAX-pm-structured}. \end{itemize} % ---------------------------------------------------------------------------- \subsubsection{Structured-value notation} \label{SYNTAX-pm-structured} %pref-struc-cond} \index{structured value notation} \index{production!structured values} \index{value!structured notation} Another convenience that eliminates the use of intermediate variables is structured-value notation. Syntactically, the attributes and values of a condition may be written where a variable would normally be written. The attribute-value structure is delimited by parentheses. Using structured-value notation, the production in Figure \ref{fig:ex-prod} (on page \pageref{fig:ex-prod}) may also be written as: \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^thing <thing1> {<> <thing1> <thing2>} ^ontop (^top-block <thing1> ^bottom-block <> <thing2>)) (<thing1> ^type block ^clear yes) (<thing2> ^clear yes) --> (<s> ^operator <o> +) (<o> ^name move-block ^moving-block <thing1> ^destination <thing2>) } \end{verbatim} Thus, several conditions may be ``collapsed'' into a single condition. \subsubsection*{Using variables within structured-value notation} Variables are allowed within the parentheses of structured-value notation to specify an identifier to be matched elsewhere in the production. 
For example, the variable \soar{<ontop>} could be added to the conditions (although it is not referenced again, so this is not helpful in this instance): \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^thing <thing1> {<> <thing1> <thing2>} ^ontop (<ontop> ^top-block <thing1> ^bottom-block <> <thing2>)) (<thing1> ^type block ^clear yes) (<thing2> ^clear yes) --> (<s> ^operator <o> +) (<o> ^name move-block ^moving-block <thing1> ^destination <thing2>) } \end{verbatim} Structured values may be nested to any depth. Thus, it is possible to write our example production using a single condition with multiple structured values: \begin{verbatim} sp {blocks-world*propose*move-block (state <s> ^problem-space blocks ^thing <thing1> ({<> <thing1> <thing2>} ^clear yes) ^ontop (^top-block (<thing1> ^type block ^clear yes) ^bottom-block <> <thing2>) ) --> (<s> ^operator <o> +) (<o> ^name move-block ^moving-block <thing1> ^destination <thing2>) } \end{verbatim} \subsubsection*{Notes on structured-value notation}\vspace{-12pt} \begin{itemize} \item Attribute-path notation and structured-value notation are orthogonal and can be combined in any way. A structured value can contain an attribute path, or a structure can be given as the value for an attribute path. \item Structured-value notation may also be combined with negations and with multi-attributes. \item Structured-value notation may not be used in the actions of productions. \end{itemize} % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \subsection{The action side of productions (or RHS)} \label{SYNTAX-pm-action} \index{RHS of production} \index{production!RHS} \index{action side of production} The action side of a production, also called the right-hand side (or RHS) of the production, consists of individual actions that can: \begin{itemize} \item Add new elements to working memory. \item Remove elements from working memory. \item Create preferences. \item Perform other actions. \end{itemize} When the conditions of a production match working memory, the production is said to be instantiated, and the production will fire during the next elaboration cycle. Firing the production involves performing the actions \emph{using the same variable bindings} that formed the instantiation. \subsubsection{Variables in Actions} \index{variable!action side} Variables can be used in actions. A variable that appeared in the condition side will be replaced with the value that it was bound to in the condition. A variable that appears only in the action side will be bound to a new identifier that begins with the first letter of that variable (e.g., \soar{<o>} might be bound to \soar{o234}). This symbol is guaranteed to be unique and it will be used for all occurrences of the variable in the action side, appearing in all working memory elements and preferences that are created by the production action. \subsubsection{Creating Working Memory Elements} An element is created in working memory by specifying it as an action. Multiple augmentations of an object can be combined into a single action, using the same syntax as in conditions, including path notation and multi-valued attributes.
\begin{verbatim} --> (<s> ^block.color red ^thing <t1> <t2>) } \end{verbatim} The action above is expanded to be: \begin{verbatim} --> (<s> ^block <*b>) (<*b> ^color red) (<s> ^thing <t1>) (<s> ^thing <t2>) } \end{verbatim} This will add four elements to working memory with the variables replaced with whatever values they were bound to on the condition side. Since Soar is case sensitive, different combinations of upper- and lowercase letters represent \emph{different} constants. For example, ``\soar{red}'', ``\soar{Red}'', and ``\soar{RED}'' are all distinct symbols in Soar. In many cases, it is prudent to choose one of uppercase or lowercase and write all constants in that case to avoid confusion (and bugs). The constants that are used for attributes and values have a few restrictions on them:\vspace{-12pt} \begin{enumerate} \item There are a number of architecturally created augmentations for state and impasse objects; see Section \ref{SYNTAX-impasses} for a listing of these special augmentations. User-defined productions cannot create or remove augmentations of states that use these attribute names.\vspace{-8pt} \item Attribute names should not begin with a number if these attributes will be used in attribute-path notation. \end{enumerate} \subsubsection{Removing Working Memory Elements} An element is explicitly removed from working memory by following the value with a dash: \soar{-}, also called a reject. \begin{verbatim} --> (<s> ^block <b> -)} \end{verbatim} If the removal of a working memory element removes the only link between the state and working memory elements that had the value of the removed element as an identifier, those working memory elements will be removed. This is applied recursively, so that all items that become unlinked are removed. The reject should be used with an action that will be o-supported. If a reject is attempted with I-support, the working memory element will reappear if the reject loses I-support and the element still has support. % ---------------------------------------------------------------------------- \subsubsection{The syntax of preferences} \index{preference} Below are the eleven types of preferences as they can appear in the actions of a production for the selection of operators: \label{pref-list} \begin{tabular}{| l | l |} \hline \bf{RHS preferences} & \bf{Semantics} \\ \hline \soar{(id \carat operator value)} & acceptable \\ \soar{(id \carat operator value +)} & acceptable \\ \soar{(id \carat operator value !)} & require \\ \soar{(id \carat operator value \tild)} & prohibit \\ \soar{(id \carat operator value -)} & reject \\ \soar{(id \carat operator value > value2)} & better \\ \soar{(id \carat operator value < value2)} & worse \\ \soar{(id \carat operator value >)} & best \\ \soar{(id \carat operator value <)} & worst \\ \soar{(id \carat operator value =)} & unary indifferent \\ \soar{(id \carat operator value = value2)} & binary indifferent \\ \soar{(id \carat operator value = number)} & numeric indifferent \\ \hline \end{tabular} \vspace{10pt} \index{+} \index{"!} \index{~} \index{-} \index{>} \index{<} \index{=} \index{&} \index{"@} The identifier and value will always be variables, such as \soar{(<s1> \carat operator <o1> > <o2>)}. The preference notation appears similar to the predicate tests that appear on the left-hand side of productions, but has a very different meaning. Predicates cannot be used on the right-hand side of a production and you cannot restrict the bindings of variables on the right-hand side of a production.
(Such restrictions can happen only in the conditions.) Also notice that the \soar{+} symbol is optional when specifying acceptable preferences in the actions of a production, although using this symbol will make the semantics of your productions clearer in many instances. The \soar{+} symbol will always appear when you inspect preference memory (with the \soar{preferences} command). Productions are never needed to delete preferences because preferences will be retracted when the production no longer matches. Preferences should never be created by operator application rules, and they should always be created by rules that will give only I-support to their actions. % ---------------------------------------------------------------------------- \subsubsection{Shorthand notations for preference creation} There are a few shorthand notations allowed for the creation of operator preferences on the right-hand side of productions. Acceptable preferences do not need to be specified with a \soar{+} symbol. \soar{(<s> \carat operator <op1>)} is assumed to mean \soar{(<s> \carat operator <op1> +)}. Ambiguity can easily arise when using a preference that can be either binary or unary: \soar{> < =}. The default assumption is that if a value follows the preference, then the preference is binary. It will be unary if a carat (up-arrow), a closing parenthesis, another preference, or a comma follows it. Below are four examples of legal, although unrealistic, actions that have the same effect. \begin{verbatim} (<s> ^operator <o1> <o2> + <o2> < <o1> <o3> =, <o4>) (<s> ^operator <o1> + <o2> + <o2> < <o1> <o3> =, <o4> +) (<s> ^operator <o1> <o2> <o2> < <o1> <o4> <o3> =) (<s> ^operator <o1> ^operator <o2> ^operator <o2> < <o1> ^operator <o4> <o3> =) \end{verbatim} Any one of those actions could be expanded to the following list of preferences: \begin{verbatim} (<s> ^operator <o1> +) (<s> ^operator <o2> +) (<s> ^operator <o2> < <o1>) (<s> ^operator <o3> =) (<s> ^operator <o4> +) \end{verbatim} Note that structured-value notation may not be used in the actions of productions. % ---------------------------------------------------------------------------- \subsubsection{Righthand-side Functions} The fourth type of action that can occur in productions is called a \emph{righthand-side function}. Righthand-side functions allow productions to create side effects other than changing working memory. The RHS functions are described below, organized by the type of side effect they have. % ---------------------------------------------------------------------------- \subsubsection{Stopping and pausing Soar} \label{RHS-stopping} \begin{description} \index{halt} \item [\soarb{halt} ---] Terminates Soar's execution and returns to the user prompt. A \soar{halt} action irreversibly terminates the running of a Soar program. It should not be used if Soar is to be restarted (see the \soar{interrupt} RHS action below.) \begin{verbatim} sp { ... --> (halt) } \end{verbatim} \item [\soarb{interrupt} --- ] \index{interrupt} Executing this function causes Soar to stop at the end of the current phase, and return to the user prompt. This is similar to \soar{halt}, but does not terminate the run. The run may be continued by issuing a \soar{run} command from the user interface. The \soar{interrupt} RHS function has the same effect as typing \soar{stop-soar} at the prompt, except that there is more control because it takes effect exactly at the end of the phase that fires the production. \begin{verbatim} sp { ... 
--> (interrupt) } \end{verbatim} \label{interrupt-directive} Soar execution may also be stopped immediately before a production fires, using the \soar{:interrupt} directive. This functionality is called a matchtime interrupt and is very useful for debugging. See Section \ref{sp} on Page \pageref{sp} for more information. \begin{verbatim} sp {production*name :interrupt ... --> ... } \end{verbatim} \end{description} % ---------------------------------------------------------------------------- \subsubsection{Text input and output} The function \soar{write} is provided as a production action to do simple output of text in Soar. Soar applications that do extensive input and output of text should use Soar Markup Language (SML). To learn about SML, read the "SML Quick Start Guide" which should be located in the "Documentation" folder of your Soar install. \begin{description} \index{write} \item [\soarb{write} --- ] This function writes its arguments to the standard output. It does not automatically insert blanks, linefeeds, or carriage returns. For example, if \soar{<o>} is bound to 4, then \begin{verbatim} sp { ... --> (write <o> <o> <o> | x| <o> | | <o>) } \end{verbatim} prints \begin{verbatim} 444 x4 4 \end{verbatim} \index{carriage return, line feed} \index{crlf} \item [\soarb{crlf} --- ] Short for ``carriage return, line feed'', this function can be called only within \soar{write}. It forces a new line at its position in the \soar{write} action. \begin{verbatim} sp { ... --> (write <x> (crlf) <y>) } \end{verbatim} %\index{accept} %\item [\soarb{accept} --- ] Suspends Soar's execution and waits for the user % to type a constant, followed by a carriage return. The result of % \soar{accept} is the constant. The accept function does not read % in strings. It accepts a % single constant (which may look like a string). % Soar applications that make extensive use of text input should be % implemented using Tcl and Tk functionality, described in the % \emph{Soar Advanced Applications Manual}. %The \soarb{accept} function does not work properly under the TSI %(Tcl-Soar Interface), or any other Soar program that has a separate %``Agent Window'' instead of a Tcl or Wish Console. In this instance, %users should employ the \soar{tcl} RHS function %(described on page \pageref{SYNTAX-pm-otheractions-tcl}) to get user %input through a text widget. %\begin{verbatim} %sp { % ... % --> % (<s> ^input (accept)) } %\end{verbatim} \nocomment{Does this imply that a CR is not needed? I.e., will the constant be 'accepted' after a space is hit? } \end{description} % ---------------------------------------------------------------------------- \subsubsection{Mathematical functions} The expressions described in this section can be nested to any depth. For all of the functions in this section, missing or non-numeric arguments result in an error. \begin{description} \index{compute} \index{arithmetic operations} \index{floating-point number} \item [\soarb{+, -, *, /} --- ] These symbols provide prefix notation mathematical functions. These symbols work similarly to C functions. They will take either integer or real-number arguments. The first three functions return an integer when all arguments are integers and otherwise return a real number, and the last two functions always return a real number. The \soar{-} symbol is also a unary function which, given a single argument, returns the product of the argument and \soar{-1}. 
The \soar{/} symbol is also a unary function which, given a single argument, returns the reciprocal of the argument (1/x). \begin{verbatim} sp { ... --> (<s> ^sum (+ <x> <y>) ^product-sum (* (+ <v> <w>) (+ <x> <y>)) ^big-sum (+ <x> <y> <z> 402) ^negative-x (- <x>)) } \end{verbatim} \item [\soarb{div, mod} --- ] These symbols provide prefix notation binary mathematical functions (they each take two arguments). These symbols work similarly to C functions: They will take only integer arguments (using reals results in an error) and return an integer: \soar{div} takes two integers and returns their integer quotient; \soar{mod} returns their remainder. \begin{verbatim} sp { ... --> (<s> ^quotient (div <x> <y>) ^remainder (mod <x> <y>)) } \end{verbatim} \item [\soarb{abs, atan2, sqrt, sin, cos} --- ] These symbols provide prefix notation unary mathematical functions (they each take one argument). These symbols work similarly to C functions: They will take either integer or real-number arguments. The first function (\soar{abs}) returns an integer when its argument is an integer and otherwise returns a real number, and the last four functions always return a real number. \soar{atan2} returns as a float in radians, the arctangent of (first\_arg / second\_arg). \soar{sin} and \soar{cos} take as arguments the angle in radians. \begin{verbatim} sp { ... --> (<s> ^abs-value (abs <x>) ^sqrt (sqrt <x>)) } \end{verbatim} % ---------------------------------------------------------------------------- \index{int} \item [\soarb{int} --- ] Converts a single symbol to an integer constant. This function expects either an integer constant, symbolic constant, or floating point constant. The symbolic constant must be a string which can be interpreted as a single integer. The floating point constant is truncated to only the integer portion. This function essentially operates as a type casting function. For example, the expression \soar{2 + sqrt(6)} could be printed as an integer using the following: \begin{verbatim} sp { ... --> (write (+ 2 (int sqrt(6))) ) } \end{verbatim} % ---------------------------------------------------------------------------- \index{float} \item [\soarb{float} --- ] Converts a single symbol to a floating point constant. This function expects either an integer constant, symbolic constant, or floating point constant. The symbolic constant must be a string which can be interpreted as a single floating point number. This function essentially operates as a type casting function. For example, if you wanted to print out an integer expression as a floating-point number, you could do the following: \begin{verbatim} sp { ... --> (write (float (+ 2 3))) } \end{verbatim} % ---------------------------------------------------------------------------- \index{ifeq} \item [\soarb{ifeq} --- ] Conditionally return a symbol. This function takes four arguments. It returns the third argument if the first two are equal and the fourth argument otherwise. Note that symbols of different types will always be considered unequal. For example, 1.0 and 1 will be unequal because the first is a float and the second is an integer. \begin{verbatim} sp {example-rule (state <s> ^a <a> ^b <b>) ... --> (write (ifeq <a> <b> equal not-equal)) } \end{verbatim} \end{description} % ---------------------------------------------------------------------------- \subsubsection{Generating and manipulating symbols} A new symbol (an identifier) is generated on the right-hand side of a production whenever a previously unbound variable is used. 
This section describes other ways of generating and manipulating symbols on the right-hand side. \begin{description} \index{timestamp} \item [\soarb{timestamp} --- ] This function returns a symbol whose print name is a representation of the current date and time. For example: \begin{verbatim} sp { ... --> (write (timestamp)) } \end{verbatim} When this production fires, it will print out a representation of the current date and time, such as: \begin{verbatim} soar> run 1 e 8/1/96-15:22:49 \end{verbatim} \index{make-constant-symbol} \item [\soarb{make-constant-symbol} --- ] This function returns a new constant symbol guaranteed to be different from all symbols currently present in the system. With no arguments, it returns a symbol whose name starts with ``\soar{constant}''. With one or more arguments, it takes those argument symbols, concatenates them, and uses that as the prefix for the new symbol. (It may also append a number to the resulting symbol, if a symbol with that prefix as its name already exists.) \begin{verbatim} sp { ... --> (<s> ^new-symbol (make-constant-symbol)) } \end{verbatim} When this production fires, it will create an augmentation in working memory such as: \begin{verbatim} (S1 ^new-symbol constant5) \end{verbatim} \vspace{12pt} The production: \begin{verbatim} sp { ... --> (<s> ^new-symbol (make-constant-symbol <s> )) } \end{verbatim} will create an augmentation in working memory such as: \begin{verbatim} (S1 ^new-symbol |S14|) \end{verbatim} when it fires. The vertical bars denote that the symbol is a constant, rather than an identifier; in this example, the number 4 has been appended to the symbol S1. This can be particularly useful when used in conjunction with the \soar{timestamp} function; by using \soar{timestamp} as an argument to \soar{make-constant-symbol}, you can get a new symbol that is guaranteed to be unique. For example: \begin{verbatim} sp { ... --> (<s> ^new-symbol (make-constant-symbol (timestamp))) } \end{verbatim} When this production fires, it will create an augmentation in working memory such as: \begin{verbatim} (S1 ^new-symbol 8/1/96-15:22:49) \end{verbatim} \index{capitalize-symbol} \item [\soarb{capitalize-symbol} --- ] Given a symbol, this function returns a new symbol with the first character capitalized. This function is provided primarily for text output, for example, to allow the first word in a sentence to be capitalized. \nocomment{This command is possibly obsolete, since Soar7 is case sensitive?} \begin{verbatim} (capitalize-symbol foo) \end{verbatim} \index{concat} \item [\soarb{concat} --- ] Given an arbitrary number of symbols, this function concatenates them together into a single constant symbol. For example, \begin{verbatim} sp {example (state <s> ^type state) --> (<s> ^name (concat foo bar (+ 2 4))) } \end{verbatim} After this rule fires, the WME \verb=(S1 ^name foobar6)= will be added. \end{description} % ---------------------------------------------------------------------------- \subsubsection{User-defined functions and interface commands as RHS actions} %\label{SYNTAX-pm-otheractions-tcl} Any function which has a certain function signature may be registered with the Kernel and called as a RHS function. The function must have the following signature: \begin{verbatim} std::string MyFunction(smlRhsEventId id, void* pUserData, Agent* pAgent, char const* pFunctionName, char const* pArgument); \end{verbatim} The Tcl and Java interfaces have similar function signatures. 
Any arguments passed to the function on the RHS of a production are concatenated and passed to the function in the pArgument argument. Such a function can be registered with the kernel via the client interface by calling: \begin{verbatim} Kernel::AddRhsFunction(char const* pRhsFunctionName, RhsEventHandler handler, void* pUserData); \end{verbatim} The \soar{exec} and \soar{cmd} functions are used to call user-defined functions and interface commands on the RHS of a production. \begin{description} \index{exec} \item [\soarb{exec} --- ] Used to call user-defined registered functions. Any arguments are concatenated without spaces. For example, if \soar{<o>} is bound to \soar{x}, then \begin{verbatim} sp { ... --> (exec MakeANote <o> 1) } \end{verbatim} will call the user-defined \soar{MakeANote} function with the argument "\soar{x1}". The return value of the function, if any, may be placed in working memory or passed to another RHS function. For example, the log of a number \soar{<x>} could be printed this way: \begin{verbatim} sp { ... --> (write |The log of | <x> | is: | (exec log(<x>))|) } \end{verbatim} where "\soar{log}" is a registered user-defined function. \index{cmd} \item[\soarb{cmd} --- ] Used to call built-in Soar commands. Spaces are inserted between concatenated arguments. For example, the production \begin{verbatim} sp { ... --> (write (cmd print --depth 2 <s>)) } \end{verbatim} will have the effect of printing the object bound to \soar{<s>} to depth 2. \end{description} %There are no safety nets with this function, and users are warned that they %can get themselves into trouble if not careful. Users should %\emph{never} use the \soar{tcl} RHS function to invoke \soar{add-wme}, %\soar{remove-wme} or \soar{sp}. % ---------------------------------------------------------------------------- \subsubsection{Controlling chunking} \label{SYNTAX-pm-actions-learning} \nocomment{These RHS actions have not been implemented as of this writing. The functionality is achieved using the user-interface functions ``chunky-problem-spaces'' and ``chunk-free-problem-spaces''; see online help or the web pages for details on these functions.} Chunking is described in Chapter \ref{CHUNKING}. The following two functions are provided as RHS actions to assist in development of Soar programs; they are not intended to correspond to any theory of learning in Soar. This functionality is provided as a development tool, so that learning may be turned off in specific problem spaces, preventing otherwise buggy behavior. The \soar{dont-learn} and \soar{force-learn} RHS actions are to be used with specific settings for the \soar{learn} command (see page \pageref{learn}.) Using the \soar{learn} command, learning may be set to one of \soar{on}, \soar{off}, \soar{except}, or \soar{only}; learning must be set to \soar{except} for the \soar{dont-learn} RHS action to have any effect and learning must be set to \soar{only} for the \soar{force-learn} RHS action to have any effect. \begin{description} \index{dont-learn} \item [\soarb{dont-learn} --- ] When learning is set to \soar{except}, by default chunks can be formed in all states; the \soar{dont-learn} RHS action will cause learning to be turned off for the specified state. \begin{verbatim} sp {turn-learning-off (state <s> ^feature 1 ^feature 2 -^feature 3) --> (dont-learn <s>) } \end{verbatim} The \soar{dont-learn} RHS action applies when \soar{learn} is set to \soar{-except}, and has no effect when other settings for \soar{learn} are used. 
\index{force-learn} \item [\soarb{force-learn} --- ] When learning is set to \soar{only}, by default chunks are not formed in any state; the \soar{force-learn} RHS action will cause learning to be turned on for the specified state. \begin{verbatim} sp {turn-learning-on (state <s> ^feature 1 ^feature 2 -^feature 3) --> (force-learn <s>) } \end{verbatim} The \soar{force-learn} RHS action applies when \soar{learn} is set to \soar{-only}, and has no effect when other settings for \soar{learn} are used. \end{description} % ---------------------------------------------------------------------------- %\subsection{Writing Productions that Create O-supported Preferences} \nocomment{there's no discussion of o-support in this chapter, and probably there should be. maybe a quick separate section on the syntax of o-supported productions? [things to mention in this section: you can't always tell whether a preference will have o-support just by looking at the production (o-support is determined at runtime), and rules for determining o-support.] } % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \section{Impasses in Working Memory and in Productions} \label{SYNTAX-impasses} \index{subgoal} \index{impasse} When the preferences in preference memory cannot be resolved unambiguously, Soar reaches an impasse, as described in Section \ref{ARCH-impasses}:\vspace{- 12pt} \begin{itemize} \item When Soar is unable to select a new operator (in the decision cycle), it is said to reach an operator impasse.\vspace{-8pt} \end{itemize} All impasses appear as states in working memory, where they can be tested by productions. This section describes the structure of state objects in working memory. % ---------------------------------------------------------------------------- \subsection{Impasses in working memory} \label{SYNTAX-impasseaug} %perf-goal-impa} There are four types of impasses. \nocomment{rewrite this section to show templates of what the objects look like in working memory for different types of impasses} \index{decision!procedure} \index{impasse} Below is a short description of the four types of impasses. (This was described in more detail in Section \ref{ARCH-impasses} on page \pageref{ARCH-impasses}.)\vspace{-12pt} \begin{enumerate} \item \emph{tie}: when there is a collection of equally eligible operators competing for the value of a particular attribute;\vspace{-8pt} \item \emph{conflict}: when two or more objects are better than each other, and they are not dominated by a third operator;\vspace{-8pt} \item \emph{constraint-failure}: when there are conflicting necessity preferences; \vspace{-8pt} \item \emph{no-change}: when the proposal phase runs to quiescence without suggesting a new operator. 
\end{enumerate} \index{impasse!types} \index{tie impasse} \index{conflict impasse} \index{constraint-failure impasse} \index{no-change impasse} \index{elaboration!phase} \index{impasse!resolution} \index{goal!termination} \index{subgoal!termination} The list below gives the seven augmentations that the architecture creates on the substate generated when an impasse is reached, and the values that each augmentation can contain:\vspace{-12pt} \begin{description} \item [\soar{\carat type state}] \vspace{-8pt} \item [\soar{\carat impasse}] Contains the impasse type: \soar{tie}, \soar{conflict}, \soar{constraint-failure}, or \soar{no-change}.\vspace{- 8pt} \item [\soar{\carat choices}]Either \soar{multiple} (for tie and conflict impasses), \soar{constraint-failure} (for constraint-failure impasses), or \soar{none} (for no-change impasses).\vspace{-8pt} \item [\soar{\carat superstate}] Contains the identifier of the state in which the impasse arose.\vspace{-8pt} \index{superstate} \item [\soar{\carat attribute}] For multi-choice and constraint-failure impasses, this contains \soar{operator}. For no-change impasses, this contains the attribute of the last decision with a value (\soar{state} or \soar{operator}).\vspace{-8pt} \index{subgoal!augmentations} \item [\soar{\carat item}] For multi-choice and constraint-failure impasses, this contains all values involved in the tie, conflict, or constraint-failure. If the set of items that tie or conflict changes during the impasse, the architecture removes or adds the appropriate item augmentations without terminating the existing impasse.\vspace{- 8pt} \index{item (attribute)} \item [\soar{\carat item-count}] For multi-choice and constraint-failure impasses, this contains the number of values listed under the item augmentation above.\vspace{-8pt} \index{item-count (attribute)} \item [\soar{\carat non-numeric}] For tie impasses, this contains all operators that do not have numeric indifferent preferences associated with them. If the set of items that tie changes during the impasse, the architecture removes or adds the appropriate non-numeric augmentations without terminating the existing impasse. \vspace{-8pt} \index{item (attribute)} \item [\soar{\carat non-numeric-count}] For tie impasses, this contains the number of operators listed under the non-numeric augmentation above.\vspace{-8pt} \index{item-count (attribute)} \item [\soar{\carat quiescence}] States are the only objects with \soar{quiescence t}, which is an explicit statement that quiescence (exhaustion of the elaboration cycle) was reached in the superstate. If problem solving in the subgoal is contingent on quiescence having been reached, the substate should test this flag. The side-effect is that no chunk will be built if it depended on that test. See Section \ref{CHUNKING-creation} on page \pageref{CHUNKING-creation} for details. This attribute can be ignored when learning is turned off. \index{quiescence t (augmentation)} \index{exhaustion} \end{description} Knowing the names of these architecturally defined attributes and their possible values will help you to write productions that test for the presence of specific types of impasses so that you can attempt to resolve the impasse in a manner appropriate to your program. Many of the default productions in the \soar{demos/defaults} directory of the Soar distribution provide means for resolving certain types of impasses. 
You may wish to make use of some of all of these productions or merely use them as guides for writing your own set of productions to respond to impasses. \subsubsection*{Examples} The following is an example of a substate that is created for a tie among three operators: \index{goal!examples} \index{impasse!examples} \begin{verbatim} (S12 ^type state ^impasse tie ^choices multiple ^attribute operator ^superstate S3 ^item O9 O10 O11 ^quiescence t) \end{verbatim} \vspace{12pt} The following is an example of a substate that is created for a no-change impasse to apply an operator: \begin{verbatim} (S12 ^type state ^impasse no-change ^choices none ^attribute operator ^superstate S3 ^quiescence t) (S3 ^operator O2) \end{verbatim} \vspace{12pt} % ---------------------------------------------------------------------------- \subsection{Testing for impasses in productions} Since states appear in working memory, they may also be tested for in the conditions of productions. % There are numerous examples of this in the set of default productions (see % Section \ref{default} or Appendix \ref{DEFAULT} for more information). For example, the following production tests for a constraint-failure impasse on the top-level state. \begin{verbatim} sp {default*top-goal*halt*operator*failure "Halt if no operator can be selected for the top goal." :default (state <ss> ^impasse constraint-failure ^superstate <s>) (<s> ^superstate nil) --> (write (crlf) |No operator can be selected for top goal.| ) (write (crlf) |Soar must halt.| ) (halt) } \end{verbatim} % ---------------------------------------------------------------------------- \section{Soar I/O: Input and Output in Soar} \label{SYNTAX-io} \index{I/O} \index{motor commands|see{I/O}} Many Soar users will want their programs to interact with a real or simulated environment. For example, Soar programs could control a robot, receiving sensory \emph{inputs} and sending command \textit{outputs}. Soar programs might also interact with simulated environments, such as a flight simulator. The mechanisms by which Soar receives inputs and sends outputs to an external process is called \emph{Soar I/O}. This section describes how input and output are represented in working memory and in productions. The details of creating and registering the input and output functions for Soar are beyond the scope of this manual, but they are described in the \textit{SML Quick Start Guide}. This section is provided for the sake of Soar users who will be making use of a program that has already been implemented, or for those who would simply like to understand how I/O is implemented in Soar. % A simple example % of Soar I/O using Tcl is provided in Section (Appendix?) \ref{Interface-Tcl_I/O}. % ---------------------------------------------------------------------------- \subsection{Overview of Soar I/O} When Soar interacts with an external environment, it must make use of mechanisms that allow it to receive input from that environment and to effect changes in that environment. An external environment may be the real world or a simulation; input is usually viewed as Soar's perception and output is viewed as Soar's motor abilities. \index{I/O!input functions} \index{I/O!output functions} \index{input functions|see{I/O!input functions}} \index{output functions|see{I/O!output functions}} Soar I/O is accomplished via \emph{input functions} and \emph{output functions}. 
Input functions are called at the \emph{start} of every execution cycle, and add elements directly to specific input structures in working memory. These changes to working memory may change the set of productions that will fire or retract. Output functions are called at the \emph{end} of every execution cycle and are processed in response to changes to specific output structures in working memory. An output function is called only if changes have been made to the output-link structures in working memory. \index{I/O!io attribute} \index{io attribute|see{I/O!io attribute}} \index{I/O!input links} \index{I/O!output links} \index{input links|see{I/O!input links}} \index{output links|see{I/O!output links}} The structures for manipulating input and output in Soar are linked to a predefined attribute of the top-level state, called the \soar{io} attribute. The \soar{io} attribute has substructure to represent sensor inputs from the environment called \emph{input links}; because these are represented in working memory, Soar productions can match against input links to respond to an external situation. Likewise, the \soar{io} attribute has substructure to represent motor commands, called \emph{output links}. Functions that execute motor commands in the environment use the values on the output links to determine when and how they should execute an action. Generally, input functions create and remove elements on the input link to update Soar's perception of the environment. Output functions respond to values of working memory elements that appear on Soar's output link strucure. % ---------------------------------------------------------------------------- \subsection{Input and output in working memory} \label{ADVANCED-io-wm} All input and output is represented in working memory as substructure of the \soar{io} attribute of the top-level state. By default, the architecture creates an \soar{input-link} attribute of the \soar{io} object and an \soar{output-link} attribute of the io object. The values of the \soar{input-link} and \soar{output-link} attributes are identifiers whose augmentations are the complete set of input and output working memory elements, respectively. Some Soar systems may benefit from having multiple input and output links, or that use names which are more descriptive of the input or output function, such as \soar{vision-input-link}, \soar{text-input-link}, or \soar{motor-output-link}. In addition to providing the default \soar{io} substructure, the architecture allows users to create multiple input and output links via productions and I/O functions. Any identifiers for \soar{io} substructure created by the user will be assigned at run time and are not guaranteed to be the same from run to run. Therefore users should always employ variables when referring to input and output links in productions. Suppose a blocks-world task is implemented using a robot to move actual blocks around, with a camera creating input to Soar and a robotic arm executing command outputs. \begin{figure} \insertfigure{blocks-inputlink}{3.5in} \insertcaption{An example portion of the input link for the blocks-world task.} \label{fig:blocks-inputlink} \end{figure} The camera image might be analyzed by a separate vision program; this program could have as its output the locations of blocks on an xy plane. 
The Soar input function could take the output from the vision program and create the following working memory elements on the input link (all identifiers are assigned at runtime; this is just an example of possible bindings): \begin{verbatim} (S1 ^io I1) [A] (I1 ^input-link I2) [A] (I2 ^block B1) (I2 ^block B2) (I2 ^block B3) (B1 ^x-location 1) (B1 ^y-location 0) (B1 ^color red) (B2 ^x-location 2) (B2 ^y-location 0) (B2 ^color blue) (B3 ^x-location 3) (B3 ^y-location 0) (B3 ^color yellow) \end{verbatim} \vspace{12pt} The '[A]' notation in the example is used to indicate the working memory elements that are created by the architecture and not by the input function. This configuration of blocks corresponds to all blocks on the table, as illustrated in the initial state in Figure \ref{fig:blocks}. \begin{figure} \insertfigure{blocks-outputlink}{3.5in} \insertcaption{An example portion of the output link for the blocks-world task.} \label{fig:blocks-outputlink} \end{figure} Then, during the Apply Phase of the execution cycle, Soar productions could respond to an operator, such as ``move the red block ontop of the blue block'' by creating a structure on the output link, such as: \begin{verbatim} (S1 ^io I1) [A] (I1 ^output-link I3) [A] (I3 ^name move-block) (I3 ^moving-block B1) (I3 ^x-destination 2) (I3 ^y-destination 1) (B1 ^x-location 1) (B1 ^y-location 0) (B1 ^color red) \end{verbatim} \vspace{12pt} The '[A]' notation is used to indicate the working memory elements that are created by the architecture and not by productions. An output function would look for specific structure in this output link and translate this into the format required by the external program that controls the robotic arm. Movement by the robotic arm would lead to changes in the vision system, which would later be reported on the input-link. Input and output are viewed from Soar's perspective. An \emph{input function} adds or deletes augmentations of the \soar{input-link} providing Soar with information about some occurrence external to Soar. An \emph{output function} responds to substructure of the \soar{output-link} produced by production firings, and causes some occurrence external to Soar. Input and output occur through the \soar{io} attribute of the top-level state exclusively. \index{top-state!for I/O} Structures placed on the input-link by an input function remain there until removed by an input function. During this time, the structure continues to provide support for any production that has matched against it. The structure does \emph{not} cause the production to rematch and fire again on each cycle as long as it remains in working memory; to get the production to refire, the structure must be removed and added again. %The substructure of the input-link will remain in working memory until %the input function that %created it removes it. Thus working memory elements produced by an %input function provide support for condition-matching %in productions as long as the input persists in working memory, i.e. %until the input function specifically removes the elements of the %substructure. However, %a production that tests only a single element on the input structure will %result in instantiations that fire only once for each input element that %matches. The instantiation will not continue to fire for each matched %input element, unless the element is removed and then added again. 
% ---------------------------------------------------------------------------- \subsection{Input and output in production memory} \label{ADVANCED-io-pm} Productions involved in \emph{input} will test for specific attributes and values on the input-link, while productions involved in \emph{output} will create preferences for specific attributes and values on the output link. For example, a simplified production that responds to the vision input for the blocks task might look like this: \begin{verbatim} sp {blocks-world*elaborate*input (state <s> ^io.input-link <in>) (<in> ^block <ib1>) (<ib1> ^x-location <x1> ^y-location <y1>) (<in> ^block {<ib2> <> <ib1>}) (<ib2> ^x-location <x1> ^y-location {<y2> > <y1>}) --> (<s> ^block <b1>) (<s> ^block <b2>) (<b1> ^x-location <x1> ^y-location <y1> ^clear no) (<b2> ^x-location <x1> ^y-location <y2> ^above <b1>) } \end{verbatim} \vspace{12pt} This production ``copies'' two blocks and their locations directly to the top-level state. %This is a generally a good idea when using input, since the input %function may change the information on the link before the Soar program has %finished using it. It also adds information about the relationship between the two blocks. The variables used for the blocks on the RHS of the production are deliberately different from the variable name used for the block on the input-link in the LHS of the production. If the variable were the same, the production would create a link into the structure of the input-link, rather than copy the information. The attributes \soar{x-location} and \soar{y-location} are assumed to be values and not identifiers, so the same variable names may be used to do the copying. A production that creates wmes on the output-link for the blocks task might look like this: \begin{verbatim} sp {blocks-world*apply*move-block*send-output-command (state <s> ^operator <o> ^io.output-link <out>) (<o> ^name move-block ^moving-block <b1> ^destination <b2>) (<b1> ^x-location <x1> ^y-location <y1>) (<b2> ^x-location <x2> ^y-location <y2>) --> (<out> ^move-block <b1> ^x-destination <x2> ^y-destination (+ <y2> 1)) } \end{verbatim} \vspace{12pt} This production would create substructure on the output-link that the output function could interpret as being a command to move the block to a new location.
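In many Soar systems there is also a companion production that cleans up the
output-link once a command has been carried out, so that the same command is
not acted on twice. The production below is only a sketch of this pattern: it
assumes, purely for illustration, that some input or output function marks a
finished command by adding a \soar{^status complete} augmentation to the
output-link (this convention is not part of the architecture). The command is
then removed with the reject (\soar{-}) actions described earlier in this
chapter; as noted there, such removals should come from actions that receive
o-support, and operator application rules like this one are the typical place
for them.
\begin{verbatim}
sp {blocks-world*apply*move-block*remove-output-command
    (state <s> ^operator <o>
               ^io.output-link <out>)
    (<o> ^name move-block)
    (<out> ^move-block <b1>
           ^x-destination <x2>
           ^y-destination <y2>
           ^status complete)
    -->
    (<out> ^move-block <b1> -)
    (<out> ^x-destination <x2> -)
    (<out> ^y-destination <y2> -)
    (<out> ^status complete -)
}
\end{verbatim}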
%!TEX root = ./ERL Industrial Robots.tex
%--------------------------------------------------------------------
%--------------------------------------------------------------------
\subsection{Manipulation Place Functionality}
\label{ssec:ManipulationPlace}

%--------------------------------------------------------------------
\subsubsection{Functionality Description}
\label{sssec:ManipulationPlaceDescription}

This functionality benchmark assesses the robot's capability of placing different objects. In each test, an object from a known set of possible objects is presented to the robot to be placed. After grasping the object, the robot needs to lift it, perform the placing motion and place the object in a particular container/box.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{./fig/FBM/atwork/place_object.jpg}
\caption{Manipulation Place Functionality.}
\label{fig:ManipulationPlace}
\end{center}
\end{figure}

%--------------------------------------------------------------------
\subsubsection{Feature Variation}
\label{sssec:FBMManipulationPlaceVariation}

The objects used in the benchmark will be selected from the list of parts to manipulate as presented in Section \ref{ssec:Objects}. Additionally, the precise position of the object placement differs in each test.

%--------------------------------------------------------------------
\subsubsection{Input Provided}
\label{sssec:FBMManipulationPlaceInput}

The team will be provided with the following information:
\begin{itemize}
	\item The list of possible objects used in the functionality benchmark.
	\item Possible placement of each object used in the functionality benchmark.
\end{itemize}

%--------------------------------------------------------------------
\subsubsection{Expected Robot Behaviour or Output}
\label{sssec:FBMManipulationPlaceOutput}

The robot is placed in front of the test area (a planar surface). One object will be placed on the robot or in its gripper. The robot now has to search the test area for a container in which to place the object. The robot performs the placing motion and places the object in a particular container in front of it. After placement it notifies the CFH.

%--------------------------------------------------------------------
\subsubsection{Procedures and Rules}
\label{sssec:FBMManipulationPlaceProcedures}

The maximum time allowed for one functionality run is 4 minutes (30 seconds for preparation and 210 seconds for execution). A run consists of (1) a preparation phase, in which the robot moves to its initial configuration and grasps the object, and (2) an execution phase, in which the robot detects, localizes, recognizes and manipulates one object.

\begin{description}
	\item[Step 1] An object of known class and known instance will be placed on the robot.
	\item[Step 2] The robot must scan the test area and find a container.
	\item[Step 3] The robot must place the object inside the container and notify that placing has occurred.
	\item[Step 4] The preceding steps are repeated with five different objects.
\end{description}

\subsubsection{Communication with CFH}
\label{sssec:FBMManipulationPlaceCommCFH}

For this functionality benchmark the robot does not have to control any networked device in the environment. Only the communication as described below is necessary:

\begin{description}
	\item[Step 1] The robot sends a \textbf{BeaconSignal} message at least every second.
	\item[Step 2] The robot waits for a \textbf{BenchmarkState} message.
It starts the preparation procedure when the \emph{phase} field is equal to PREPARATION and the \emph{state} field is equal to RUNNING.
	\item[Step 3] As soon as the robot finishes the preparation phase, it sends a message of type \textbf{BenchmarkFeedback} to the CFH with the \emph{phase\_to\_terminate} field set to PREPARATION. The robot should do this until the \textbf{BenchmarkState}'s \emph{phase} and \emph{state} fields have changed.
	\item[Step 4] The robot waits for a \textbf{BenchmarkState} message. It starts the benchmark execution when the \emph{phase} field is equal to EXECUTION and the \emph{state} field is equal to RUNNING.
	\item[Step 5] As soon as the robot has finished manipulating the object, it sends a message of type \textbf{BenchmarkFeedback} to the CFH with the required results and the \emph{phase\_to\_terminate} field set to EXECUTION. The robot should do this until the \textbf{BenchmarkState}'s \emph{phase} and \emph{state} fields have changed.
	\item[Step 6] The robot continues with Step 2.
	\item[Step 7] The functionality benchmark ends when the \textbf{BenchmarkState}'s \emph{phase} field is equal to EXECUTION and the \emph{state} field is equal to FINISHED.
\end{description}

\noindent
The messages to be sent and to be received can be seen on the Github repository located at \cite{rockin:CFHMessages}.

%--------------------------------------------------------------------
\subsubsection{Acquisition of Benchmarking Data}
\label{sssec:FBMManipulationPlaceData}

General information on the acquisition of benchmarking data is described in Section \ref{sec:TbmAcquisitionOfData}. There, the \textbf{offline} part of the benchmarking data can be found.
%
\paragraph{Online data}
In order to send online benchmarking data to the CFH, the robot has to use the \textbf{BenchmarkFeedback} message. The message contains:
\begin{itemize}
	\item grasp\_notification (type: bool)
\end{itemize}
The \textbf{BenchmarkFeedback} message can be found at \cite{rockin:CFHMessages}.

\paragraph{Offline data}
The additional information described in the following table has to be logged:

\begin{table}[h]
\centering
\begin{footnotesize}
\begin{tabular}{|l|l|l|l|}
\hline
Topic & Type & Frame Id & Notes \\
\hline\hline
/rockin/grasping\_pose\tablefootnote{Pose of the grasping position on the object.} & geometry\_msgs/PoseStamped & /base\_link & 10 Hz \\ \hline
/rockin/gripper\_pose\tablefootnote{Pose of the gripper.} & geometry\_msgs/PoseStamped & /base\_link & 10 Hz \\ \hline
/rockin/arm\_joints\tablefootnote{Joint data.} & sensor\_msgs/JointState & /base\_link & 10 Hz \\ \hline
\end{tabular}
\end{footnotesize}
\end{table}

%--------------------------------------------------------------------
\subsubsection{Scoring and Ranking}
\label{sssec:FBMManipulationPlaceScoring}

Evaluation of the performance of a robot according to this functionality benchmark is based on:
%
\begin{enumerate}
	\item Number and percentage of correctly placed objects (see the definition of a correct place below);
	\item Execution time (if less than the maximum allowed for the benchmark).
\end{enumerate}

The scoring of teams is based on the number of objects correctly placed. A correct place is defined as the object being completely inside the container. In case of ties the overall execution time will be taken into account.

%--------------------------------------------------------------------
% EOF
%--------------------------------------------------------------------
%\setcounter{page}{1200}
\chapter{Taylor Series\label{TaylorSeriesChapter}}

In this chapter we apply our knowledge of series of constant terms
\begin{equation}\sum_{n=1}^\infty a_n =a_1+a_2+a_3+\cdots\end{equation}
by generalizing to series with terms which are functions of a variable $x$.
In particular, much can be done with series of the following form:
\begin{definition}
A {\bf power series centered at} $x=a$ is a series of the form
\begin{equation}\sum_{n=0}^\infty a_n(x-a)^n
=a_0+a_1(x-a)+a_2(x-a)^2+a_3(x-a)^3+\cdots,\label{powerseries}\end{equation}
where $a$, $a_0$, $a_1$, etc., are constants.
\end{definition}
Given such an expression, two questions arise.
\begin{enumerate}[(i)]
\item For which values of $x$ (besides the obvious, $x=a$) does the series
(\ref{powerseries}) converge, and to what?
\item Why would we study such things?
\end{enumerate}
The first question (i) is really just a natural extension of the convergence
questions for constant series. For convergence, the ratio test and (less
frequently) the root test play prominent roles, since the series---at first
glance---look vaguely geometric.\footnote{%
%%% FOOTNOTE
This is meant in the sense that, if $a_0$, $a_1$, etc., were all the same
number, the series would be geometric, with ratio $r=(x-a)$.%
%%% END FOOTNOTE
}
What these converge to will sometimes be obvious from how they arise.

To answer (ii), for now we just mention some questions for which power series
eventually provide answers:
\begin{enumerate}
\item How does a calculator ``compute'' $\sin x$, $\cos x$, $\tan^{-1}x$, and
other transcendental functions?
\item Why can physicists claim, for instance, $\sin \theta\approx\theta$?
While this makes many computations easier, to what extent does it sacrifice
precision?
\item How does one come up with equations for the behaviors of extremely
complicated systems where one cannot {\it a priori} derive the exact
relationships among variables?
\end{enumerate}
We will answer the first question thoroughly in what follows, spending most
of our time developing the theory and computation of Taylor Polynomials and
Taylor Series. The second question will be answered in the process of
tackling the first. One such example where physicists use
$\sin \theta\approx \theta$ (for small $|\theta|$) is explored. The third
question will be addressed anecdotally when opportunity arises.

We begin the chapter with a derivation of the Taylor Polynomials,\footnote{%
%%% FOOTNOTE
Named for Brook Taylor (1685--1731), though it is not clear that he was the
initial discoverer. In fact Johann Bernoulli (1667--1748) and others are
often given equal or greater credit for the discovery of the polynomials or
related series, but apparently after a 1786 paper by Simon Antoine Jean
Lhuilier (1750--1840) referred to ``Taylor series,'' the terms became
standard. Lhuilier was also responsible for the ``lim'' notation in limits,
as well as left- and right-hand limits, and many other important aspects of
our modern notation.
%%% END FOOTNOTE
} with examples. We then look at the accuracy of Taylor Polynomials, and give
a sufficient condition that the accuracy becomes perfect as the degree of the
polynomials is allowed to approach infinity. When that occurs, Taylor
Polynomials give rise to Taylor Series, which are of the form
(\ref{powerseries}). We derive many such series, and examine where they
converge. We also apply the series to real-world applications, as well as
derivative and integration problems.
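As a small illustration of question (i), using only facts about geometric
series from our earlier work, consider the power series centered at $a=0$ in
which every coefficient equals $1$. For each fixed $x$ this is simply a
geometric series with ratio $r=x$, so
\begin{equation*}
\sum_{n=0}^\infty x^n=1+x+x^2+x^3+\cdots=\frac1{1-x}\qquad\text{for }|x|<1,
\end{equation*}
while the series diverges for $|x|\ge1$. This small example already shows the
shape of the answer to (i): a power series typically converges on an interval
of $x$-values containing $x=a$, and on that interval it defines a function of
$x$.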
\newpage \section{Taylor Polynomials: Motivation, Derivation and Definition} This section introduces Taylor Polynomials. The idea is to find simple polynomial approximations for a more complicated function given certain data regarding its behavior. In particular, if we know $f(a)$, $f'(a)$, $f''(a)$, and so on, then we should know something about how the function $f(x)$ behaves near $x=a$, and produce a polynomial which mimics that behavior. Subsections~\ref{LemmaSubsectionForTaylorPolynomials}% ---\ref{SubsectionForDerivP_N} contain the arguments and derivations of the forms of these polynomials. These subsections can be skipped, or skimmed, in a first reading.\footnote{%%% %%% FOOTNOTE Indeed, most textbooks give a different derivation, or just verify the properties of these polynomials, in the same spirit as our Subsection~\ref{DefAndExamplesOfTaylorPolynomials}, beginning on page~\pageref{DefAndExamplesOfTaylorPolynomials}.} %%% END FOOTNOTE However the derivation provides some further insight into the properties of the Taylor Polynomials, particularly regarding their underlying assumptions and accuracy. Before we make the derivations, we will need a simple lemma which we use repeatedly. We derive that lemma next. \subsection{A Lemma\label{LemmaSubsectionForTaylorPolynomials}} We will make repeated use of the following lemma: \begin{lemma} Given any function $g$, with derivative $g'$ existing and continuous on the closed interval with endpoints $x$ and $a$ (i.e., $[a,x]$ or $[x,a]$, depending upon whether $x\le a$ or $a\le x$), the following equation holds: \begin{equation}g(x)=g(a)+\int_a^xg'(t)\,dt.\label{alttaylemmaeq} \end{equation} \label{alttaylemma} \end{lemma} \begin{proof}Since $g$ is clearly an antiderivative of $g'$, the Fundamental Theorem of Calculus gives \begin{align*} g(a)+\int_a^xg'(t)\,dt&=\left.\vphantom{\int}g(a)+g(t)\right|_a^x\\ &=g(a)+g(x)-g(a)\\ &=g(x),\end{align*} which is the equation (\ref{alttaylemmaeq}) in reverse, q.e.d. \end{proof} It is interesting to note that when $x=a$, (\ref{alttaylemmaeq}) becomes $g(a)=g(a)+\int_a^ag'(t)\,dt=g(a)+0=g(a)$. Of course the integral over any interval of length zero is necessarily zero. \subsection{Derivation of $P_0(x)$} \begin{figure} \begin{center} \begin{pspicture}(-1,-2)(6.5,6) \psaxes% [labels=none,Dx=10,Dy=10] {<->}(0,-1)(-1,-2)(6.5,6) \psplot[plotpoints=2000,linewidth=1pt]{-.3}{6.4}% {x 1 sub 2 mul 180 mul 3.1415926536 div sin x 3 sub dup mul .5 mul add 1 add}%% % %\psplot[linecolor=red]{3}{5}{1.220584502 x 4 sub 2.920340573 mul add} %\psplot[linecolor=blue]{3}{5}{1.220584502 x 4 sub 2.920340573 mul add % % x 4 sub dup mul 2.117661993 mul add} \pscircle*(3.5,0.166075725){.06} \psline(3.5,-1.2)(3.5,-.8) \rput(3.5,-1.5){$a$} \psline(-1,0.166075725)(6.5,0.166075725) \rput(6,.5){$P_0(x)$} \rput[l](6,6){$f(x)$} \end{pspicture} \end{center} \caption{A function $f(x)$, and its zeroth-order (constant) approximation centered at $x=a$, namely $f(x)\approx P_0(x)=f(a)$. For $x$ very close to $a$, this is not an unreasonable approximation, but its accuracy quickly degenerates as $|x-a|$ gets larger.} \label{F(x)andP_0(x)} \end{figure} For a function $f(x)$, if we would like to approximate the value of the function for $x$ near $a$, the simplest assumption is that the function is approximately constant near $x=a$. The obvious choice for that constant is $f(a)$ itself. Hence we might assume $f(x)\approx f(a)$. (Note that $f(a)$ is itself a constant.) 
The approximation of $f(x)$ which assumes the function approximately constant is called $P_0(x)$: \begin{equation} P_0(x)=f(a).\label{DefinitionOfP0} \end{equation} This is also called the {\it zeroth-order approximation} of $f(x)$ centered at $x=a$, and we can write $f(x)\approx P_0(x)$ for $x$ near $a$, i.e., $|x-a|$ small. A function $f$ and its zeroth-order approximation for a particular $a$ are graphed together if Figure~\ref{F(x)andP_0(x)}. Summarizing, for $x$ near $a$, \begin{equation} f(x)\approx \underbrace{f(a)}_{P_0(x)}.\label{0thOrderApprox}\end{equation} A natural question then arises: how good is the approximation (\ref{0thOrderApprox})? Later we will have a sophisticated estimate on the error in assuming $f(x)\approx P_0(x)=f(a)$. For now we take the opportunity to forshadow that result by attacking the question intuitively. The answer will depend upon the answers to two related questions, which can be paraphrased as the following. \begin{enumerate}[(i)] \item How good is the assumption that $f$ is {\it constant} on the interval from $a$ to $x$? In other words, how fast is $f$ changing on that interval? \item How far is $x$ from $a$? \end{enumerate} These factors both contribute to the error. For instance if the interval from $a$ to $x$ is short, then a relatively slow change in $f$ means small error $f(x)-P_0(x)=f(x)-f(a)$ over such an interval. Slow change can, however, accumulate to create a large error if the interval from $a$ to $x$ is long. On the other hand, a small interval can still allow for large error if $f$ changes quickly on the interval. The key to estimating how fast the function changes is, as always, the size of $f'$, assuming it exists. Translating (i) and (ii) above into mathematical quantities, we say the bounds of the error will depend upon \begin{enumerate}[(a)] \item the size of $|f'(t)|$ as $t$ ranges from $a$ to $x$ (assuming $f'(t)$ exists for all such $t$), and \item the distance $|x-a|$. \end{enumerate} We will see similar factors accounting for error as we look at higher-order approximations $P_1(x)$, $P_2(x)$ and so on in this section, and the actual form of the general estimate for the error (also known as the {\it remainder}) in subsequent sections. \subsection{Derivation of $P_1(x)$} It was remarked in the last subsection that $P_0$ is not likely a good approximation for $x$ very far from $a$ if $f'$ is large. In computing $P_1(x)$, we will not assume $f$ is approximately constant (as we did with $P_0$), but instead assume that $f'$ is approximately constant. To be clear, here are the assumptions from which $P_1$ is computed: \begin{itemize} \item We know $f(a)$ and $f'(a)$; \item $f'(t)$ is approximately constant for $t$ from $a$ to $x$. \end{itemize} For this derivation we will use the lemma from the beginning of this section (that is Lemma~\ref{alttaylemma}, page~\pageref{alttaylemma}). Note that the following derivation uses $f'(a)$ is a constant. \begin{align*} f(x)&=f(a)+\int_a^xf'(t)\,dt\\ &\approx f(a)+\int_a^xf'(a)\,dt\\ &=f(a)+\left.\vphantom{\int}f'(a)t\right|_a^x\\ &=f(a)+f'(a)x-f'(a)a =\underbrace{f(a)+f'(a)(x-a)}_{P_1(x)}.\end{align*} Thus we define $P_1(x)$, the {\it first-order} approximation of $f(x)$ centered at $x=a$ by \begin{equation}P_1(x)=f(a)+f'(a)(x-a).\label{DefOfP_1(x)} \end{equation} This was also called the {\it linear approximation} of $f(x)$ at $a$ in Chapter~\ref{DerivativesToAnalyzeFunctions} ((\ref{LinearApproxWithX/A}), page~\pageref{LinearApproxWithX/A}). 
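For a quick numerical illustration of these first two approximations (with a
function and center chosen only for convenience, and unrelated to the
function in the figures), take $f(x)=\sqrt x$ and $a=100$, so that
$f(100)=10$ and $f'(100)=\frac1{2\sqrt{100}}=\frac1{20}$. Then
\begin{align*}
P_0(x)&=10,\\
P_1(x)&=10+\frac1{20}(x-100),
\end{align*}
so, for instance, $P_1(102)=10.1$, while $\sqrt{102}=10.0995\ldots$; the
zeroth-order estimate $P_0(102)=10$ is noticeably cruder.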
Figure~\ref{F(x),P_0(x)andP_1(x)} shows the same function as before (that is, in the illustration for $P_0$ in Figure~\ref{F(x)andP_0(x)}), but this time with both $P_0(x)$ and $P_1(x)$ for the same $a$. Because assuming constant derivative is often less problematic than considering constant height, $P_1(x)$ is usually a better approximation for $f(x)$ near $x=a$, and indeed one can usually stray farther from $x=a$ and have a reasonable approximation for $f(x)$ if $P_1(x)$ is used instead of $P_0(x)$.\footnote{%% %%% FOOTNOTE Note that in an example of motion, this is like choosing between an assumption of constant position, and of constant velocity. Intuitively the constant velocity assumption should yield a better approximation of position, for a while, than would a constant position assumption. Further consideration of this point is left to the reader.% %%% END FOOTNOTE } Again we ask how good is this newer approximation $P_1(x)$, and again the intuitive response is that it depends upon answers two questions: \begin{enumerate}[(i)] \item How close is $f'(t)$ to constant in the interval between $a$ and $x$? \item How far are we from $x=a$? \end{enumerate} The first question can be translated into, ``how fast is $f'$ changing on the interval between $a$ and $x$?'' This can be measured by the size of $f''$ in that interval, if it exists there. Again translating (i) and (ii) into quantifiables, we get that the accuracy of $P_1(x)$ depends upon \begin{enumerate}[(a)] \item the size of $|f''(t)|$ as $t$ ranges from $a$ to $x$ (assuming $f''(t)$ exists for all such $t$), and \item the distance $|x-a|$. \end{enumerate} If $f''$ is relatively small, then $f'$ is relatively constant, and then the computation we made giving $f(x)\approx f(a)+f'(a)(x-a)$, i.e., $f(x)\approx P_1(x)$, will be fairly accurate as long as $|x-a|$ is not too large. \begin{figure} \begin{center} \begin{pspicture}(-1,-2)(6.5,6) \psaxes% [labels=none,Dx=10,Dy=10] {<->}(0,-1)(-1,-2)(6.5,6) \psplot[plotpoints=2000,linewidth=1pt]{-.3}{6.4}% {x 1 sub 2 mul 180 mul 3.1415926536 div sin x 3 sub dup mul .5 mul add 1 add}%% % \psplot[linewidth=.75pt]{1.5}{5.5}{0.166075725 x 3.5 sub 1.067324371 mul add} \pscircle*(3.5,0.166075725){.05} \psline[linewidth=.5pt](3.5,-1.2)(3.5,-.8) \rput(3.5,-1.5){$a$} \psline(-1,0.166075725)(6.5,0.166075725) \rput(6,.5){$P_0(x)$} \rput(6,2.5){$P_1(x)$} \end{pspicture} \end{center} \caption{A function $f(x)$, with approximations $P_0(x)$ and $P_1(x)$ centered at $x=a$. Note how $P_1(x)$ matches both the height and slope of $f(x)$ at $x=a$, and thus tends to be a more accurate approximation of $f(x)$ near $x=a$ than is $P_0(x)$. Indeed, we can stray a little further from $x=a$ (particularly to the right for this function) and still have a reasonable approximation, then we could using the approximation $P_0$. \newline $\text{\qquad}$Now for this particular function, coincidentally $P_0(x)$ turns out to be briefly a better approximation as $f$ turns back upwards to the left of $x=a$. But closer to $x=a$, we see $P_1(x)$ is definitely the better approximation since it better ``follows the curve.''} \label{F(x),P_0(x)andP_1(x)} \end{figure} \subsection{Derivation of $P_2(x)$} To better accommodate the change in $f'$, we next replace the assumption that $f'$ is constant with the assumption that, rather than constant, it is changing at a constant rate. In other words, we assume that $f''$ is constant. 
So our assumptions in deriving $P_2(x)$ are:
\begin{itemize}
\item $f(a)$, $f'(a)$ and $f''(a)$ are known;
\item $f''(t)$ is approximately constant from $t=a$ to $t=x$.
\end{itemize}
Again we use the lemma at the beginning of the section, except this time we use it twice: first to approximate $f'$, and then to integrate that approximation and so approximate $f$.
\begin{align*} f'(x)&=f'(a)+(f'(x)-f'(a))\\ &=f'(a)+\int_a^x f''(t)\,dt\\ &\approx f'(a)+\int_a^xf''(a)\,dt\\ &=f'(a)+f''(a)(x-a). \end{align*}
Note that the computation above was the same as in the previous subsection, except that the part of $f'$ there is played by $f''$ here, and the part of $f$ there is played by $f'$ here. We integrate again to approximate $f$. The second line below uses the approximation for $f'$ derived above.
\begin{alignat*}{2} f(x)&=f(a)+\int_a^xf'(t)\,dt&&\text{(Lemma~\ref{alttaylemma})}\\ &\approx f(a)+\int_a^x\left[f'(a)+f''(a)(t-a)\right]\,dt &&\text{(Approximation for }f'\text{ above)}\\ &=f(a)+f'(a)(x-a)+\left.\left[\frac{f''(a)}2(t-a)^2\right]\right|_a^x\\ &=f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2-\frac12f''(a)(a-a)^2\\ &=\underbrace{f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2}_{P_2(x)}. \end{alignat*}
Thus we define the {\it second-order} (or {\it quadratic}) approximation of $f(x)$ centered at $x=a$ by
\begin{equation} P_2(x)=f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2.\label{DefOfP_2(x)} \end{equation}
Again, the accuracy depends upon (i) how close $f''(t)$ is to constant from $t=a$ to $t=x$, and (ii) how far we are from $x=a$. These can be quantified by (a)~the size of $|f'''(t)|$ on the interval from $t=a$ to $t=x$, and (b)~the size of $|x-a|$. It is reasonable to take into account how fast $f'$ changes on the interval from $a$ to $x$. For $P_2$ we assume, not that $f'$ is approximately constant as we did with $P_1(x)$, but that the rate of change of $f'$ is constant on the interval, i.e., that $f''$ is constant (and equal to $f''(a)$) on the interval. In fact this tends to make $P_2(x)$ ``hug'' the graph of $f(x)$ better, since it accounts for the concavity. The extent to which we err in that assumption is the extent to which $f''$ (related to concavity) is non-constant, but at least near $x=a$, $P_2(x)$ accommodates the concavity, as well as the slope and height, of the function $f(x)$. Figure~\ref{F(x),P_0(x),P_1(x)andP_2(x)} shows how $P_0(x)$, $P_1(x)$ and $P_2(x)$ approximate $f(x)$ near $x=a$.
\begin{figure}
\begin{center}
\begin{pspicture}(-1,-2)(6.5,6)
\psaxes%
[labels=none,Dx=10,Dy=10]
{<->}(0,-1)(-1,-2)(6.5,6)
\psplot[plotpoints=2000,linewidth=1pt]{-.3}{6.4}%
{x 1 sub 2 mul 180 mul 3.1415926536 div sin x 3 sub dup mul .5 mul add 1 add}%%
%
\psplot[linewidth=.7pt]{1.5}{5.5}{0.166075725 x 3.5 sub 1.067324371 mul add}
\psplot[plotpoints=2000,%
linewidth=.8pt]{1.8}{4.75}{0.166075725 x 3.5 sub 1.067324371 mul add %
x 3.5 sub dup mul .5 mul 4.835697099 mul add}
\pscircle*(3.5,0.166075725){.05}
\psline[linewidth=.5pt](3.5,-1.2)(3.5,-.8)
\rput(3.5,-1.5){$a$}
\psline(-1,0.166075725)(6.5,0.166075725)
\rput(6,.4){$P_0(x)$}
\rput(6,2.5){$P_1(x)$}
\rput(4.7,5.5){$P_2(x)$}
\end{pspicture}
\end{center}
\caption{A function $f(x)$, with approximations $P_0(x)$, $P_1(x)$ and $P_2(x)$ centered at $x=a$.}
\label{F(x),P_0(x),P_1(x)andP_2(x)}
\end{figure}
\subsection{Derivation of $P_3(x)$}
For the next order of approximation, we assume not that $f''$ is constant, but that its change is constant, i.e., that $f'''$ is constant on the interval from $a$ to $x$.
Our assumptions are then
\begin{itemize}
\item $f(a)$, $f'(a)$, $f''(a)$ and $f'''(a)$ are known;
\item $f'''(t)$ is approximately constant from $t=a$ to $t=x$.
\end{itemize}
We start with the approximation $f'''(t)\approx f'''(a)$, and use the lemma three times to get an approximation of $f$. First we use $f'''(t)\approx f'''(a)$ to approximate $f''(x)$:
\begin{align*} f'''(t)&\approx f'''(a)\\ \implies f''(x)&=f''(a)+(f''(x)-f''(a))\\ &= f''(a)+\int_a^xf'''(t)\,dt\\ &\approx f''(a)+\int_a^xf'''(a)\,dt\\ &=f''(a)+f'''(a)(x-a).\end{align*}
From this we approximate $f'(x)$:
\begin{align*} f'(x)&=f'(a)+(f'(x)-f'(a))\\ &= f'(a)+\int_a^xf''(t)\,dt\\ &\approx f'(a)+\int_a^x\left(f''(a)+f'''(a)(t-a)\right)\,dt\\ &=f'(a)+f''(a)(x-a)+\left.\left[\frac12f'''(a)(t-a)^2\right] \right|_a^x\\ &=f'(a)+f''(a)(x-a)+\frac12f'''(a)(x-a)^2. \end{align*}
Finally we use this to approximate $f(x)$:
\begin{align*} f(x)&=f(a)+(f(x)-f(a))\\ &=f(a)+\int_a^xf'(t)\,dt\\ &\approx f(a)+\int_a^x\left[f'(a)+f''(a)(t-a)+\frac12f'''(a)(t-a)^2 \right]\,dt\\ &=f(a)+\left.\left[f'(a)(t-a)+\frac12f''(a)(t-a)^2+\frac1{2\cdot3}f'''(a)(t-a)^3\right]\right|_a^x\\ &=\underbrace{f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2+\frac1{2\cdot3}f'''(a)(x-a)^3}_{P_3(x)}. \end{align*}
This last expression is our {\it third-order} (or {\it cubic}) approximation $P_3(x)$ of $f(x)$ centered at $x=a$.
\begin{figure}
\begin{center}
\begin{pspicture}(-1,-2)(6.5,6)
\psaxes%
[labels=none,Dx=10,Dy=10]
{<->}(0,-1)(-1,-2)(6.5,6)
\psplot[plotpoints=2000,linewidth=1pt]{-.3}{6.4}%
{x 1 sub 2 mul 180 mul 3.1415926536 div sin x 3 sub dup mul .5 mul add 1 add}%%
%
\psplot[linewidth=.5pt]{1.5}{6.3}{0.166075725 x 3.5 sub 1.067324371 mul add}
\psplot[plotpoints=2000,%
linewidth=.6pt]{1.8}{4.75}{0.166075725 x 3.5 sub 1.067324371 mul add %
x 3.5 sub dup mul .5 mul 4.835697099 mul add}
\psplot[plotpoints=2000,%
linewidth=.8pt]{1.9}{5}{0.166075725 x 3.5 sub 1.067324371 mul add %
x 3.5 sub dup mul .5 mul 4.835697099 mul add x 3.5 sub dup dup mul mul 6 div -2.269297484 mul add}
\pscircle*(3.5,0.166075725){.05}
\psline[linewidth=.5pt](3.5,-1.2)(3.5,-.8)
\rput(3.5,-1.5){$a$}
\psline(-1,0.166075725)(6.5,0.166075725)
\rput(-.8,.5){$P_0(x)$}
\rput(6.5,3.4){$P_1(x)$}
\rput(4.1,5){$P_2(x)$}
\rput(1.3,5){$P_2(x)$}
\rput(2.4,6){$P_3(x)$}
\rput(5.5,5.7){$P_3(x)$}
\end{pspicture}
\end{center}
\caption{A function $f(x)$, with approximations $P_0(x)$, $P_1(x)$, $P_2(x)$ and $P_3(x)$ centered at $x=a$. $P_3(x)$ accommodates the way $f''$ is changing very close to $x=a$, and this is reflected in the way $P_3(x)$ better approximates $f(x)$ for longer as we move right from $x=a$. However, as we move further left of $x=a$, it happens for this particular $f(x)$ and this particular $a$ that $P_2(x)$ soon becomes a slightly better approximation. This occurs occasionally; we saw (and see here) that $P_0(x)$ is briefly a better approximation somewhat left of $x=a$. In fact $f^{(4)}$ is rather large near $x=a$, especially left of $x=a$, which explains why $P_2$ becomes a better approximation just left of $x=a$. Still, both are respectable approximations a distance from $x=a$, and quite excellent improvements over $P_0$ and $P_1$. }
\label{F(x),P_0(x),P_1(x),P_2(x)andP_3(x)}
\end{figure}
Now we are ready to notice a pattern, and state the general form of $P_N(x)$, the $N$th-order approximation of a function $f(x)$, centered at some $x=a$, assuming we know $f(a)$, $f'(a)$, $\cdots$, $f^{(N)}(a)$.
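\medskip
Before stating the general pattern, it may help to see the four approximations derived so far side by side numerically. The following Python sketch is only an illustration; we take $f(x)=e^x$ and $a=0$ because every derivative of $e^x$ at $0$ equals $1$, which keeps the coefficients simple, and we evaluate at the arbitrary point $x=0.5$. Each higher-order approximation visibly shrinks the error.

\begin{verbatim}
import math

# The four approximations derived so far, written out for f(x) = e^x and
# a = 0 (every derivative of e^x at 0 equals 1, so the coefficients are easy).
f = math.exp
P = [
    lambda x: 1.0,                                   # P_0(x) = f(a)
    lambda x: 1.0 + x,                               # P_1(x)
    lambda x: 1.0 + x + x**2 / 2,                    # P_2(x)
    lambda x: 1.0 + x + x**2 / 2 + x**3 / (2 * 3),   # P_3(x)
]

x = 0.5
for N, PN in enumerate(P):
    print(f"P_{N}({x}) = {PN(x):.6f}   error = {f(x) - PN(x):.6f}")
\end{verbatim}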
\newpage\subsection{Derivation of $P_N(x)$\label{SubsectionForDerivP_N}}
We will find the general $P_N(x)$ by an induction proof. First we look at the forms of $P_N(x)$ for $N=0,1,2,3$ and try to deduce a pattern:
\begin{align*} P_0(x)&=f(a),\\ P_1(x)&=f(a)+f'(a)(x-a),\\ P_2(x)&=f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2,\\ P_3(x)&=f(a)+f'(a)(x-a)+\frac12f''(a)(x-a)^2+\frac1{2\cdot3}f'''(a)(x-a)^3. \end{align*}
Recalling that $0!=1$, $1!=1$, $2!=2$ and $3!=6=2\cdot3$, we will conjecture that
\begin{itemize}
\item assuming we know $f(a)$, $f'(a)$, $f''(a)$, $\cdots$, $f^{(N)}(a)$, and
\item assuming $f^{(N)}(x)$ is approximately constant,
\end{itemize}
then $P_N(x)$, constructed as before, is in general of the form
\begin{align} P_N(x)&=\frac{f(a)}{0!}+\frac{f'(a)}{1!}(x-a)^1+\frac{f''(a)}{2!}(x-a)^2 +\cdots+\frac{f^{(N)}(a)}{N!}(x-a)^N\notag\\ &=\sum_{k=0}^N\frac{f^{(k)}(a)}{k!}(x-a)^k.\label{DefOfP_N(x)}\end{align}
Here we use the convention that $(x-a)^0=1$ and $f^{(0)}(x)=f(x)$ (that is, we have taken zero derivatives). For the induction proof we proceed as follows:
\begin{enumerate}
\item (\ref{DefOfP_N(x)}) is true for $N=0,1,2,3$. This was proved in the previous subsections. (Actually, for this step, knowing that the case $N=0$ is true is sufficient.)
\item Prove (\ref{DefOfP_N(x)}) holds for $N=n$ $\implies$ (\ref{DefOfP_N(x)}) holds for $N=n+1$. This will prove the conjecture, since $P_3$ ``bootstraps'' $P_4$, which ``bootstraps'' $P_5$, and so on. So for our proof, we assume (\ref{DefOfP_N(x)}) holds for $N=n$, and show that it follows that (\ref{DefOfP_N(x)}) holds for $N=n+1$. A key observation is that if we make the assumption that we know $f(a)$, $f'(a)$, $\cdots$, $f^{(n+1)}(a)$ and assume $f^{(n+1)}(t)$ is approximately constant for $t$ between $a$ and $x$, then when we integrate $n$ times from $t=a$ to $t=x$, we get
\begin{equation} f'(x)\approx f'(a)+f''(a)(x-a)+\frac{1}{2!}f'''(a)(x-a)^2 +\cdots+\frac1{n!}f^{(n+1)}(a)(x-a)^{n}. \end{equation}
To see this, note the following.
\begin{enumerate}
\item Assuming $f(a)$, $f'(a)$, $\cdots$, $f^{(n+1)}(a)$ are known and $f^{(n+1)}(x)$ is approximately constant contains the assumptions that $f'(a)$, $(f')'(a)$, $(f')''(a)$, $\cdots$, $(f')^{(n)}(a)$ are known and $(f')^{(n)}(x)$ is approximately constant. In other words, the assumptions needed to construct the $n$th-order approximation for $f'(x)$ are present.
\item Constructing the $n$th-order approximation for $f'(x)$ would be accomplished by $n$ applications of Lemma~\ref{alttaylemma}, yielding (since we assume the conjecture true for $N=n$)
$$f'(x)\approx\sum_{k=0}^n\frac{(f')^{(k)}(a)}{k!}(x-a)^k.$$
\item Integrating again would give us
\begin{align*} f(x)&=f(a)+\int_a^xf'(t)\,dt\\ &\approx f(a)+\int_a^x\sum_{k=0}^n\frac{f^{(k+1)}(a)}{k!}(t-a)^k\,dt\\ &=f(a)+\left.\left[\sum_{k=0}^n\frac{f^{(k+1)}(a)}{k!\cdot(k+1)}(t-a)^{k+1} \right]\right|_a^x\\ &=f(a)+\sum_{k=0}^n\frac{f^{(k+1)}(a)}{(k+1)!}(x-a)^{k+1}\\ &=f(a)+\frac{f'(a)}{1!}(x-a)^1+\frac{f''(a)}{2!}(x-a)^2 +\cdots+\frac{f^{(n+1)}(a)}{(n+1)!}(x-a)^{n+1}\\ &=\sum_{k=0}^{n+1}\frac{f^{(k)}(a)}{k!}(x-a)^k,\text{ q.e.d.} \end{align*}
\end{enumerate}
\end{enumerate}
There is a simple real-world motivation for this kind of approach. Suppose a passenger on a train wishes to know approximately where the train is. At some time $t_0$, he passes the engineer's compartment and sees the mile marker $s_0$ out the front window. He also sees the speedometer reading $v_0$. If the train is not accelerating or decelerating noticeably, he can follow his watch and expect the train to move approximately $v_0(t-t_0)$ in the time interval $[t_0,t]$.
In other words,
\begin{equation}s\approx s_0+v_0(t-t_0).\end{equation}
On the other hand, perhaps he feels some acceleration, as the train leaves an urban area, for instance. If the engineer has an acceleration indicator, and it reads $a_0$ at time $t_0$, then the passenger could assume that the acceleration will be constant for a while, and use
\begin{equation}s\approx s_0+v_0(t-t_0)+\frac12a_0(t-t_0)^2. \label{2nd-OrderApproxOfPosition}\end{equation}
If our passenger can even compute how $s''$ is changing, then assuming that change is at a constant rate, i.e., that $s'''(t)\approx s'''(t_0)$, we can go another order higher and claim\footnotemark
\begin{equation} s\approx s_0+v_0(t-t_0)+\frac12a_0(t-t_0)^2+\frac16s'''(t_0)(t-t_0)^3. \label{3rd-OrderApproxOfPosition}\end{equation}
\footnotetext{%
%%% FOOTNOTE
Notice that if $s''$ were truly constant, then (\ref{2nd-OrderApproxOfPosition}) would be exact and not an approximation. Similarly, if $s'''$ were truly constant, then (\ref{3rd-OrderApproxOfPosition}) would be exact.
%%% END FOOTNOTE
}
Indeed this will likely be the best estimate thus far when $|t-t_0|$ is small (and $s'''$ is still relatively constant). However, this latest approximation is a degree-three polynomial, and will therefore act like one as $|t|$ (and therefore $|t-t_0|$) gets large, so we always have to be aware of the range of $t$ for which the approximation is accurate.
\newpage
\subsection{Taylor Polynomials: Definition, Properties and Examples \label{DefAndExamplesOfTaylorPolynomials}}
The approximations $P_0(x)$, $P_1(x)$, $P_2(x)$ and so on are also known as {\it Taylor Polynomials} of order 0, 1, 2 and so on. We now record this renaming with the following definition.
\begin{definition} The {\bf $N$th order Taylor Polynomial} for the function $f(x)$ centered at the point $a$, where $f(a)$, $f'(a)$, $\cdots$, $f^{(N)}(a)$ all exist, is given by\,\footnotemark
\footnotetext{We normally do not bother to write the factors $\frac1{0!}$ and $\frac1{1!}$ in the first two terms, since $0!=1!=1$. We also use the convention that $f^{(0)}=f$, $f^{(1)}=f'$, $f^{(2)}=f''$, etc.}
\begin{align}P_N(x)=\sum_{n=0}^N\frac{f^{(n)}(a)(x-a)^n}{n!} \notag =&f(a)+f'(a)(x-a)+\frac{f''(a)(x-a)^2}{2!}+\frac{f'''(a)(x-a)^3}{3!}\\ &+\cdots+ \underbrace{\frac{f^{(n)}(a)(x-a)^n}{n!}}_{\text{``$n${th'' term\footnotemark}}}+\cdots+ \frac{f^{(N)}(a)(x-a)^N}{N!}.\label{TaylorPolynomial} \end{align}
\end{definition}
\bigskip
\footnotetext{To be pedantic, if we count the $n=0$ term as the ``first term'' then this is the $(n+1)$\,st term.}
The above formula (\ref{TaylorPolynomial}) should be committed to memory.
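\medskip
Formula (\ref{TaylorPolynomial}) is also easy to compute with. The Python sketch below is a minimal illustration (the function name \texttt{taylor\_poly} and its inputs are our own choices): it evaluates $P_N(x)$ from a list of the values $f(a),f'(a),\dots,f^{(N)}(a)$, anticipating the first worked example below, where $f(x)=e^x$ and $a=0$.

\begin{verbatim}
import math

def taylor_poly(derivs, a, x):
    """Evaluate P_N(x) = sum_{n=0}^{N} f^(n)(a) (x - a)^n / n!, where
    derivs = [f(a), f'(a), ..., f^(N)(a)]."""
    return sum(d * (x - a)**n / math.factorial(n)
               for n, d in enumerate(derivs))

# For f(x) = e^x and a = 0 every derivative equals 1, so P_5(1) should be
# 1 + 1 + 1/2! + ... + 1/5! = 2.71666..., close to e = 2.71828...
print(taylor_poly([1, 1, 1, 1, 1, 1], 0.0, 1.0))
print(math.e)
\end{verbatim}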
Before we continue to explicit examples, we should note some important---indeed defining---properties of the Taylor Polynomials $P_N(x)$, centered at $x=a$.
\begin{theorem} If $f(x)$ is $N$-times differentiable at $x=a$, then $P_N(x)$, as defined by {\rm (\ref{TaylorPolynomial}),} satisfies:
\begin{alignat*}{2} P_N(a)&=f(a)\\ P_N'(a)&=f'(a)\\ P_N''(a)&=f''(a)\\ &\ \vdots\\ P_N^{(N-1)}(a)&=f^{(N-1)}(a)\\ P_N^{(N)}(x)&=f^{(N)}(a)&\qquad& \text{(i.e., } P_N^{(N)}(x)\text{ is constant)}\\ P_N^{(m)}(x)&=0 &&\text{ for all }m\in\left\{N+1,N+2,N+3,\cdots\right\} \end{alignat*}
\label{TheoremOnP_N^(K)(a)}
\end{theorem}
\begin{proof} First we note how the derivative of the $n$th term in our polynomial (\ref{TaylorPolynomial}) simplifies, assuming $n\ge1$:
\begin{align*} \frac{d}{dx}\left[\frac{f^{(n)}(a)(x-a)^n}{n!}\right] &=\frac{f^{(n)}(a)}{n!}\cdot n(x-a)^{n-1}\cdot\frac{d(x-a)}{dx} =\frac{f^{(n)}(a)}{(n-1)!\cdot n}\cdot n(x-a)^{n-1}\cdot1\\ &=\frac{f^{(n)}(a)}{(n-1)!}\cdot(x-a)^{n-1}. \end{align*}
We made use of the fact that $a$, $f^{(n)}(a)$ and $n!$ are all constants in the computation above. It is also important to note that any ``$(x-a)^0$'' term, i.e., any additive {\it constant} term, in what follows will have derivative zero. Now we demonstrate the computations in Theorem~\ref{TheoremOnP_N^(K)(a)}. To make the pattern clear, we assume here that $N>3$. In each of what follows, we take derivatives at each line, and evaluate at $x=a$.
\begin{alignat*}{4} P_N(x)&=\sum_{n=0}^N\frac{f^{(n)}(a)}{n!}(x-a)^n &&\implies&P_N(a)&=\frac{f^{(0)}(a)}{0!}&&=f(a)\\ P_N'(x)&=\sum_{n=1}^N\frac{f^{(n)}(a)}{(n-1)!}(x-a)^{n-1} &&\implies&P_N'(a)&=\frac{f^{(1)}(a)}{0!}&&=f'(a)\\ P_N''(x)&=\sum_{n=2}^N\frac{f^{(n)}(a)}{(n-2)!}(x-a)^{n-2} &&\implies&P_N''(a)&=\frac{f^{(2)}(a)}{0!}&&=f''(a)\\ P_N'''(x)&=\sum_{n=3}^N\frac{f^{(n)}(a)}{(n-3)!}(x-a)^{n-3} &&\implies&P_N'''(a)&=\frac{f^{(3)}(a)}{0!}&&=f'''(a)\\ &\ \ \vdots&&\ \ \ \ \vdots&&\ \ \vdots&&\ \ \vdots\\ P_N^{(N-1)}(x)&=\sum_{n=N-1}^N\frac{f^{(n)}(a)}{(n-(N-1))!}(x-a)^{n-(N-1)}\\ &=\frac{f^{(N-1)}(a)}{0!}+\frac{f^{(N)}(a)}{1!}(x-a) &&\implies &P_N^{(N-1)}(a)&=f^{(N-1)}(a)\\ P_N^{(N)}(x)&=f^{(N)}(a)&&\implies&P_N^{(N)}(a)&=f^{(N)}(a)\\ P_N^{(m)}(x)&=0,\quad m\in\{N+1,N+2,N+3,\cdots\},&&&\text{q.e.d.} \end{alignat*}
\end{proof}
The significance of the theorem is that $P_N(x)$---which was constructed by assuming we know $f(a)$, $f'(a)$, $\cdots$, $f^{(N)}(a)$ and that $f^{(N)}(x)\approx f^{(N)}(a)$---is a polynomial that agrees with $f(x)$ in height, slope, second derivative, and so on up to the $N$th derivative at $x=a$. Being an $N$th-degree polynomial, the $(N+1)$st and higher derivatives of $P_N(x)$ are all zero. Next we look at some examples.
\bex Find $P_5(x)$ at $x=0$ for the function $f(x)=e^x$.
\medskip
\underline{\bf Solution:} We first construct the following chart, with $a=0$.
\begin{alignat*}{3} f(x)&=e^x&&\implies&f(0)&=1\\ f'(x)&=e^x&&\implies&f'(0)&=1\\ f''(x)&=e^x&&\implies&f''(0)&=1\\ f'''(x)&=e^x&&\implies&f'''(0)&=1\\ f^{(4)}(x)&=e^x&&\implies&f^{(4)}(0)&=1\\ f^{(5)}(x)&=e^x&&\implies&f^{(5)}(0)&=1\end{alignat*} Now, according to our definition {\rm (\ref{TaylorPolynomial}), } $$P_5(x)=f(0)+f'(0)(x-0)+\frac{f''(0)(x-0)^2}{2!}+\frac{f'''(0)(x-0)^3}{3!} +\frac{f^{(4)}(0)(x-0)^4}{4!}+\frac{f^{(5)}(0)(x-0)^5}{5!}$$ $$=1+1x+\frac1{2!}x^2+\frac1{3!}x^3+\frac1{4!}x^4+\frac1{5!}x^5.$$ \begin{figure} \begin{multicols}{2}\begin{center} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{4}% {1} \rput(3.5,3){$P_0(x)$} \end{pspicture} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{4}% {1 x add} \rput(4,8){$P_1(x)$} \end{pspicture} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{4}% {1 x add x dup mul 2 div add} \rput(4,15){$P_2(x)$} \end{pspicture} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{4}% {1 x add x dup mul 2 div add % x 3 exp 6 div add} \rput(4,26){$P_3(x)$} \end{pspicture} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{4}% {1 x add x dup mul 2 div add % x 3 exp 6 div add x 4 exp 24 div add} \rput(4.3,36){$P_4(x)$} \end{pspicture} \begin{pspicture}(-3,-1.5)(3,5) \psset{xunit=.75cm,yunit=.125cm} \psaxes[Dy=10]{<->}(0,0)(-4,-8)(4,40) \psplot[linewidth=2pt,plotpoints=1000]% {-4}{3.6888794}{2.718281828 x exp} \psplot[plotpoints=1000]{-4}{3.8}% {1 x add x dup mul 2 div add % x 3 exp 6 div add x 4 exp 24 div add x 5 exp 120 div add} \rput(4.5,37){$P_5(x)$} \end{pspicture} \vfill \end{center} \end{multicols} \caption{Graphs of $y=e^x$ and the Taylor Polynomial approximations $P_0(x)$--$P_5(x)$.} \label{GraphsOfTaylorsForE^X}\end{figure} This is the simplest polynomial which matches the height, slope, ``concavity,'' third derivative, fourth derivative and fifth derivative of $e^x$ at $x=0$. Thus we expect $P_5(x)$ to approximate the behavior of $e^x$ as long as $x$ is reasonably close to $a=0$. The pattern for finding $P_N(x)$, where $x=0$ and $f(x)=e^x$ seems clear. If we desired $P_6(x)$, we would simply add $\frac1{6!}x^6$. In fact, assuming $f^{(N+1)}(a)$ exists, we always have the following recursion relationship: \begin{equation}P_{N+1}(x)=P_N(x)+\frac{f^{(N+1)}(a)(x-a)^{N+1}}{(N+1)!}. \label{RecursionForTaylorPolys}\end{equation} It is interesting to compare how the graphs of $P_0(x), P_1(x),\cdots, P_5(x)$ approximate the graph of $f(x)=e^x$, as shown in Figure~\ref{GraphsOfTaylorsForE^X}, page~\pageref{GraphsOfTaylorsForE^X} (with $f(x)=e^x$ and $a=0$). \eex \bex Find $P_3(x)$ at $a=1$ if $f(x)=2x^3-9x^2+5x+11.$ \medskip \underline{\bf Solution:} Again we construct a chart. 
$$\begin{array}{rclrcl} f(x)&=&2x^3-9x^2+5x+11\qquad&f(1)&=&9\\ f'(x)&=&6x^2-18x+5&f'(1)&=&-7\\ f''(x)&=&12x-18&f''(1)&=&-6\\ f'''(x)&=&12&f'''(1)&=&12 \end{array}$$
Now
$$P_3(x)=f(1)+f'(1)(x-1)+\frac{f''(1)(x-1)^2}{2!} +\frac{f'''(1)(x-1)^3}{3!}$$
$$=9-7(x-1)+\frac{-6(x-1)^2}{2!}+\frac{12(x-1)^3}{3!}$$
$$=9-7(x-1)-3(x-1)^2+2(x-1)^3.$$
\eex
This is a trivial, yet important kind of example, for if we expanded out the last line above in powers of $x$ we would get back the original polynomial, which shows that the simplest polynomial matching this function and its first three derivatives at $x=1$ is the polynomial itself. Furthermore, we can see from our chart that $f^{(4)}(x)=0$, $f^{(5)}(x)=0$, etc., and so $P_4=P_5=\cdots=P_3$. We will enshrine this result in the following theorem:
\begin{theorem} Suppose $f(x)$ is an $N$th-degree polynomial, i.e.,
\begin{equation}f(x)=A_Nx^N+A_{N-1}x^{N-1}+\cdots+A_1x+A_0.\end{equation}
Then regardless of $a\in\Re$, we have
\begin{equation} (\forall m\ge N)\left[P_m(x)=f(x)\right].\label{PolyForATaylorTheorem} \end{equation}
\end{theorem}
\begin{proof} We will prove this in stages. Throughout the proof we abbreviate $P_N$ by $P$.
\begin{enumerate}[(1)]
%1
\item An important general observation we will use repeatedly is the following:
\begin{equation} (\forall x\in\Re)[g'(x)=h'(x)]\iff (\exists C)[g(x)-h(x)=C]. \label{ObservationForTaylorTheoremProofForG'=H'}\end{equation}
In other words, if two functions have the same derivative functions, then the original two functions differ by a constant.
%2
\item Since $f$ and $P$ are both $N$th-degree polynomials, we have $f^{(N)}(x)$ and $P^{(N)}(x)$ are constants.
%3
\item By Theorem~\ref{TheoremOnP_N^(K)(a)}, $f^{(N)}(a)=P^{(N)}(a)$.
%4
\item From (2) and (3), we have
\begin{equation}P^{(N)}(x)=P^{(N)}(a)=f^{(N)}(a)=f^{(N)}(x). \end{equation}
Thus $P^{(N)}(x)=f^{(N)}(x).$
%5
\item By (1), we can thus conclude that $P^{(N-1)}(x)$ and $f^{(N-1)}(x)$ differ by a constant.
%6
\item Since $P^{(N-1)}(a)=f^{(N-1)}(a)$, by (5) we must have $P^{(N-1)}(x)=f^{(N-1)}(x)$. In other words, since $P^{(N-1)}(x)$ and $f^{(N-1)}(x)$ differ by a constant, and since $P^{(N-1)}(a)-f^{(N-1)}(a)=0$, the constant referred to in (5) must be zero.
%7
\item The argument above can be repeated to get $P^{(N-2)}(x)=f^{(N-2)}(x)$, and so on, until finally we indeed get $P'(x)=f'(x)$.
%8
\item The last step is the same. From (1), $P$ and $f$ differ by a constant, but since $P(a)=f(a)$, that constant must be zero, so $P(x)-f(x)=0$, i.e., $P(x)=f(x)$.
\end{enumerate}
Finally, for $m>N$ the terms of $P_m(x)$ beyond those of $P_N(x)$ all contain a factor $f^{(k)}(a)$ with $k>N$, which is zero for our polynomial $f$, so $P_m(x)=P_N(x)=f(x)$ as well.
\end{proof}
It is important that the original function above was a polynomial, or else the conclusion is false. The theorem is useful for both analytical and algebraic reasons. If we want to expand a polynomial (\ref{PolyForATaylorTheorem}) in powers of $x-a$ (instead of $x$), then we can just compute $P_N(x)$ centered at $x=a$. On the other hand, we can in principle also use the theorem to simply re-center any polynomial.
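\medskip
In fact the re-centering can be mechanized. The following Python sketch is only an illustration (the helper routines and their names are our own): given the coefficients of a polynomial in powers of $x$, it computes the coefficients $f^{(n)}(a)/n!$ in powers of $x-a$, reproducing the expansion of $2x^3-9x^2+5x+11$ about $a=1$ found above.

\begin{verbatim}
import math

def recenter(coeffs, a):
    """Given coeffs [A_0, A_1, ..., A_N] of a polynomial in powers of x,
    return its coefficients in powers of (x - a), i.e. f^(n)(a)/n!."""
    def derivative(c):
        return [k * c[k] for k in range(1, len(c))]
    def evaluate(c, x):
        return sum(ck * x**k for k, ck in enumerate(c))
    new, c = [], list(coeffs)
    for n in range(len(coeffs)):
        new.append(evaluate(c, a) / math.factorial(n))
        c = derivative(c)
    return new

# f(x) = 2x^3 - 9x^2 + 5x + 11 about a = 1 (the example above):
print(recenter([11, 5, -9, 2], 1.0))   # [9.0, -7.0, -3.0, 2.0]
\end{verbatim}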
\bex Write the following polynomial in powers of $x$: $f(x)=(x+5)^4$.
\underline{Solution}: We can use the binomial expansion (with Pascal's Triangle, for instance) for this, but we can also use the Taylor Polynomial centered at $a=0$:
$$\begin{array}{rclrcl} f(x)&=&(x+5)^4\qquad&f(0)&=&625\\ f'(x)&=&4(x+5)^3& f'(0)&=&4\cdot5^3\\ f''(x)&=&4\cdot3(x+5)^2&f''(0)&=&4\cdot3\cdot5^2\\ f'''(x)&=&4\cdot3\cdot2(x+5)&f'''(0)&=&4\cdot3\cdot2\cdot5\\ f^{(4)}(x)&=&4\cdot3\cdot2\cdot1&f^{(4)}(0)&=&4\cdot3\cdot2\cdot1\\ f^{(m)}(x)&=&0&&&\text{ any }m>4 \end{array}$$
\begin{align*} P_4(x)&=f(0)+f'(0)x+\frac{f''(0)x^2}{2!}+\frac{f'''(0)x^3}{3!} +\frac{f^{(4)}(0)x^4}{4!}\\ &=5^4+4\cdot5^3x+\frac{4\cdot3\cdot5^2x^2}{2!} +\frac{4\cdot3\cdot2\cdot5x^3}{3!}+\frac{4\cdot3\cdot2\cdot1x^4}{4!}\\ &=625+500x+150x^2+20x^3+x^4.\end{align*}
Because this is $P_4(x)$ for a fourth-degree polynomial function, it equals that polynomial function, i.e.,
$$(x+5)^4=625+500x+150x^2+20x^3+x^4.$$
\eex
\newpage
\bex Consider the function $f(x)=\sqrt[3]{x}$, with $a=27$.
\begin{description}
\item a. Calculate $P_1(x)$, $P_2(x)$, $P_3(x)$.
\item b. Use these to approximate $\sqrt[3]{26}$.
\item c. Compare these to the actual value of $\sqrt[3]{26}$, as determined by calculator.
\end{description}
\underline{Solution:} a. First we will construct a chart.
\begin{alignat*}{2} f(x)&=x^{1/3}\qquad&f(27)&=3\\ f'(x)&=\frac13x^{-2/3}\qquad&f'(27)&=\frac13\cdot\frac1{9}=\frac1{27}\\ f''(x)&=-\,\frac29x^{-5/3}\qquad&f''(27)&= -\,\frac29\cdot\frac1{243}= -\,\frac2{2187}\\ f'''(x)&=\frac{10}{27}x^{-8/3}\qquad &f'''(27)&=\frac{10}{27}\cdot\frac1{6561} =\frac{10}{177,147}\end{alignat*}
Thus,
\begin{align*} P_1(x)&=3+\frac1{27}(x-27)\\ P_2(x)&=3+\frac1{27}(x-27)+\frac{-\,\frac2{2187}}2(x-27)^2\\ &=3+\frac1{27}(x-27)-\frac1{2187}(x-27)^2\\ P_3(x)&=3+\frac1{27}(x-27)-\frac1{2187}(x-27)^2 +\frac{\left(\frac{10}{177,147}\right)}{3!}(x-27)^3\\ &=3+\frac1{27}(x-27)-\frac1{2187}(x-27)^2+\frac{10}{1,062,882}(x-27)^3 .\end{align*}
b. From these we get
\begin{align*} P_1(26)&=3+\frac1{27}(26-27)=3+\frac1{27}(-1)=3-\frac1{27}=\frac{80}{27} \approx2.962963\\ P_2(26)&=3+\frac1{27}(-1)-\frac1{2187}(-1)^2=\frac{6479}{2187} \approx2.9625057 \\ P_3(26)&=P_2(26)+ \frac{10}{1,062,882}(-1)^3=\frac{3,148,784}{1,062,882} \approx2.9624963.\end{align*}
c. The actual value (to 8 digits) is $\sqrt[3]{26}\approx 2.9624961.$ The errors $R_1(26)$, $R_2(26)$ and $R_3(26)$ in each of the above approximations are respectively
\begin{align*} R_1(26)&=\sqrt[3]{26}-P_1(26)\approx2.9624961-2.962963=-0.0004669\\ R_2(26)&=\sqrt[3]{26}-P_2(26)\approx2.9624961-2.9625057=-0.0000096\\ R_3(26)&=\sqrt[3]{26}-P_3(26)\approx2.9624961-2.9624963=-0.0000002. \end{align*}
Thus each additional term improves the estimate noticeably. For other functions the improvement can be more or less dramatic. In Section \ref{AccuracyOfPN} we will state the form of the error, or {\it remainder} $R_N(x)$, and thus be able to explore the accuracy of $P_N(x)$.
\eex
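\medskip
The arithmetic in parts b and c is easy to check by machine. The short Python sketch below simply re-evaluates the three approximations from the derivative values in the chart above; it is meant only as a check, not as part of the solution.

\begin{verbatim}
# Check of the cube-root approximations above.
a, x = 27.0, 26.0
f0, f1, f2, f3 = 3.0, 1/27, -2/2187, 10/177147   # f, f', f'', f''' at 27

P1 = f0 + f1*(x - a)
P2 = P1 + f2/2 * (x - a)**2
P3 = P2 + f3/6 * (x - a)**3
exact = 26 ** (1/3)

for name, val in [("P_1", P1), ("P_2", P2), ("P_3", P3)]:
    print(f"{name}(26) = {val:.7f}   remainder = {exact - val:.7f}")
\end{verbatim}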
\bex Find $P_5(x)$ at $a=0$ for $f(x)=\sin x$.
\medskip
\underline{\bf Solution:} Again we construct the chart.
\begin{alignat*}{3} f(x)&=\sin x &\quad&\implies& f(0)&=0\\ f'(x)&=\cos x &&\implies& f'(0)&=1\\ f''(x)&=-\sin x&&\implies& f''(0)&=0\\ f'''(x)&=-\cos x&&\implies& f'''(0)&=-1\\ f^{(4)}(x)&=\sin x&&\implies& f^{(4)}(0)&=0\\ f^{(5)}(x)&=\cos x&&\implies& \quad f^{(5)}(0)&=1, \end{alignat*}
from which we get
\begin{eqnarray*} P_5(x)&=&\ds{0+1x+\frac{0x^2}{2!}+\frac{-1x^3}{3!}+\frac{0x^4}{4!} +\frac{1x^5}{5!}}\\ &=&\ds{x-\frac{x^3}{3!}+\frac{x^5}{5!}}.\end{eqnarray*}
From this chart we can see an obvious pattern where
$$P_6(x)=P_5(x)+0=P_5(x),$$
$$P_8(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+0=P_7(x),$$
and so on.
\eex
This answers the question of how calculators compute $\sin x$: by means of just such a Taylor Polynomial. It also hints at an answer for why physicists often simplify a problem by replacing $\sin x$ with $x$: That is, the simplest polynomial which matches the height, slope and concavity of $\sin x$ at $x=0$ is a very simple function indeed, namely $P_2(x)=x$. See Figures \ref{sin13}--\ref{sinall} to compare $\sin x$ to $P_1(x)$, $P_3(x)$, $\cdots$, $P_{15}(x)$. Clearly the polynomials do an increasingly better job of approximating $\sin x$ as we add more terms. On the other hand, as $|x|$ gets large these approximations eventually behave like the polynomials they are in the sense that $|P_n(x)|\to\infty$ as $|x|\to\infty$. This is not alarming, since it is the {\it local} behavior, in this case near $x=0$, that we exploit when we use polynomials to approximate functions. It is worth remembering, however, so that we do not attempt to use a Taylor Polynomial to approximate a function too far from the center, $x=a$, of the Taylor Polynomial.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%% Figures for \sin x %%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{center}
\begin{pspicture}(-7,-3.7)(7,3.7)
\psset{xunit=.7cm,yunit=.7cm}
\psaxes[Dx=10]{<->}(0,0)(-10,-4.8)(10,4.8)
\psline(-9.424777961,-.15)(-9.424777961,.15)
\rput(-9.424777961,-.35){$-3\pi$}
\psline(-6.283185307,-.15)(-6.283185307,.15)
\rput(-6.283185307,-.35){$-2\pi$}
\psline(-3.141592654,-.15)(-3.141592654,.15)
\rput(-3.141592654,-.35){$-\pi$}
\psline(3.141592654,-.15)(3.141592654,.15)
\rput(3.141592654,-.35){$\pi$}
\psline(6.283185307,-.15)(6.283185307,.15)
\rput(6.283185307,-.35){$2\pi$}
\psplot[plotpoints=2000]{-10}{10}{x 3.1415926535 div 180 mul sin}
\psplot[plotpoints=2000,linewidth=.5pt]{-4.8}{4.8}{x}
\psplot[plotpoints=2000,linewidth=.5pt]%
{-3.7}{3.7}{x x dup dup mul mul 6 div sub}
\rput(-5.3,4){$P_3(x),P_4(x)$}
\rput(6,4){$P_1(x),P_2(x)$}
\end{pspicture}
\end{center}
\caption{$\sin x$, $P_1(x),P_2(x)=x$, and $P_3(x),P_4(x)=x-\frac{x^3}{3!}$.}
\label{sin13}\end{figure}
\begin{figure}
\begin{center}
\begin{pspicture}(-7,-3.7)(7,3.7)
\psset{xunit=.7cm,yunit=.7cm}
\psaxes[Dx=10]{<->}(0,0)(-10,-4.8)(10,4.8)
\psline(-9.424777961,-.15)(-9.424777961,.15)
\rput(-9.424777961,-.35){$-3\pi$}
\psline(-6.283185307,-.15)(-6.283185307,.15)
\rput(-6.283185307,-.35){$-2\pi$}
\psline(-3.141592654,-.15)(-3.141592654,.15)
\rput(-3.141592654,-.35){$-\pi$}
\psline(3.141592654,-.15)(3.141592654,.15)
\rput(3.141592654,-.35){$\pi$}
\psline(6.283185307,-.15)(6.283185307,.15)
\rput(6.283185307,-.35){$2\pi$}
\psplot[plotpoints=2000]{-10}{10}{x 3.1415926535 div 180 mul sin}
\psplot[plotpoints=2000,linewidth=.5pt]%
{-4.515}{4.515}%
{x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add}
\rput(6,4){$P_5(x),P_6(x)$}
\end{pspicture} \end{center} \caption{$\sin x$ and $P_5(x),P_6(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}$.} \label{sin5}\end{figure} \begin{figure} \begin{center} \begin{pspicture}(-7,-3.7)(7,3.7) \psset{xunit=.7cm,yunit=.7cm} \psaxes[Dx=10]{<->}(0,0)(-10,-4.8)(10,4.8) \psline(-9.424777961,-.15)(-9.424777961,.15) \rput(-9.424777961,-.35){$-3\pi$} \psline(-6.283185307,-.15)(-6.283185307,.15) \rput(-6.283185307,-.35){$-2\pi$} \psline(-3.141592654,-.15)(-3.141592654,.15) \rput(-3.141592654,-.35){$-\pi$} \psline(3.141592654,-.15)(3.141592654,.15) \rput(3.141592654,-.35){$\pi$} \psline(6.283185307,-.15)(6.283185307,.15) \rput(6.283185307,-.35){$2\pi$} \psplot[plotpoints=2000]{-10}{10}{x 3.1415926535 div 180 mul sin} \psplot[plotpoints=2000,linewidth=.5pt]% {-4.9}{4.9}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add % x x 2 div mul x 3 div mul x 4 div mul x 5 div mul x 6 div mul x 7 div mul sub} \rput(6.5,-4){$P_7(x),P_8(x)$} \end{pspicture} \end{center} \caption{$\sin x$ and $P_7(x),P_8(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!} -\frac{x^7}{7!}$.} \label{sin7}\end{figure} \begin{figure} \begin{center} \begin{pspicture}(-7,-3.7)(7,3.7) \psset{xunit=.7cm,yunit=.7cm} \psaxes[Dx=10]{<->}(0,0)(-10,-4.8)(10,4.8) \psline(-9.424777961,-.15)(-9.424777961,.15) \rput(-9.424777961,-.35){$-3\pi$} \psline(-6.283185307,-.15)(-6.283185307,.15) \rput(-6.283185307,-.35){$-2\pi$} \psline(-3.141592654,-.15)(-3.141592654,.15) \rput(-3.141592654,-.35){$-\pi$} \psline(3.141592654,-.15)(3.141592654,.15) \rput(3.141592654,-.35){$\pi$} \psline(6.283185307,-.15)(6.283185307,.15) \rput(6.283185307,-.35){$2\pi$} \psplot[plotpoints=2000]{-10}{10}{x 3.1415926535 div 180 mul sin} \psplot[plotpoints=2000,linewidth=.5pt]{-4.8}{4.8}{x} \psplot[plotpoints=2000,linewidth=.5pt]% {-3.7}{3.7}{x x dup dup mul mul 6 div sub} \psplot[plotpoints=2000,linewidth=.5pt]% {-4.515}{4.515}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add} \psplot[plotpoints=2000,linewidth=.5pt]% {-4.9}{4.9}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add % x x 2 div mul x 3 div mul x 4 div mul x 5 div mul x 6 div mul x 7 div mul sub} \psplot[plotpoints=2000,linewidth=.5pt]% {-5.84}{5.84}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add % x x 2 div mul x 3 div mul x 4 div mul x 5 div mul x 6 div mul x 7 div mul sub x 9 exp 362880 div add} \psplot[plotpoints=2000,linewidth=.5pt]% {-6.5}{6.5}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add % x x 2 div mul x 3 div mul x 4 div mul x 5 div mul x 6 div mul x 7 div mul sub x 9 exp 362880 div add x 11 exp 39916800 div sub} \psplot[plotpoints=2000,linewidth=.5pt]% {-7.16}{7.16}% {x x dup dup mul mul 6 div sub x dup mul dup mul x mul 120 div add % x x 2 div mul x 3 div mul x 4 div mul x 5 div mul x 6 div mul x 7 div mul sub x 9 exp 362880 div add x 11 exp 39916800 div sub x 13 exp 6227020800 div add} \rput(3.55,-2){$P_3$} \rput(4.75,-2){$P_7$} \rput(6.5,-2){$P_{11}$} \rput(2.6,2){$P_1$} \rput(4.35,2){$P_5$} \rput(5.84,2){$P_9$} \rput(7.2,2){$P_{13}$} \end{pspicture} \end{center} \caption{$\sin x$ and $P_1(x)$, $P_3(x)$, $\cdots$, $P_{15}(x)$ (same as $P_2(x)$, $P_4(x)$, $\cdots$, $P_{16}(x)$, respectively)} \label{sinall}\end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%% End Figures for \sin x %%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \bigskip \bex {\bf (Application)} As already mentioned, physicists often take advantage 
of the second order approximation $\sin x\approx P_2(x)=0+x+0x^2,$ that is,
\begin{equation}\sin x\approx x\qquad\mathrm{for\ }|x|\mathrm{\ small.} \label{SinXApproxXForXSmallForPendulum}\end{equation}
The classic example is the modeling of the simple pendulum. See Figure \ref{Simple Pendulum}. Suppose a pendulum of mass $m$ is hanging from an always taut and straight string of length $l$ and negligible weight. Let $\theta$ be the angle the string makes with the downward vertical direction. We will take $\theta>0$ if $\theta$ represents a counterclockwise rotation, as is standard.
\begin{figure}
\begin{center}
\begin{pspicture}(-3,-2.5)(3,3)
%\psline(-3,-3)(3,-3)(3,3)(-3,3)(-3,-3)
\psline{->}(-1,3)(-1,-.46227766)
\psline{->}(-1,3)(0.637323862,-1.819181489)%(1,-3)
\psarc{->}(-1,3){1}{270}{288.43494882}
\rput(-.8,1.8){$\theta$}
\pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,0){.3}
\rput(0,0){$m$}
\psline{->}(0.09486833,-.284604989)(0.09486833,-2)
\rput{90}(-.15,-1.3){$\vec{F}=m\vec{g}$}
\psline{<-}(0.09486833,-2)(0.637323862,-1.819181489)
\rput{18.435}(.5,-2.1){$F_{\text{tan}}$}
\rput{-71.56505118}(.6,-1){$F_{\text{norm}}$}
\psarc{->}(0.09486833,-.284604989){1}{270}{288.43494882}
\psarc[linestyle=dashed](-1,3){3.46227766}{240}{300}
\rput(0.3,-1.5){$\theta$}
\rput(-1.3,1.){$l$}
\rput(-.6,-.25){$s$}
\end{pspicture}
\end{center}
\caption{Simple pendulum with a free body diagram. From trigonometry, recall that arc length is radius times angle, so here $s=l\theta$.}\label{Simple Pendulum}
%\end{picture}
\end{figure}
The component of velocity which is in the direction of motion of the pendulum is given by $\frac{ds}{dt}=\frac{d(l\theta)}{dt}=l\frac{d\theta}{dt}$, and the acceleration by its derivative, $\frac{d^2s}{dt^2}=\frac{d\,^2(l\theta)}{dt^2}=l\frac{d^2\theta}{dt^2}$. Now the force in the direction of the motion has magnitude $mg\sin\theta$, but is a restoring force, and is thus in the opposite direction of the angular displacement. It is not too difficult to see that this force is given by $-mg\sin\theta$, for $\theta\in\left[-\frac{\pi}2,\frac{\pi}2\right]$. Thus, setting mass times acceleration equal to the force in the direction of motion (Newton's second law), we get\footnotemark
\footnotetext{For those familiar with moments of inertia, the analog of $F=ma$ is $$N=I\alpha,$$ where $N$ is torque, $I$ is the moment of inertia, and $\alpha$ is the angular acceleration, in $\mathrm{rad/sec}^2$. Using the fact that, for this example, torque is also defined by $N=F_{\mathrm{tan}}l=-mgl\sin\theta$, we get the equations $$N=-mgl\sin\theta=ml^2\frac{d\,^2\theta}{dt^2},$$ giving equation (\ref{pendeqtn0}) after dividing by $l$.}
\begin{equation} ml\frac{d\,^2\theta}{dt^2}=-mg\sin\theta\label{pendeqtn0}\end{equation}
which simplifies to
\begin{equation} \frac{d\,^2\theta}{dt^2}=-\frac{g}l\sin\theta.\label{pendeqtn} \end{equation}
This is a relatively difficult differential equation to solve. However, if we assume $|\theta|$ is small, we can use $\sin\theta\approx\theta$ and instead solve the following equation which holds approximately true \footnotemark
\footnotetext{We should point out here that (\ref{*}) is an example of a {\it simple harmonic oscillator}, which is any physical system governed by an equation of the form $$Q''(t)=-\kappa Q(t),\qquad\kappa>0$$ ($\kappa$ being a constant) which has solution $$Q(t)=A\sin\sqrt{\kappa }\,t+B\cos\sqrt{\kappa}\, t,$$ and period $2\pi/\sqrt{\kappa}$. Examples include springs which are governed by Hooke's Law ${F}(s)=-ks$, where $k>0$ and $s=s(t)$.
(Recall $F=m\frac{d^2s}{dt^2}$.)}:
\begin{equation}\frac{d\,^2\theta}{dt^2} =-\frac{g}l\theta\label{*}\end{equation}
The solution to (\ref{*}) is
\begin{equation}\theta=A\sin\left(\sqrt{\frac{g}l}\cdot t\right) +B\cos\left(\sqrt{\frac{g}l}\cdot t\right).\label{**}\end{equation}
Here $A$ and $B$ are arbitrary constants depending on the initial ($t=0$) angular position and angular velocity of the pendulum. Notice that (\ref{**}) is periodic, with a period $\tau$ where $\tau=2\pi/\sqrt{g/l}$, i.e.,
\begin{equation}\tau=2\pi\sqrt{\frac{l}g}.\label{period}\end{equation}
That is the formula found in most physics texts for the period of a pendulum. However, it is based upon an approximation, albeit quite a good one for $|\theta|$ small.
\eex
\bigskip
\bigskip
\bex Let us find $P_6(x)$ where $f(x)=\cos x$ and $a=0$.
\medskip
\underline{\bf Solution:} We construct the table again:
\begin{alignat*}{3} f(x)&=\cos x&\quad&\implies& f(0)&=1\\ f'(x)&=-\sin x&&\implies&f'(0)&=0\\ f''(x)&=-\cos x&&\implies&f''(0)&=-1\\ f'''(x)&=\sin x&&\implies&f'''(0)&=0\\ f^{(4)}(x)&=\cos x&&\implies&f^{(4)}(0)&=1\\ f^{(5)}(x)&=-\sin x&&\implies&f^{(5)}(0)&=0\\ f^{(6)}(x)&=-\cos x&&\implies&\quad f^{(6)}(0)&=-1\end{alignat*}
Since the odd derivatives are zero at $x=0$, only the even-order terms appear, and we have
\begin{eqnarray*} P_6(x)&=&\ds{1+\frac{-1(x-0)^2}{2!}+\frac{1(x-0)^4}{4!}+\frac{-1(x-0)^6}{6!}}\\ &=&\ds{1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}.}\end{eqnarray*}
From this a pattern clearly emerges, and we could easily calculate
$$P_{14}(x)=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!} +\frac{x^8}{8!}-\frac{x^{10}}{10!}+\frac{x^{12}}{12!}-\frac{x^{14}}{14!}.$$
We might also point out that $P_{15}$ would be the same as $P_{14}$, since the odd-order terms are all zero.
\eex
\bex Find $P_5$ for $f(x)=\ln x$ with center $a=1$.
\medskip
\underline{\bf Solution:} First, the table is constructed as usual.
\begin{alignat*}{3} f(x)&=\ln x&&\implies&f(1)&=0\\ f'(x)&=x^{-1}&&\implies&f'(1)&=1\\ f''(x)&=-1x^{-2}&&\implies&f''(1)&=-1\\ f'''(x)&=2x^{-3}&&\implies& f'''(1)&=2\\ f^{(4)}(x)&=-3\cdot2x^{-4}&&\implies&f^{(4)}(1)&=-3\cdot2\\ f^{(5)}(x)&=4\cdot3\cdot2x^{-5}&&\implies&\quad f^{(5)}(1)&=4\cdot3\cdot2 \end{alignat*}
Now we construct $P_5$ from (\ref{TaylorPolynomial}).
$$P_5(x)=0+1(x-1)+\frac{-1(x-1)^2}{2!} +\frac{2(x-1)^3}{3!}+\frac{-3\cdot2(x-1)^4}{4!} +\frac{4\cdot3\cdot2(x-1)^5}{5!}.$$
Recalling the definition of factorials, in which $2!=2\cdot1$, $3!=3\cdot2\cdot1$, $4!=4\cdot3\cdot2\cdot1$, and $5!=5\cdot4\cdot3\cdot2\cdot1$, we see that the above simplifies to
$$P_5(x)=1(x-1)-\frac12(x-1)^2+\frac13(x-1)^3-\frac14(x-1)^4+\frac15 (x-1)^5.$$
It is not hard to see that $f^{(n)}(x)=(-1)^{n+1}(n-1)!x^{-n}$, and so $f^{(n)}(1)=(-1)^{n+1}(n-1)!.$ The obvious pattern which appears in $P_5$ should continue for $P_6$, $P_7$, etc. Thus we can calculate any $P_N(x)$ for this example:
$$P_N(x)=\sum_{n=1}^N\frac{(-1)^{n+1}(x-1)^n}n.$$
\label{ln example}\eex
\begin{center} \underline{\Large{\bf Exercises}} \end{center}
\begin{multicols}{2}
\begin{enumerate}
\item \label{geompartsums} If $f(x)=\ds{\frac1{1-x}}$, and $a=0$, show (remembering the chain rule where appropriate) that
$$P_5(x)=1+x+x^2+x^3+x^4+x^5=\sum_{n=0}^5x^n.$$
What do you suppose is the formula for $P_N(x)$?
\item Find $P_5(x)$ where $a=\pi$ and $f(x)=\sin x.$
\item Find $P_3(x)$ where $a=\ds{\frac\pi4}$ and $f(x)=\tan x$.
\item Show that (\ref{**}) is indeed a solution to (\ref{*}) by taking two time derivatives of each side of (\ref{**}), remembering to employ the chain rule where appropriate.
\end{enumerate}
\end{multicols}
\newpage
\section{Accuracy of $P_N(x)$\label{AccuracyOfPN}}
\bigskip
All of this makes for lovely pictures, but one usually needs some certainty regarding just how accurate $P_N(x)$ can be expected to be. Fortunately, there is an estimate on the {\it error} arising from replacing $f(x)$ with $P_N(x)$. This difference $f(x)-P_N(x)$ is also referred to as the {\it remainder} $R_N(x)$:
\begin{equation} R_N(x)=f(x)-P_N(x).\label{Remainder}\end{equation}
Perhaps the name ``remainder'' makes more sense if we rewrite (\ref{Remainder}) in the form
\begin{equation}f(x)=P_N(x)+R_N(x).\label{Remainder'}\end{equation}
Of course if we knew the {\it exact} value of $R_N(x)$, then by (\ref{Remainder'}) we would know $f(x)$ exactly, since we can always calculate $P_N(x)$ exactly with pencil and paper (after all, it is just a polynomial). So the best we can expect is to possibly have some estimate on the size of $R_N(x)$. This can often be accomplished by knowing the rough form of $R_N$, as is given in the following theorem.
\begin{theorem} {\bf(Taylor's Remainder Theorem)} Suppose that $f$, $f'$, $f''$, $\cdots$, $f^{(N)}$ and $f^{(N+1)}$ all exist and are continuous on the closed interval with endpoints $a$ and $x$. Then
\begin{equation}R_N(x)=\frac{f^{(N+1)}(z)(x-a)^{N+1}}{(N+1)!} \label{RemainderTheorem}\end{equation}
where $z$ is some (unknown) number between $a$ and $x$.
\end{theorem}
This could be rewritten
$$f(x)=P_N(x)+\frac{f^{(N+1)}(z)(x-a)^{N+1}}{(N+1)!}.$$
Thus, the remainder looks just like the next term to be added to construct $P_{N+1}(x)$, except that the term $f^{(N+1)}(a)$ is replaced by the unknown quantity $f^{(N+1)}(z)$. A general proof of Taylor's Remainder Theorem is beyond the scope of this textbook. However, in the exercises we can explore the first two cases and give some explanation for how it can be generalized. There are many situations where this theorem is useful.
\bex Suppose that $|x|<.75$. In other words, $-.75<x<.75.$ Then what is the possible error if we use the approximation $\ds{\sin x\approx x-\frac{x^3}{3!}+\frac{x^5}{5!}}$?
\medskip
\underline{\bf Solution:} Notice that we are asking what is the remainder for the Taylor Polynomial $P_6(x)$ (see Figure \ref{sin5}) where $f(x)=\sin x$ and $a=0$, if $|x|<.75.$ (Recall that, for $\sin x$, we have $P_5=P_6$ when $a=0$.) We will use the fact that $|\sin z|\le1$ and $|\cos z|\le 1$ no matter what the value of $z$. Thus
$$\left|R_6(x)\right|=\left|\frac{f^{(7)}(z)(x-0)^7}{7!}\right| =\left|\frac{-\cos z\cdot x^7}{7!}\right| =\frac1{7!}|\cos z|\cdot|x|^7 $$
$$\le \frac1{7!}\cdot1\cdot.75^7=0.00002648489. $$
This should be encouraging, since we have nearly five digits of accuracy from a polynomial with only three terms, when our angle is in the range $\pm.75\approx\pm43^\circ$.
\eex
\bex Suppose we want to use the approximation
$$e^x\approx1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \frac{x^4}{4!}.$$
\begin{description}
\item a. How accurate is this if $|x|<5$?
\item b. How accurate is this if $|x|<2$?
\item c. What if $|x|<1$?
\end{description}
\medskip
\underline{Solution:} Since the approximating polynomial is $P_4(x)$ with $a=0$, we are looking for a bound for
$$|R_4(x)|=\left|\frac{f^{(5)}(z)x^5}{5!}\right| =\left|\frac{e^zx^5}{5!}\right|=\frac1{120}e^z|x|^5.$$
a.
$|x|<5$: Now $z$ is between $0$ and $x$, and since the exponential function is increasing, the worst-case scenario is to have the greatest possible value for $z$ (which will be $x$ or $0$, whichever is greater). Since the greatest $x$ can be is 5, it is safe to use $e^z<e^5$. Thus,
$$|R_4(x)|=\frac1{120}e^z|x|^5<\frac1{120}e^5\cdot5^5 \approx 3865.$$
Thus we see the exponential is not necessarily approximated by $P_4(x)$ for the whole range $|x|<5$.
b. $|x|<2$: Now we have $z$ between $0$ and $x$, and $x$ between $-2$ and $2$, so it is only safe to assume $z<2$. Similar to the above, this gives
$$|R_4(x)|=\frac1{120}e^z|x|^5<\frac1{120}e^2\cdot2^5 \approx 1.97.$$
We see we have a much better approximation if $|x|<2$.
c. $|x|<1$: Here we can only assume $z<1$:
$$|R_4(x)|=\frac1{120}e^z|x|^5<\frac1{120}e^1\cdot1^5 \approx 0.02265.$$
There are several remarks which should be made about this example.
\begin{enumerate}
\item Notice that we ``begged the question,'' since we used calculations of $e^5$, $e^2$ and $e^1$ to approximate the error. This is all correct, but perhaps a strange thing to do since such quantities are exactly what we are trying to approximate with the Taylor Polynomial. But even with this problem, the polynomial is useful because it can be quickly calculated for the whole range $|x|<5$, $2$ or $1$ for some application, and the accuracy estimated using only $e^5$, $e^2$ or $e^1$, which are finitely many values. One way to avoid this philosophical problem entirely is to use $e^x<3^x$ for $x>0$, since $3^x$ is easier to calculate for the integers we used. For example, $e^5<3^5$. (However, we need to be careful, since $3^x<e^x$ if $x<0$. Here it would be fine to use $3^x$, since we were interested in a larger range of $x$ which included positive numbers. If only interested in $x\in(-5,0)$, for example, we might use $e^x<2^x$ there.)
\item Note that the error bound shrinks in parts a--c for two reasons:
\begin{enumerate}
\item $\left|f^{(5)}(z)\right|=e^z$ shrinks, since $z$ is more constrained.
\item $|x|^5$ shrinks, since the maximum $|x|$ is smaller.
\end{enumerate}
We benefit from both these factors when we shrink $|x|$.
\item If we truly needed more accuracy for $|x|<5$, we could take a higher-order Taylor Polynomial, such as $P_{15}(x)$, giving
$$|R_{15}(x)|=\frac1{16!}e^z|x|^{16} <\frac1{16!}e^5\cdot5^{16}\approx 1.1.$$
This might still seem like a large error, but it is relatively small considering $e^5\approx148$. If the error is still too large, consider $P_{20}(x)$, with
$$|R_{20}(x)|=\frac1{21!}e^z|x|^{21}<\frac1{21!}e^5\cdot5^{21} \approx0.0014. $$
When we increase the order of the Taylor Polynomial, we always have the benefit of a growing factorial term $(N+1)!$ in the denominator of the remainder. As long as the term $\left|f^{(N+1)}(z)\right|$ does not grow significantly, the factorial will dominate the exponential $|x-a|^{N+1}$.
\item Finally, the exponential will always increase faster as $x\to\infty$ than any polynomial (be it $P_N(x)$ for a fixed $N$ or any other polynomial), and ``flatten out'' like no polynomial (except the zero polynomial) as $x\to\,-\infty$, so it is really not a good candidate for approximation very far from zero. For this reason, most calculating devices have exponential tables (and hence log tables) built into their memories. This makes the ``calculation'' very fast and accurate, since it is not really a calculation but simply a look-up of the values. Calculating devices also use these ``log tables'' to compute products, quotients and powers the way earlier generations of students used slide rules and log tables.
\end{enumerate}
\eex
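\medskip
The bounds in parts a--c are deliberately pessimistic, since they use the worst allowed $z$ and the worst allowed $x$ simultaneously. The following Python sketch is only an illustration (the grid of sample points is an arbitrary choice of ours); it compares the worst error actually attained by $P_4(x)$ on each range with the corresponding bound.

\begin{verbatim}
import math

# Compare the actual error of P_4(x) = 1 + x + x^2/2! + x^3/3! + x^4/4!
# with the bound e^b * b^5 / 5! on |x| < b, for b = 5, 2, 1.
P4 = lambda x: sum(x**k / math.factorial(k) for k in range(5))

for b in [5.0, 2.0, 1.0]:
    xs = [-b + 2*b*i/1000 for i in range(1001)]
    worst = max(abs(math.exp(x) - P4(x)) for x in xs)
    bound = math.exp(b) * b**5 / math.factorial(5)
    print(f"|x| < {b}:  worst observed error = {worst:10.5f}   bound = {bound:10.5f}")
\end{verbatim}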
\newpage
\begin{center}{\Large\bf\underline{Exercises}}\end{center}
\begin{multicols}{2}
\begin{enumerate}
\item Find $P_N(x)$ and $R_N(x)$ for the following:
\begin{enumerate}
\item $f(x)=\sin x$, $a=\pi$, $N=5$
\item $f(x)=\sqrt x$, $a=1$, $N=3$
\item $f(x)=\frac1x$, $a=10$, $N=4$
\item $f(x)=e^x$, $a=0$, $N=9$.
\item $f(x)=\sec x$, $a=\pi$, $N=2$.
\item $f(x)=\ln x$, $a=e$, $N=3$.
\end{enumerate}
\item Explain why the series below converges, and to the limit claimed below:
$$e=1+1+\frac{1}{2!}+\frac{1}{3!}+\frac{1}{4!}+\cdots.$$
\item Many physics problems take advantage of the approximation $\tan x\approx x$ for $|x|$ small.
\begin{enumerate}
\item Conjecture on where this approximation comes from.
\item Estimate the error when $|x|<1,.1,.01$.
\end{enumerate}
\item Suppose we wanted to find a Taylor Polynomial for $f(x)=\sin x$, centered at $a=0$, with accuracy $\left|R_N(x)\right|\le10^{-10}$ valid for $-2\pi\le x\le2\pi$. Find the lowest-order Taylor Polynomial which guarantees that accuracy for that interval. (This may require some numerical experimentation with the estimates.)
\item Repeat the previous problem, but for $f(x)=e^x$ and the interval $|x|\le10$.
\end{enumerate}
\end{multicols}
\newpage
\section{Taylor/MacLaurin Series}
\bigskip
Now we come to the heart of the matter. Basically, the {\it Taylor Series} of a function $f$ which has all derivatives $f'$, $f''$, $\cdots$ existing at $a$, is the series we get when we let $N\to\infty$ in the expression for $P_N(x)$. The Taylor Series equals the function if and only if the remainder terms shrink to zero as $N\to\infty$:
\subsection{Validity of Taylor Series}
\begin{theorem}$\ds{\lim_{N\to\infty}R_N(x)=0\iff}$
\begin{equation}f(x)=\lim_{N\to\infty}P_N(x) = \sum_{n=0}^\infty\frac{f^{(n)}(a)(x-a)^n}{n!} \label{TaylorSeries}\end{equation}
\end{theorem}
\underline{Proof:} First we prove ($\Longleftarrow$). Assume $\ds{f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!}}$. Then
$$R_N(x)=f(x)-P_N(x) =\sum_{n=N+1}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!} \longrightarrow 0 \qquad\text{as }N\to\infty.\footnotemark$$
\footnotetext{ Recall that the ``tail end'' $\ds{\sum_{n=N+1}^\infty b_n}$ of a convergent series $\ds{\sum_{n=0}^\infty b_n}$ shrinks to zero as $N\to\infty$. See (???).}
Next we prove ($\Longrightarrow$). Assume $R_N(x)\longrightarrow0$ as $N\to\infty$. Then
$$\begin{array}{rrl} &\ds{f(x)-R_N(x)}&\ds{=\quad\sum_{n=0}^N\frac{f^{(n)}(a)(x-a)^n}{n!}}\\ \\ \implies&\ds{\lim_{N\to\infty}\left(f(x)-R_N(x)\right)}& \ds{=\quad\lim_{N\to\infty}\sum_{n=0}^N\frac{f^{(n)}(a)(x-a)^n}{n!}}\\ \\ \implies&\ds{f(x)-0}& \ds{=\quad\sum_{n=0}^\infty\frac{f^{(n)}(a)(x-a)^n}{n!}}, \text{ q.e.d.}\end{array} $$
The series appearing in Theorem~\ref{TaylorSeries} has the following name:
\begin{definition}Given that all derivatives of $f$ exist at $x=a$, the series
\begin{equation} \sum_{n=0}^\infty\frac{f^{(n)}(a)(x-a)^n}{n!} \label{TaylorSeriesDefined}\end{equation}
is called the {\bf Taylor Series} of $f(x)$ centered at $x=a$.\end{definition}
Just to be clear, Theorem~\ref{TaylorSeries} gives the criterion for the Taylor Series to equal the function:
\begin{equation} f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)(x-a)^n}{n!} \iff \lim_{N\to\infty}R_N(x)=0.\label{Taylor<>Remainder} \end{equation}
A special case of the Taylor Series is the case $a=0$.
This occurs often enough it is given its own name: \begin{comment} Let us look at a few remainders to see if we can derive series. \begin{center} \begin{tabular}{|c|c|c|l|} \hline {\bf Function} &{\bf $a$}& {\bf Remainder} &{\bf Conclusion}\\ \hline $e^x$ & $0$&$\ds{\frac{{e^z}x^{n+1}}{(n+1)!}}$ &\parbox{2.0in} {Since $e^z$ is between $e^0$ and $e^x$, it is bounded. As $n\to\infty$, the numerator grows at most exponentially, while the denominator grows like a factorial. Therefore $R_n(x)\to0$ as $n\to\infty$ for every $x\in\Re$.} \\ \hline $\sin x$ &$0$&$\ds{\frac{\pm\left(\begin{array}{c}\sin z\\ \cos z\end{array} \right)x^{n+1}}{(n+1)!}}$&\parbox{2.0in}{ The derivatives of $\sin x$ and $\cos x$ all being $\pm\sin x$ or $\pm\cos x$, the $z$-term of the remainder is always bounded between $-1$ and $1$. Just as in the case for $e^x$, the numerator grows exponentially and the denominator as a factorial. The denominator dominates, and $R_n(x)\to0$ as $n\to\infty$ for every $x\in\Re$.} \\ \hline $\cos x$&same&same\vphantom{$\ds{\frac12}$} form &Same as for $\sin x$. $x\in\Re$.\\ \hline $\ln x$&$1$&$\ds{\frac{(-1)^{n+2}(x-1)^{n+1}}{z^{(n+1)}(n+1)}}$& \parbox{2.0in}{Here we need $|x-1|<1$, or else we have exponential growth in $n$ which the denominator cannot counter. An exponential growth will make $R_n$ blow up. We also need to stay on the right hand side of $x=0$, or else we no longer have $f$ and all its derivatives continuous. Thus $R_n(x)\to0$ as $n\to\infty$ so long as $|x-1|<1$, i.e., $x\in(0,2)$.\footnotemark} \\ \hline $\ds{\frac1{1-x}}$&0&$\ds{\frac{(n+1)!x^{n+2}}{z^{(n+2)}(n+1)!}}$ &\parbox{2.0in}{Notice here the factorials cancel. Here we must have $|x|<1$ or we get exponential growth. This already keeps us away from $x=1$, which is where the function and derivatives blow up. Conclude $R_n(x)\to0$ as $n\to\infty$ so long as $|x|<1$, i.e., $x\in(-1,1)$.}\\ \hline \end{tabular} \end{center} \footnotetext{\label{four} Actually this series is also valid at $x=2$, where we have an alternating series with shrinking terms. It is not so trivial to prove, but the series will in fact converge to $\ln 2$ for $x=2$, so the interval of validity of the series is $(0,2]$.} \bigskip For each of the above functions, we get a range of $x$-values for which the remainders of the Taylor Polynomials shrink to zero, and hence the Taylor Polynomials converge to the function in that range of $x$'s. We can therefore equate the function with the relevant Taylor Series (\ref{TaylorSeries}). We point out here a note of terminology for a special case of the Taylor Series is the case where $a=0$ in (\ref{TaylorSeries}). In this case the series is called the {\it MacLaurin Series} for $f(x)$. \end{comment} \begin{definition} If a Taylor Series is centered at $a=0$, it is called a \linebreak {\bf MacLaurin Series}. In other words, if all derivatives of $f(x)$ exist at $x=0$, the function's MacLaurin Series is given by \begin{equation}\sum_{n=0}^\infty \frac{f^{(n)}(0)x^n}{n!}.\end{equation} \end{definition} The partial sums are sometimes called {\it MacLaurin Polynomials}. In the following propositions, we will consider several Taylor and MacLaurin Series, and show where they converge based on Theorem \ref{TaylorSeries} (which we restated in (\ref{Taylor<>Remainder})) and other observations. 
\bprop
\begin{equation} e^x=\ds{1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots =\sum_{n=0}^\infty\frac{x^n}{n!}} \qquad\text{for all }x\in\Re.\label{e^x}\end{equation}
\eprop
\underline{Proof:} Recall that $f^{(n)}(x)=e^x$ for all $n\in\{0,1,2,\dots\}$. Thus, for any fixed $x\in\Re$, we have
$$R_N(x)=\frac{f^{(N+1)}(z)x^{N+1}}{(N+1)!} =e^z\frac{x^{N+1}}{(N+1)!}.$$
Now $z$ is between $x$ and $0$, and so $e^z<\max\{e^0,e^x\}$, and is thus bounded by $M=\max\{e^0,e^x\}$. Thus
$$|R_N(x)|=e^z\frac{|x|^{N+1}}{(N+1)!}\le M\cdot\frac{|x|^{N+1}}{(N+1)!} \longrightarrow M\cdot 0=0\qquad\text{as }N\to\infty,$$
since the numerator grows geometrically (or shrinks geometrically), while the denominator grows as a factorial. Recall that the factorial will dominate the exponential regardless of the base as $N\to\infty$. Since we showed $R_N(x)\to 0$ for any $x$, by Theorem~\ref{TaylorSeries}, (\ref{e^x}) follows, q.e.d.
It was important to notice that $e^z$ was bounded {\it once $x$ was chosen, }and that the bound is going to change with each $x$. Also, absolute values were not needed around the $e^z$-term, since it will always be positive. Finally, to accommodate the case $x=0$, we substituted the weaker ``$\le$'' for the ``$<$''. For the case $x=0$, a careful look at the $P_N$ shows $R_N(0)\equiv0$. This is because $0$ is where the series is centered. (Recall $P_N(a)=f(a)$.)
\bprop
\begin{equation} \sin x=\ds{x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^{n}x^{2n+1}}{(2n+1)!}} \qquad\text{for all }x\in\Re. \label{sin x} \end{equation}
\eprop
\underline{Proof:} Now $f^{(n)}(x)$ is of the form $\pm\sin x$ or $\pm\cos x$, which means it is bounded absolutely by 1, i.e., $\ds{\left|f^{(n)}(x)\right|\le1}$. Thus
$$|R_N(x)|=\left|\frac{f^{(N+1)}(z)x^{N+1}}{(N+1)!}\right| \le 1\cdot\frac{|x|^{N+1}}{(N+1)!}\to1\cdot0=0 \text{ as }N\to\infty.$$
Again this is because the geometric term $|x|^{N+1}$ is a lower order of growth (and may even decay if $x\in(-1,1)$) than the factorial $(N+1)!$. Thus, according to Theorem~\ref{TaylorSeries}, (\ref{sin x}) follows, q.e.d.
A nearly identical argument shows that
\bprop
\begin{equation} \cos x=\ds{1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^{n}x^{2n}}{(2n)!}} \qquad\text{for all }x\in\Re.\label{cos x} \end{equation}
\eprop
Not all Taylor series converge to the function for all of $x\in\Re$. Furthermore, it is sometimes difficult to prove $R_N(x)\to0$ directly, even when other techniques can show that the Taylor Series does converge to the function. For example, consider the following:
\bprop
\begin{equation} \ds{\frac1{1-x}}=\ds{1+x+x^2+x^3+x^4+\cdots =\sum_{n=0}^\infty x^n} \qquad\text{for all }x\in(-1,1).\label{Geometric}\end{equation}
\label{GeometricProp}\eprop
Though we can calculate the Taylor Polynomials directly (see Exercise~\ref{geompartsums}), (\ref{Geometric}) is obvious if we read it backwards, realizing that the series is geometric with first term $1$ and ratio $x$. Moreover, the series converges when $|x|<1$ and diverges otherwise, from what we know of geometric series. From these observations, Proposition~\ref{GeometricProp} is proved.
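\medskip
It is illuminating to watch these remainders shrink numerically. The Python sketch below is only an illustration (the helper names are ours, and the point $x=2$ and the orders shown are arbitrary choices); it prints $f(x)-P_N(x)$ for the exponential and sine series as $N$ grows.

\begin{verbatim}
import math

# Watch R_N(x) = f(x) - P_N(x) shrink as N grows, for the series above.
def maclaurin_exp(N, x):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

def maclaurin_sin(N, x):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(N // 2 + 1) if 2*n + 1 <= N)

x = 2.0
for N in [1, 3, 5, 7, 9]:
    print(f"N = {N}:  R_N for e^x  = {math.exp(x) - maclaurin_exp(N, x): .2e}"
          f"   R_N for sin x = {math.sin(x) - maclaurin_sin(N, x): .2e}")
\end{verbatim}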
We will see in Sections \ref{D&I} and \ref{OtherManipulations} that many of the connections and manipulations we would like to make with Taylor/MacLaurin Series are legitimate. In fact, these methods are often much easier than the more basic approaches. Consider Proposition~\ref{GeometricProp}. The actual remainder is of the form
\begin{equation} R_N(x)=\frac{(N+1)!\,(1-z)^{-(N+2)}\,x^{N+1}}{(N+1)!} =\frac{x^{N+1}}{(1-z)^{N+2}}.\label{GeometricRemainder}\end{equation}
We know $z$ is between $0$ and $x$, but without knowing more about where it lies, it is not obvious that the numerator in our simplified $R_N$ will decrease in absolute size faster than the denominator. We will not belabor the point here, but just conclude that resorting to using facts about geometric series is a much simpler approach than attempting to prove $R_N(x)\to0$ when $|x|<1$.
Another interesting Taylor Series is the following:
\bprop The following is the Taylor Series for $\ln x$ centered at $x=1$:
\begin{align} \ln x&=1(x-1)-\frac12(x-1)^2+\frac13(x-1)^3-\frac14(x-1)^4+\cdots \label{ln x}\\ &=\sum_{n=1}^\infty\frac{(-1)^{n+1}(x-1)^n}{n} \qquad\text{for } |x-1|<1.\notag \end{align}
\label{ln series prop}\eprop
We found $P_N$ in Example \ref{ln example}. A proof that (\ref{ln x}) is valid on $(1/2,2)$, in which one shows $R_N(x)\to0$ on that interval, is left as Exercise~\ref{partiallnintervalproof}. The proof that the series is valid for all of $(0,2)$ is left as an exercise in Section~\ref{OtherManipulations}, after other methods are available. Finally, in Exercise~\ref{basic int of conv exercises} we show the series in fact converges for $(0,2]$, so that by Abel's Theorem (Theorem~\ref{Abel's}) the series converges to $\ln x$ in all of $(0,2]$.
\subsection{Techniques for Writing Series using $\mathbf{\Sigma}$-Notation}
Notice some of the tricks for getting the correct terms in the summation. For instance, we insert a factor $(-1)^n$ or $(-1)^{n+1}$ to achieve the alternation of sign, depending upon whether the first term carries a ``$+$'' or a ``$-$.'' We also pick up only the odd terms in the $\sin x$ expansion by using the $2n+1$ factors, and pick up only the even terms in the $\cos x$ expansion by using the $2n$. The way to get comfortable with these manipulations is to write out a few terms of the summations on the right of (\ref{sin x}), (\ref{cos x}) and (\ref{ln x}). For example, we can check that the summation notation in (\ref{sin x}) is consistent as follows:
$$\sum_{n=0}^\infty\frac{(-1)^{n}x^{2n+1}}{(2n+1)!} =\underbrace{\frac{(-1)^0x}{1!}}_{n=0\ \mathrm{term}} +\underbrace{\frac{(-1)x^3}{3!}}_{n=1\ \mathrm{term}} +\underbrace{\frac{(-1)^2x^5}{5!}}_{n=2\ \mathrm{term}} +\underbrace{\frac{(-1)^3x^7}{7!}}_{n=3\ \mathrm{term}}+\cdots $$
$$=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots $$
We see that we get the correct alternation of sign and the correct powers and factorials from our sigma ($\sum$) notation.
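\medskip
This kind of check can even be automated. The short Python sketch below (the use of exact fractions is merely a convenience of ours) has the $\sum$-formula from (\ref{sin x}) print its own first few coefficients, so they can be compared with the longhand series $x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$.

\begin{verbatim}
from fractions import Fraction
from math import factorial

# Print the coefficients produced by sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
# so they can be compared with the series written out longhand.
for n in range(4):
    coeff = Fraction((-1)**n, factorial(2*n + 1))
    print(f"n = {n}:  coefficient of x^{2*n + 1} is {coeff}")
\end{verbatim}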
The powers of $x$ are therefore increasing by 2. We have alternating factors of $\pm 1$. In the denominator we have powers of $2$. This can be written \begin{align*} x-\frac{x^3}{2}+\frac{x^5}{4}-\frac{x^7}{8}+\cdots &=\sum_{n=1}^\infty\frac{(-1)^{n+1}\,x^{2n-1}}{2^{n-1}}\\ x-\frac{x^3}{2}+\frac{x^5}{4}-\frac{x^7}{8}+\cdots &=\sum_{n=0}^\infty\frac{(-1)^n\,x^{2n+1}}{2^n}. \end{align*} \item c. The powers of $x$ here are all even, hence increasing by 2 with each step. There is also alternation of signs. Finally the denominators are products of odd numbers, similar to a factorial but skipping the even factors. In a case like this, we allow for a more expanded writing of the pattern in the $\sum$-notation. We write the following: \begin{align*} -\,\frac{x^2}{1}+\frac{x^4}{1\cdot3}-\frac{x^6}{1\cdot3\cdot5} +\frac{x^8}{1\cdot3\cdot5\cdot7}+\cdots &=\sum_{n=1}^\infty\frac{(-1)^n\,x^{2n}}{1\cdot3\cdot5 \cdots(2n-1)}\\ -\,\frac{x^2}{1}+\frac{x^4}{1\cdot3}-\frac{x^6}{1\cdot3\cdot5} +\frac{x^8}{1\cdot3\cdot5\cdot7}+\cdots &=\sum_{n=0}^\infty\frac{(-1)^{n+1}\,x^{2n+2}}{ 1\cdot3\cdot5\cdots(2n+1)}.\end{align*} If we had some compelling reason, we might even begin at $n=3$, for instance: $$-\,\frac{x^2}{1}+\frac{x^4}{1\cdot3}-\frac{x^6}{1\cdot3\cdot5} +\frac{x^8}{1\cdot3\cdot5\cdot7}+\cdots =\sum_{n=3}^\infty\frac{(-1)^{n}\,x^{2n-4}}{1\cdot3\cdot5\cdots(2n-5)}.$$ It is understood that the denominator contains all the odd factors up to the last factor indicated, namely $(2n-1)$, $(2n+1)$, or $(2n-5)$, depending on the form chosen. Though the first two terms do not contain all of $1\cdot3\cdot5$, we put in those three numbers to establish the pattern, which is understood to terminate at that last factor even if that means stopping before 3 and/or 5. Whenever there is alternation, expect $(-1)^n$ or $(-1)^{n+1}$ or similar factors to be present. An increase by 2 at each step is achieved by $(2n+k)$, where $k$ is chosen to get the first term correct. An increase by 3 would require a $(3n+k)$. With some practice it is not difficult to translate a series written longhand, but with a clear pattern, into $\sum$-notation. \end{description} \eex \newpage \bigskip\begin{center}{\Large \underline{Exercises}}\end{center} \bhw As we did just above, begin with the right hand sides of (\ref{e^x}), (\ref{cos x}), (\ref{Geometric}), and (\ref{ln x}) to show that the $\sum$-notation used does indeed return the desired series for these functions. \ehw \bhw Write the following series using $\sum$-notation. Begin each series with both $n=1$ and $n=0$ for the first term. \begin{description} \item a. $\ds{1-\frac{x^2}{2}+\frac{x^4}{3}-\frac{x^6}{4}+\cdots}$ \item b. $\ds{x^2+\frac{x^4}{4}+\frac{x^6}{9}+\frac{x^8}{16} +\frac{x^{10}}{25}+\cdots}$ \item c. $\ds{\frac{x}{2}-\frac{x^2}{2\cdot4}+\frac{x^3}{2\cdot4\cdot6} -\frac{x^4}{2\cdot4\cdot6\cdot8}+\cdots}$ \item d. $\ds{\frac{x}{1\cdot1}+\frac{x^3}{3\cdot1\cdot2} +\frac{x^5}{5\cdot1\cdot2\cdot3}+\frac{x^7}{7\cdot1\cdot2\cdot3\cdot4} +\cdots}$ \item e. $\ds{\frac2{4}-\frac{4x}{7}+\frac{6x^2}{10}-\frac{8x^3}{13} +\cdots}$ \end{description} \ehw \bhw Prove Proposition~\ref{cos x}. \ehw \bhw Prove that the remainder $\ds{R_N(x)=\frac{x^{N+1}}{(1-z)^{N+2}}}$ from (\ref{GeometricRemainder}) does approach zero as $N\to\infty$ for the case $x\in(-1,0)$. Note that it is enough to show $|R_N(x)|\to0$. (Hint: What can you say about $1-z$ in this case?)
\ehw \bhw Prove that the remainder term $\ds{R_N(x)=\frac{(-1)^N(x-1)^{N+1}}{(N+1)\,z^{N+1}}}$ from Proposition \ref{ln series prop} converges to zero as $N\to\infty$ for the following two cases: \begin{description} \item a. $x\in[1,2)$; \item b. $x\in(1/2,1)$. \end{description} Thus Proposition \ref{ln series prop} is proved in part. (Hint: A number line showing the various quantities may be helpful.) \label{partiallnintervalproof} \ehw \newpage \section{Derivatives and Integrals With Taylor Series \label{D&I}} \bigskip As has already been mentioned, many of the manipulations we would hope we can do with Taylor Series are in fact possible. For instance, we can take derivatives and integrals as expected: \bigskip \begin{theorem} Suppose that $f(x)$ is given by some Taylor Series \begin{equation} f(x)=a_0+a_1(x-a)+a_2(x-a)^2+a_3(x-a)^3+\cdots=\sum_{n=0}^\infty a_n(x-a)^n.\label{SeriesAgain}\end{equation} \begin{enumerate} \item If the series converges in an open interval containing $x$, then \begin{equation} f'(x)=a_1+2a_2(x-a)+3a_3(x-a)^2+\cdots=\sum_{n=1}^\infty na_n(x-a)^{n-1}. \end{equation} \item Furthermore, integrating (\ref{SeriesAgain}) term by term we get\footnotemark \begin{align}\int f(x)\,dx&= a_0(x-a)+a_1\frac{(x-a)^2}{2}+a_2\frac{(x-a)^3}{3} +\cdots+C\notag\\ &=\sum_{n=0}^\infty a_n\frac{(x-a)^{n+1}}{n+1}+C, \label{SeriesIntegral}\end{align} with the special case that, if the series converges on the interval containing both $a$ and $x$, we have \begin{equation}\int_a^xf(t)\,dt=\left.\sum_{n=0}^\infty a_n\frac{(t-a)^{n+1}} {n+1}\right|_a^x =\sum_{n=0}^\infty a_n\frac{(x-a)^{n+1}}{n+1}. \label{SeriesDefiniteIntegral}\end{equation} \end{enumerate}\end{theorem}\footnotetext{ We should notice that in (\ref{SeriesIntegral}) we have an extra $a_0(-a)$ in the first term, but that is not a problem since we have an arbitrary constant $+C$ at the end which can account for any discrepancies.} Let us see how the derivative part of the theorem plays out first. \bigskip \bex We do the following calculations using known formulas first, and using series to show the reasonableness of it all. $$\frac{d}{dx}e^x=e^x.$$ \begin{eqnarray*}\ds{\frac{d}{dx}\left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots \right)}&=&\ds{0+1+\frac{2x}{2\cdot1}+\frac{3x^2}{3\cdot2\cdot1}+ \frac{4x^3}{4\cdot3\cdot2\cdot1}+\cdots }\\ &=&\ds{1+\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots}\end{eqnarray*} Using $\sum$-notation, keeping in mind that the first ($k=0$) term differentiates to zero, we get $$\frac{d}{dx}\left(\sum_{k=0}^\infty\frac1{k!}x^k\right) =\sum_{k=0}^\infty k\frac1{k!}x^{k-1} =\sum_{k=1}^\infty\frac1{(k-1)!}x^{k-1}.$$ Writing out the first few terms of the new summation we see that it is the same as $$=\sum_{k=0}^\infty\frac1{k!}x^k.$$ \eex The following are very useful exercises for students to attempt themselves. One should first attempt these using the written out expansion $$a_0+a_1x+a_2x^2+a_3x^3+\cdots,$$ and then using the $\sum$-notation if possible, comparing the results. \begin{center}\underline{\bf\Large Exercises}\end{center} \bhw Use the method above to show that we can derive the two derivative formulas from the MacLaurin Series representations (as we showed above $\frac{d}{dx}e^x=e^x$ is verified with series): \begin{description} \item a. $\ds{\frac{d}{dx}\sin x=\cos x}$. \item b. $\ds{\frac{d}{dx}\cos x=-\sin x}$.
\end{description} \ehw \bhw Use the fact that $\ds{\frac{d}{dx}\left(\frac1{1-x}\right) =\frac1{(1-x)^2}}$ to find the MacLaurin Series expansion for $$f(x)=\frac1{(1-x)^2}.$$ \ehw \bhw Use the facts that $\tan^{-1}x=\int_0^x\frac{1}{1+t^2}\,dt$, and that $\frac1{1+t^2}=\frac1{1-[-t^2]}$ to compute the MacLaurin Series for $\tan^{-1}x$. \ehw \bhw Find a series expansion for the general antiderivative of $\ds{e^{x^2}}$. \ehw \newpage \section{Other Manipulations With Taylor Series\label{OtherManipulations}} \bigskip One very nice property of the Taylor Series is the following fact: \begin{theorem}If there are two power series representations which are valid in an open interval containing a given point $x$: $$f(x)=\sum_{k=0}^\infty a_k x^k =\sum_{k=0}^\infty b_kx^k,$$ then $a_0=b_0$, $a_1=b_1$, and so on. \end{theorem} The theorem is stating that any two such representations must really be the same. In math-speak, we would say that the Taylor/MacLaurin Series representation for a function is unique at each point where it is valid. One main usefulness of the theorem lies in the fact that it is often easier to calculate the series {\it algebraically}, or from simple calculus applied to a known series, than to calculate directly the Taylor Coefficients $a_0=f(a)$, $a_1=f'(a)$, $a_2=\frac12f''(a)$, etc. For instance, rather than calculating the MacLaurin Series for $\sin x$ and $\cos x$ separately, we could have calculated the series for $\sin x$ first, and then taken its derivative to get the $\cos x$ series. But there are more compelling examples to be sure. \bex\label{e^x^2}Use the MacLaurin Series for $e^x$ to calculate the MacLaurin Series for $e^{x^2}$. \medskip \underline{\bf Solution:} We simply replace $x$ with $x^2$ in the series for $e^x$. \begin{eqnarray*} e^x&=&\ds{1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots =\sum_{k=0}^\infty\frac{x^k}{k!}}\\ e^{x^2}&=&\ds{1+(x^2)+\frac{(x^2)^2}{2!}+\frac{(x^2)^3}{3!} +\frac{(x^2)^4}{4!}+\cdots=\sum_{k=0}^\infty\frac{(x^2)^k}{k!}}\\ &=&\ds{1+x^2+\frac{x^4}{2!}+\frac{x^6}{3!}+\frac{x^8}{4!}+\cdots =\sum_{k=0}^\infty\frac{x^{2k}}{k!}}.\end{eqnarray*} \eex We will now dispel any doubt that this approach is superior to calculating the series from scratch. Remember that we would need a formula for $f^{(n)}(x)$ to fill out our chart. The first two are easy enough: $f(x)=e^{x^2}$; $f'(x)=2xe^{x^2}$. For $f''$, we need a product rule and another chain rule: $f''(x)=2e^{x^2}+2x(2xe^{x^2})=2e^{x^2}(1+2x^2)$. Next we would need another product rule and a couple chain rules to find $f'''$. By then, we would certainly conclude the method above is superior. Essentially we used an algebraic method in Example \ref{e^x^2}. We simply replaced $x$ by $x^2$, just as we learn in algebra, except this time we did it with series: $$f(x)=e^x \qquad\Longrightarrow\qquad f(x^2)=e^{x^2}.$$ \bex \label{ArctanxSeries}Use the series for $\ds{\frac1{1-x}}$ to derive a series for $\ds{\frac1{1+x^2}}$. Then use that series to find a series for $\tan^{-1}x$. \medskip \underline{\bf Solution:} The series for $\ds{\frac1{1-x}}$ was given in (\ref{Geometric}), but is not difficult to memorize due to its relationship with geometric series.
We first replace $x$ with $-x^2$ in that series, since $\ds{\frac1{1+x^2}=\frac1{1-(-x^2)}}$: \begin{eqnarray*} \frac1{1-x}&=&1+x+x^2+x^3+x^4+\cdots=\sum_{n=0}^\infty x^n\\ \frac1{1+x^2}&=&1+(-x^2)+(-x^2)^2+(-x^2)^3+(-x^2)^4+\cdots =\sum_{n=0}^\infty(-x^2)^n\\ &=&1-x^2+x^4-x^6+x^8+\cdots=\sum_{n=0}^\infty(-1)^nx^{2n}.\end{eqnarray*} This is valid wherever $|x^2|<1$, which it is not too difficult to see is again wherever $|x|<1$.\footnotemark \footnotetext{Recall that $|x^2|=|x|^2$. Also recall that the square root function is increasing on $[0,\infty)$, and so (by definition) preserves inequalities. Thus $$|x|^2<1\iff \sqrt{|x^2|}<\sqrt1 \iff |x|<1.$$} Next we use the fact that $$\tan^{-1}x=\tan^{-1}x-\tan^{-1}0=\int_0^x\frac1{1+t^2}\,dt$$ $$=\int_0^x\sum_{n=0}^\infty(-1)^nt^{2n}\,dt =\left.\sum_{n=0}^\infty(-1)^n\frac{t^{2n+1}}{2n+1}\right|_0^x =\sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1}-0.$$ Thus \begin{equation} \tan^{-1}x=\sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1} =x-\frac{x^3}{3}+\frac{x^5}5-\frac{x^7}7+\cdots.\end{equation} Once again, this is valid where $|x^2|<1$, i.e., where $|x|<1$. \eex \bex Find the MacLaurin Series for $f(x)=x^3\sin2x$. \medskip \underline{\bf Solution:} This will follow quickly from the series for $\sin x.$ \begin{eqnarray*} \sin x&=&x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}\\ \sin 2x&=&(2x)-\frac{(2x)^3}{3!}+\frac{(2x)^5}{5!}-\frac{(2x)^7}{7!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^n(2x)^{2n+1}}{(2n+1)!}\\ &=&2x-\frac{8x^3}{3!}+\frac{32x^5}{5!}-\frac{128x^7}{7!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^n2^{2n+1}x^{2n+1}}{(2n+1)!}\\ x^3\sin2x&=&x^3\left(2x-\frac{8x^3}{3!}+\frac{32x^5}{5!}- \frac{128x^7}{7!}+\cdots\right) =x^3\left(\sum_{n=0}^\infty \frac{(-1)^n2^{2n+1}x^{2n+1}}{(2n+1)!} \right) \\ &=&2x^4-\frac{8x^6}{3!}+\frac{32x^8}{5!}-\frac{128x^{10}}{7!}+\cdots =\sum_{n=0}^\infty\frac{(-1)^n2^{2n+1}x^{2n+4}}{(2n+1)!}.\end{eqnarray*} From this example, perhaps one can see an advantage in using the $\sum$-notation instead of writing out several terms to find the pattern. \eex \bex \label{Inte^x^2}Find $\ds{\int_0^x e^{t^2}\,dt.}$ \medskip \underline{\bf Solution:} It is an interesting but futile exercise to try to find the antiderivatives of $e^{x^2}$ using the usual tricks: substitution, integration by parts, etc. It is well-known that there is no ``closed form'' for this antiderivative, i.e., using the usual functions in the usual manners. It is also true that, since $e^{x^2}$ is continuous on $\Re$, there must exist continuous antiderivatives.\footnotemark \footnotetext{This comes from one of the statements of the Fundamental Theorem of Calculus.} Our discussion here presents a strategy for calculating this integral: write the integrand as a series, and integrate term by term. As before, we will write the steps and the solution in two ways: one method is to write out several terms of the series; the other, done simultaneously, is to use the $\sum$-notation. Hopefully by now they are equally simple to deal with.
\begin{eqnarray*} e^t&=&1+\frac{t}{1!}+\frac{t^2}{2!}+\frac{t^3}{3!}+\cdots =\sum_{n=0}^\infty\frac{t^n}{n!}\\ e^{t^2}&=&1+\frac{t^2}{1!}+\frac{(t^2)^2}{2!}+\frac{(t^2)^3}{3!}+\cdots =\sum_{n=0}^\infty\frac{(t^2)^n}{n!}\\ e^{t^2}&=&1+\frac{t^2}{1!}+\frac{t^4}{2!}+\frac{t^6}{3!}+\cdots =\sum_{n=0}^\infty\frac{t^{2n}}{n!}\end{eqnarray*} $$\int_0^xe^{t^2}\,dt=\int_0^x\left(1+\frac{t^2}{1!}+ \frac{t^4}{2!}+\frac{t^6}{3!}+\cdots\right)\,dt =\int_0^x\left(\sum_{n=0}^\infty\frac{t^{2n}}{n!}\right)\,dt$$ $$=\left.\left(t+\frac{t^3}{3\cdot1!}+\frac{t^5}{5\cdot2!} +\frac{t^7}{7\cdot3!}+\cdots\right)\right|_0^x =\left.\sum_{n=0}^\infty\frac{t^{2n+1}}{(2n+1)n!}\right|_0^x$$ $$=\left(x+\frac{x^3}{3\cdot1!}+\frac{x^5}{5\cdot2!} +\frac{x^7}{7\cdot3!}+\cdots\right)-0 =\sum_{n=0}^\infty\frac{x^{2n+1}}{(2n+1)n!}-0.$$ Thus $$\int_0^xe^{t^2}\,dt=\sum_{n=0}^\infty\frac{x^{2n+1}}{(2n+1)n!}.$$ We could also write the general antiderivative $$\int e^{x^2}\,dx=\sum_{n=0}^\infty\frac{x^{2n+1}}{(2n+1)n!}+C.$$ \eex Other antiderivatives which must be found this way are $\int\sin x^2\,dx$, $\int\cos x^2\,dx$. \newpage \begin{center}\underline{\bf\Large Exercises}\end{center} \noindent In each of the following, unless otherwise stated, leave your final answers in $\sum$-notation. \bhw Find the MacLaurin series for $f(x)=\ln(x+1)$ using (\ref{ln x}). Where is this series valid? \ehw \bhw Write the MacLaurin series for $\ds{f(x)=\frac12\sin2x}$ by\begin{description} \item a. Using the series for $\sin x$. \item b. Using the series for $\sin x$ and $\cos x$ and the fact (from the double angle formula) that $$f(x)=\sin x\cos x.$$ (Just write out the first several terms of the product, being careful to distribute correctly, to verify the answer is the same as in part a.) \end{description}\ehw \bhw Approximate $\ds{\int_0^{\sqrt\pi}\cos x^2\,dx}$ \ by computing the first five nonzero terms of the MacLaurin series for $\int\cos x^2\,dx$. \ehw \bhw The Hyperbolic Functions: The three most important hyperbolic functions are \begin{eqnarray} \sinh x&=&\ds{\frac{e^x-e^{-x}}2}\\ \cosh x&=&\ds{\frac{e^x+e^{-x}}2}\\ \tanh x&=&\ds{\frac{e^x-e^{-x}}{e^x+e^{-x}}}.\end{eqnarray} Though not immediately obvious, it is true that $\tanh x$ is invertible, and that its inverse has the property that \begin{equation}\frac{d}{dx}\tanh^{-1}x=\frac1{1-x^2}.\end{equation} Find the MacLaurin series for $f(x)=\tanh^{-1}x$ given that \begin{equation}\tanh^{-1}x=\int_0^x\frac1{1-t^2}\,dt.\label{tanh2} \end{equation} (See Example~\ref{ArctanxSeries}, page~\pageref{ArctanxSeries}.) Where is this series valid? (Actually the integral in (\ref{tanh2}) can also be computed with partial fractions, and the final answer written without resorting to series.) \label{HyperbolicFunctsMaclaurinSeries}\ehw \bhw (Proof of Proposition~\ref{ln series prop}) Derive the Taylor Series for $\ln x$ with $a=1$ using the fact that $$\ln x=\int_1^x\frac1t\,dt$$ for $x>0$, and $$\frac1t=\frac1{1-(1-t)}.$$ Where is this series guaranteed valid? \ehw \section{The Binomial Series and an Application} The following series comes up in enough applications that it is worth some focus. It is the following: \begin{equation} (1+x)^\alpha=1+\alpha x+\frac{\alpha(\alpha-1)x^2}{2!} +\frac{\alpha(\alpha-1)(\alpha-2)x^3}{3!}+\cdots %=\sum_{n=0}^\infty\frac{\alpha(\alpha-1)\cdots(\alpha-n+1)x^n}{n!}.
\label{Binomial}\end{equation} This can also be written $$(1+x)^\alpha=\sum_{n=0}^\infty \frac{\alpha(\alpha-1)\cdots(\alpha-n+1)x^n}{n!}.$$ This series is valid for $|x|<1$, and sometimes also valid at one or both endpoints $x=\pm1$. It is not difficult to prove, and is a worthwhile exercise. In fact, for $\alpha\in\{0,1,2,3,\cdots\}$, the function is a polynomial and the series terminates (in the sense that all but finitely many terms are zero), simply giving an expansion of the polynomial, valid for all $x$. The derivation of (\ref{Binomial}) is straightforward. See Exercise \ref{BinomialDerivation}. Here are some quick examples: $$\begin{array}{rcll} \ds{\frac1{\sqrt{1+x}}}&=&1-\frac12x+\frac{\left(-\frac12\right) \left(-\frac32\right)x^2}{2!} +\frac{\left(-\frac12\right)\left(-\frac32\right)\left(-\frac52\right) x^3}{3!}+\cdots\qquad & \left(\alpha=-\frac12\right)\\ \ds{\frac1{1+x^2}}&=& 1-(x^2)+\frac{(-1)(-2)(x^2)^2}{2!}+\frac{(-1)(-2)(-3)(x^2)^3}{3!} +\cdots\vphantom{\ds{\int}}&(\alpha=-1)\\ &=&1-x^2+x^4-x^6+\cdots& (``x''=x^2)\\ (1+x)^3&=&1+3x+\frac{3\cdot2x^2}{2!}+\frac{3\cdot2\cdot1x^3}{3!} +\frac{3\cdot2\cdot1\cdot0x^4}{4!}+\frac{3\cdot2\cdot1\cdot0\cdot(-1)x^5}{5!} +\cdots\\ &=&1+3x+3x^2+x^3& \vphantom{\ds{\int}}(\alpha=3)\\ \end{array}$$ Actually, the last one is valid for all $x$, and the one above it was done in Example 9. \bigskip There is a beautiful application of binomial series which relates Einstein's Special Relativity to Newtonian Mechanics. This application is given in the following example. \bex{\bf (Application)} According to Einstein, kinetic energy is that energy which is due to the motion of an object, and can be defined as the following function of velocity (for a given rest mass $m$): \begin{eqnarray*}E_k&=&E_{\mathrm{total}}-E_{\mathrm{rest}}\\ E_k(v)&=&\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}} -mc^2\\ &=&mc^2\left(\frac1{\sqrt{1-\frac{v^2}{c^2}}}\right)-mc^2. \end{eqnarray*} Contained in the above is the very famous equation $E_{\mathrm{rest}} =mc^2$. Also notice that the total energy $E_{\mathrm{total}}$ blows up as $v\to c^-$ or $v\to-c^+$, i.e., as velocity approaches the speed of light. At $v=\pm c$, we are dividing by zero in the total energy, and thus the theory that ordinary objects cannot achieve the speed of light (for it would take infinite energy to achieve it). Now let us expand this expression of $E_k(v)$ by expanding the fraction $\ds{\frac1{\sqrt{1-\frac{v^2}{c^2}}}}$ using the binomial series, with $\alpha=-1/2$ and replacing $x$ with $-v^2/c^2$. \begin{align}E_k(v)&=mc^2\left(1-\frac12\left(-\frac{v^2}{c^2}\right) +\frac{\left(-\frac12\right)\left(-\frac32\right) \left(-\frac{v^2}{c^2}\right)^2}{2!}+\cdots \right)-mc^2\label{einexpand}\\ &\approx mc^2\left(1+\frac12\frac{v^2}{c^2}\right)-mc^2 \qquad \mathrm{when\ } \frac{v^2}{c^2}\ \mathrm{is\ small.}\end{align} Multiplying this out, we see that \begin{equation}E_k\approx mc^2+mc^2\cdot\frac12\frac{v^2}{c^2}-mc^2 =\frac12mv^2.\end{equation} Summarizing, \begin{equation}E_k(v)\approx\frac12mv^2\qquad\qquad\mathrm{when\ } |v|<<c.\label{einsumm}\end{equation} Here the notation $|v|<<c$ means that $|v|$ is much smaller than $c$, giving us that $v^2/c^2$ is very small. So we see that Newton's kinetic energy formula is just an approximation of Einstein's, which is to be expected since Newton was not considering objects at such high speeds. 
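As a quick additional remark (this follows directly from the expansion above and is only meant to indicate the size of the error in the approximation), keeping one more term of (\ref{einexpand}) gives $$E_k(v)\approx\frac12mv^2+\frac38m\frac{v^4}{c^2},$$ since $mc^2\cdot\frac{\left(-\frac12\right)\left(-\frac32\right)}{2!}\left(-\frac{v^2}{c^2}\right)^2=\frac38m\frac{v^4}{c^2}$. The correction is positive and of order $v^4/c^2$, so it is truly negligible precisely when $|v|<<c$.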
\eex \bigskip \newpage \begin{center}{\Large\underline{Exercises}}\end{center} \bigskip \bhw Derive the series (\ref{Binomial}) using the formula for Taylor/MacLaurin Series where $f(x)=(1+x)^\alpha$ and $a=0$. \label{BinomialDerivation}\ehw \bhw Find a series representation for the following functions using the binomial series (\ref{Binomial}). Do not attempt to use $\sum$-notation, but rather write out the first five terms of the series to establish the pattern. \begin{description} \item a. $\ds{f(x)=(1+x)^{3/2}}$ \item b. $\ds{f(x)=(1-x)^{3/2}}$ \item c. $\ds{f(x)=\frac{1}{\sqrt[3]{1+x}}}$ \item d. $\ds{f(x)=\frac{1}{\sqrt[3]{1+x^3}}}$ \item e. $\ds{f(x)=\frac{x^3}{\sqrt{1+x}}}$ \item f. $\ds{f(x)=\frac1{\sqrt{1-x^2}}}$. \end{description} \ehw \bhw Use the fact that $\ds{\ln\left(x^2+1\right)=\int_0^x\frac{2t} {1+t^2}\,dt}$ to find the series expansion for $\ds{f(x)=\ln\left(1+x^2\right)}$. \ehw \bhw Find a more general form of the binomial series by using (\ref{Binomial}) to derive a series for \begin{equation} f(x)=(b+x)^\alpha \end{equation} and determine for what values of $x$ it is valid. (Hint: Use (\ref{Binomial}) after factoring out $\ds{b^\alpha}$ from $f$.) \ehw \bhw Complete the square and use the binomial series to write a series expansion for the following. Also determine an interval $|x-a|<R$ where the series is guaranteed to be valid. \begin{description} \item a. $\ds{f(x)=\frac1{\sqrt{x^2-6x+10}}}$ \item b. $\ds{f(x)=\sqrt{4x^2+12x+13}}$ \item c. $\ds{f(x)=(-2x^2+3x+5)^{-2/3}}$ \end{description} \ehw \bhw Using (\ref{einexpand}), show that $\ds{E_k(v)\ge\frac12mv^2}$ for $|v|<c$, with equality only occurring when $v=0$. Thus (\ref{einsumm}) is always an underestimation unless $v=0$. (Hint: Look at the signs of all the terms we ignore in the approximation.) \ehw \newpage \section[Power Series, Interval of Convergence]{General Power Series and Interval of Convergence} \subsection{Definition of General Power Series} While most of our favorite functions can be written as power series \begin{equation}f(x)=\sum_{n=0}^\infty a_n(x-a)^n,\label{Power Series} \end{equation} there are many functions which must be written as series (for instance, $\int e^{x^2}\,dx$). In some sense, there are more power series than ``nice'' functions (usual combinations of powers, trig, log, and exponential functions) which also have power series. It is therefore interesting to study power series without reference to functions they may or may not have been derived from. In fact, from what we learned before, we can easily construct a generalized power series centered at $x=a$ if we know $f(a)$, $f'(a)$, $f''(a)$, $f'''(a)$, and so on, using (\ref{asubn}), which states $$a_n=\frac{f^{(n)}(a)}{n!}.$$ It seems that we should be able to determine a power series representation (\ref{Power Series}) for such a function by simply plugging in the prescribed $a_n$ values. It is almost that simple. For the resulting series to have the requisite properties, it must converge in an open interval around $a$, since derivatives are defined by limits in which $a$ must be approachable from both sides. For instance, $$f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}.$$ Thus $f$ must be defined in an open interval containing $a$ or this limit cannot exist. Fortunately, given any power series (\ref{Power Series}), we are guaranteed to have only certain cases for the domain of a function defined in such a way. We rely on the following very nice result.
\subsection{Abel's Theorem} \begin{theorem} {\bf (Abel's Theorem):} A power series of form (\ref{Power Series}) will converge at $a$ only and diverge elsewhere, or converge absolutely in an open interval $x\in(a-R,a+R)$, and diverge outside the closed interval with the same endpoints, i.e., diverge for $x\in(-\infty,a-R)\cup(a+R,\infty)$. If the power series also converges at an endpoint $a-R$ or $a+R$, it will be continuous to the endpoint from the side interior to the interval. \label{Abel's}\end{theorem} \begin{definition} The number $R$ is called the {\bf radius of convergence} of (\ref{Power Series}). We say $R=0$ if the power series converges at $a$ only. It is quite possible that $R=\infty$, in which case the power series converges on all of $\Re$. Otherwise, $R>0$ is finite and the power series \begin{description} \item a. converges for $|x-a|<R$, and \item b. diverges for $|x-a|>R$.\end{description} \end{definition} A nice example we have already considered is the series for $\ln x$. In most cases, the Ratio and Root Tests are the tools used to find the \underline{\it interval of convergence} for a given power series. Below we use the Ratio Test. \bex Find the interval of convergence for the series $\ds{\sum_{n=0}^\infty \frac{x^n}{n!}}$. \medskip \underline{\bf Solution:} Actually we know this series, and that it converges to $e^x$ for all $x\in\Re$. But how would we determine where it converges without knowing the form of the remainder? The key here is to use the Ratio Test for an arbitrary $x$. First write $$f(x)=\sum_{n=0}^\infty a_n(x-a)^n\equiv\sum_{n=0}^\infty u_n.$$ This is just for convenience in organizing the application of the Ratio Test. Next calculate $$\rho=\lim_{n\to\infty}\left|\frac{u_{n+1}}{u_n}\right| =\lim_{n\to\infty}\left|\frac{\ds{\frac{x^{n+1}}{(n+1)!}}} {\ds{\frac{x^n}{n!}}}\right| =\lim_{n\to\infty}\left|\frac{x^{n+1}}{x^n}\right|\cdot\frac{n!}{(n+1)!}$$ $$=\lim_{n\to\infty}|x|\cdot\frac{n!}{n!(n+1)}=\lim_{n\to\infty}|x| \cdot\frac1{n+1}=0\qquad\mathrm{for\ every\ }x\in\Re.$$ Recall that the series will converge absolutely if $\rho<1$, and we in fact have $\rho=0$ for every real $x$. Thus the series converges absolutely on $\Re=(-\infty,\infty)$, which gives the interval of convergence. (Here we take the radius to be $R=\infty$.) \eex \bex Do the same for the series $\ds{\sum_{n=0}^\infty\frac{2^n(x-5)^n}{2n-1}}$. \medskip\underline{\bf Solution:} Just as above, \begin{align*} \rho&=\lim_{n\to\infty}\left|\frac{u_{n+1}}{u_n}\right| =\lim_{n\to\infty}\left|\frac{\left( \ds{\frac{2^{n+1}(x-5)^{n+1}}{2(n+1)-1}} \right)}{\left( \ds{\frac{2^n(x-5)^n}{2n-1}}\right)}\right| \\ &=\lim_{n\to\infty}\frac{2^{n+1}}{2^n}\cdot\frac{2n-1}{2(n+1)-1} \cdot \left|\frac{(x-5)^{n+1}}{(x-5)^n}\right| \\ &=\lim_{n\to\infty}2\cdot\frac{2n-1}{2n+1}\cdot|x-5|=2\cdot1\cdot|x-5|. \end{align*} Remember that the $x$ in the line above is constant as far as the limit goes (since the limit is in $n$). To find the region where $\rho<1$ we simply solve $$2|x-5|<1 \iff |x-5|<\frac12 \iff -1/2< x-5< 1/2$$ $$\iff 9/2<x<11/2.$$ Thus we know for a fact that the series converges absolutely for $x\in(9/2,11/2).$ A similar calculation gives us divergence in $(-\infty,9/2)\cup (11/2,\infty)$, and we usually do not bother repeating the calculations to see this. The only question left is what happens at the two boundary points.
\begin{description}\item\underline{$x=9/2$}: $$\sum_{n=0}^\infty\frac{2^n(9/2-5)^n}{2n-1} =\sum_{n=0}^\infty\frac{2^n(-1/2)^n}{2n-1} =\sum_{n=0}^\infty\frac{2^n\left(\frac12\right)^n(-1)^n}{2n-1} =\sum_{n=0}^\infty\frac{(-1)^n}{2n-1}.$$ The resultant series converges by the Alternating Series Test (alternates, terms shrink in absolute size monotonically to zero). Thus the series does converge at the left endpoint $x=9/2$. \item\underline{$x=11/2$}: $$\sum_{n=0}^\infty\frac{2^n(11/2-5)^n}{2n-1} =\sum_{n=0}^\infty\frac{2^n(1/2)^n}{2n-1} =\sum_{n=0}^\infty\frac{2^n\left(\frac12\right)^n}{2n-1} =\sum_{n=0}^\infty\frac{1}{2n-1}.$$ This series diverges (limit-comparable to the harmonic series $\sum\frac1n$). Thus the power series diverges at this endpoint. \end{description} The conclusion is that the interval of convergence is $x\in[9/2,11/2)$. \eex \newpage \subsection{Taylor/Power Series Connection} There is a nice connection between Taylor and Power Series centered at a given point $a$. To see this connection we first need the following theorem: \begin{theorem}{\rm Manipulations with Power Series: } Suppose we are given a function $f$ defined by a power series \begin{equation} f(x)=\sum_{n=0}^\infty a_n(x-a)^n\end{equation} which converges in some open interval $|x-a|<R$, where $R>0$. Then inside that same interval, \begin{enumerate} \item all derivatives of $f$ exist and are continuous; \item $\ds{f^{(k)}(x)=\sum_{n=0}^\infty \frac{d^k}{dx^k}\left[a_n(x-a)^n\right]}$; \item $\ds{f^{(n)}(a)=n!\,a_n}$; \item $\ds{\int_a^xf(t)\,dt=\sum_{n=0}^\infty \frac{a_n(x-a)^{n+1}}{n+1}}$.\end{enumerate} \label{PowerSeriesManipulations} \end{theorem} In other words, the manipulations we could do with Taylor Series are all valid for power series. We will use Theorem \ref{PowerSeriesManipulations} to show the relationship between Taylor and Power Series which converge in an interval $|x-a|<R$. Put simply: they are the same. We can best illustrate this in the form of a proof. First we fix a point $a$. Then we define \begin{align*}T(a)&=\{\text{all Taylor Series centered at $a$, converging in some }|x-a|<R\},\\ P(a)&=\{\text{all power series centered at $a$, converging in some }|x-a|<R\}. \end{align*} Thus these are sets of series which converge in an open interval centered at $a$, but the interval (and thus $R$) will likely be different for different series. Now clearly $T(a)\subset P(a)$, since any $\ds{\sum_{n=0}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!}\in T(a)}$ is a power series with \begin{equation}a_n=\frac{f^{(n)}(a)}{n!}\label{a_n2}\end{equation} for its coefficients. This shows $T(a)\subset P(a)$. Next we claim that any power series which converges in an open interval centered at $a$ is also a Taylor Series for some function $f(x)$. This seems almost trivial, for if we are given a power series defining a function \begin{equation}f(x)=\sum_{n=0}^\infty a_n(x-a)^n,\label{Power2} \end{equation} in fact the power series (\ref{Power2}) is also the Taylor Series of $f$ since, according to Theorem \ref{PowerSeriesManipulations}, \begin{equation}a_n=\frac{f^{(n)}(a)}{n!}\end{equation} which is equivalent to the definition of the Taylor Series coefficients (\ref{a_n2}). Thus $P(a)\subset T(a)$. Since we had the reverse set inclusion before, we can conclude $P(a)=T(a)$.
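As a quick illustration of this connection in use (this is simply Part 3 of Theorem~\ref{PowerSeriesManipulations} applied to a series we have already computed, not a new result), recall from Example~\ref{e^x^2} that $$e^{x^2}=\sum_{n=0}^\infty\frac{x^{2n}}{n!}.$$ For $f(x)=e^{x^2}$ the coefficients are therefore $a_{2n}=\frac1{n!}$ and $a_{2n+1}=0$, so $$f^{(2n)}(0)=(2n)!\,a_{2n}=\frac{(2n)!}{n!}, \qquad f^{(2n+1)}(0)=0,$$ with no need to compute any derivatives directly. For instance, $f''(0)=\frac{2!}{1!}=2$, which agrees with the formula $f''(x)=2e^{x^2}\left(1+2x^2\right)$ found earlier.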
In advanced calculus, functions which can be represented in $|x-a|<R$ by a convergent power series are given a special name: \begin{definition} A function $f(x)$ which has a power series representation {\rm(\ref{Power2})} converging in some open interval $|x-a|<R$ is called {\bf real-analytic} in that interval. \end{definition} Equivalently, a function is real-analytic on an open interval $|x-a|<R$ if and only if its Taylor Series converges to the function in the same interval. There is a very rich and beautiful theory of real-analytic functions which is beyond the scope of this text. It is a theory which has a remarkably simple extension to functions of a complex variable $$z\in\mathbb{C}=\{x+iy: x,y\in\Re\},\qquad i=\sqrt{-1}.$$ This may seem a complication, but the theory is often simplified by this generality, after which the real-analytic results follow from the complex theory. In fact the term {\it radius of convergence} comes from the complex-analytic theory, where the values for which $\sum a_n(z-a)^n$ converge lie in a disk of radius $R$ inside the complex plane $\mathbb{C}$. Such are topics for advanced calculus or complex analysis courses, usually at the senior or graduate levels. \newpage \begin{center}{\Large\underline{Exercises}}\end{center} \bigskip \bhw Find the interval of convergence for each of the following. Check the endpoints unless otherwise instructed. \begin{description} \item a. $\ds{f(x)=\sum_{n=1}^\infty\frac{x^n}{2^n\sqrt{n}}}$. \item b. $\ds{f(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots}$. See Example~\ref{ArctanxSeries}. \item c. $\ds{\ln x=\sum_{n=1}^\infty\frac{(-1)^{n+1}(x-1)^n}{n}}$. See Proposition~\ref{ln series prop}. \item d. $\ds{f(x)=\sum_{n=0}^\infty nx^n}$. \item e. $\ds{f(x)=\sum_{n=0}^\infty n!\,x^n}$. \item f. $\ds{e^{x^2}=\sum_{n=0}^\infty\frac{x^{2n}}{n!}}$. See Proposition~\ref{e^x}. \item g. $\ds{f(x)=\sum_{n=2}^\infty\frac{(x+1)^n}{(\ln n)^n}}$. \item h. $\ds{f(x)=\sum_{n=0}^\infty \frac{x^n}{n^n}}$. \item i. $\ds{f(x)=\sum_{n=0}^\infty \frac{(n!)^2(x-5)^n}{(2n)!}}$. \item j. $\ds{f(x)=\sum_{n=0}^\infty\frac{x^{2n}}{n^2\cdot10^n}}$. \item k. $\ds{\sin x=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}}$. \item l. $\ds{f(x)=\sum_{n=1}^\infty\frac{(-1)^nn!\,(x-3)^n}{n^n}}$. \item m. $\ds{f(x)=\sum_{n=2}^\infty\frac{3^n(x+2)^n}{\ln n}}$. \end{description} \label{basic int of conv exercises}\ehw \bhw Assume for a moment that all our work with Taylor Series can be generalized to the complex plane $\mathbb{C}$. Note that $i=i$, $i^2=-1$, $i^3=-i$, $i^4=1$, $i^5=i$, etc. Use all this and known MacLaurin Series to prove {\it Euler's Identity:} \begin{equation}e^{i\theta}=\cos\theta+i\sin\theta. \label{Euler'sIdentity}\end{equation} \label{Euler'sDerived} Note that this implies that $\ds{e^{i\pi}=-1}$, an often-cited, beautifully compact equation relating four of the most important numbers in mathematics. \ehw \bhw Use (\ref{Euler'sIdentity}) and the facts that $\sin(-\theta)=-\sin\theta$ and $\cos(-\theta)=\cos\theta$, to show the following relationship between trigonometric and hyperbolic functions (see Exercise~\ref{HyperbolicFunctsMaclaurinSeries}): \begin{description} \item a. $\ds{\cos x=\cosh(ix)}$; \item b. $\ds{\sin x=\frac1i\sinh(ix)}$. \end{description}\label{TrigToHyperbolic}\ehw \bhw Use Exercises \ref{Euler'sDerived} and \ref{TrigToHyperbolic} to prove the following trigonometric identities: \begin{description} \item a. $\ds{\sin^2x+\cos^2x=1}$; \item b. $\ds{\sin2x=2\sin x\cos x}$; \item c. $\ds{\cos2x=\cos^2x-\sin^2x}$; \item d. 
$\ds{\sin(x+y)=\sin x\cos y+\cos x\sin y}$. \end{description} \ehw \newpage \section{Complications} Most of our familiar functions are analytic where they are defined, and so can be represented by Taylor Series for usefully large intervals. These functions include all polynomials, rational functions, exponentials, roots, logarithms and trigonometric functions, as well as combinations of these through addition, subtraction, multiplication, division and composition. We already mentioned that there are power series which are perfectly respectable functions, but which cannot be written as combinations of familiar functions. This may leave the student with the incorrect impression that we can always find and manipulate Taylor Series for all functions with impunity. However, there are many functions we encountered in Chapter~\ref{LimitsAndContinuityChapter} which have more pathological behaviors and are not analytic where defined. Taylor Series are therefore useless and inappropriate in dealing with such functions. The purpose of this section is to alert the student to when not to turn to Taylor Series---or even Taylor Polynomials---except possibly with careful modifications. \begin{example}Consider $f(x)=|x|$, which we graph below.\end{example}
{ "alphanum_fraction": 0.6647497626, "avg_line_length": 39.2598509052, "ext": "tex", "hexsha": "6bc6ed92a58f7493c9f0bab5cf96c659559cc696", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_forks_repo_path": "michael.dougherty/chapter11.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_issues_repo_path": "michael.dougherty/chapter11.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_stars_repo_path": "michael.dougherty/chapter11.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 41613, "size": 110595 }
\chapter{Advanced} \label{chp:adv_basic} \todo[inline,color=blue!40]{http://superuser.com/questions/437330/how-do-you-add-a-certificate-authority-ca-to-ubuntu Add a CA certificate to the certificate store} In this section you will find all the topics that I have not really needed until later on, or when setting up more advanced servers, i.e. those with RAID, multiple hard drives etc. \section{Mounting Windows/Samba Shares} \label{sec:mount_samba} There is sometimes a need to mount an external windows-compatible share on our server. In this section we can see how to do that; \subsection{Temporary Mounting} We are going to use \textit{smbclient}, and then \textit{mount} for this. First things first, we are going to check if we can see the external server share; \begin{lstlisting} smbclient -L //SERVERNAME/FOLDER \end{lstlisting} If you are successful you should see a list of the shares available to you. Now let's actually connect to our share; \begin{lstlisting} smbclient //SERVERNAME/SHARE -U username \end{lstlisting} You should be prompted for the password; enter it and then (fingers crossed) you should be presented with the smb environment, where you can do various things. The easiest way to connect to a share using our server is to use the mount command; we type the following; \begin{lstlisting} sudo mount -t cifs //SERVERNAME/SHARE /MOUNT/POINT -o username=USERNAME,noexec \end{lstlisting} This will mount the share as if it were a usb drive or similar. You can then interact with it using the ubuntu file system at $/MOUNT/POINT$. \subsection{Permanent Mounting} Once you are happy with mounting the share, then we can add it to the fstab and get it to mount automatically; simply add the following line to your fstab file ($/etc/fstab$) \begin{verbatim} //SERVERNAME/SHARE /MOUNT/POINT cifs credentials=CREDFILE 0 0 \end{verbatim} Where CREDFILE is the path to a file containing the username and password in the form; \begin{verbatim} username=USERNAME password=PASSWORD \end{verbatim} For a more in-depth explanation of fstab see section~\ref{ssec:permanentHDMount}. \section{Monitoring Computer Physical Properties} \label{sec:physicalmon} So if you have a mission critical server, or just a linux box sat under your desk, you may want to know just how hot it is inside the case. We can do this with the sensors package. Simply install using: \begin{lstlisting} sudo apt-get install lm-sensors \end{lstlisting} After this is finished we have to run the set up tool: \begin{lstlisting} sudo sensors-detect \end{lstlisting} There will be a lot of questions asked; it's usually fine to just hit enter to all of them (you may be there a while). Now once this has been completed, you need to run: \begin{lstlisting} sudo service kmod start \end{lstlisting} in order to load the changes to the kernel modules, or you could just restart the system! To read the output from the sensors found you can simply type: \begin{lstlisting} sensors \end{lstlisting} and it will give you an instantaneous reading of all the sensors available. If you would like to constantly poll the system for its temperatures you can use the watch function as mentioned in section~\ref{sec:watch}, chapter~\ref{chp:first}. The command for watching the sensors output every two seconds would be: \begin{lstlisting} watch sensors \end{lstlisting} \section{Symbolic Links and bind mounts} \label{sec:symlink} Symbolic links and bind mounts are similar to a short-cut in Windows.
From the command line it appears as if it is a file/folder and can be interacted with as if the file/folder is located where you have put it. For example, I have a file here; \menu[,]{/, scripts} but for simplicity I want to link a folder in my home directory to this folder so I can get to it quicker... So I want it here; \menu[,]{home,adam,scripts} \subsection{Symbolic Links} Symbolic links come in two flavours, soft or hard.\footnote{For more information I read this site: \url{http://linuxgazette.net/105/pitcher.html}} \subsubsection{Soft Symbolic Links} Soft links reference the path to another file, behaving as a signpost to the OS as to which file to open. To create a soft symbolic link between $orig\_file$ and $link\_file$ we type the following; \begin{lstlisting} ln -s /path/to/orig_file /path/to/link_file \end{lstlisting} \subsubsection{Hard Symbolic Links} Hard links reference the actual data, not the path, and so they will share the same permissions and data as the actual file. However hard links have to exist on the same file system as the original file, whereas soft links do not. To create a hard link between the files mentioned in the previous example we use; \begin{lstlisting} ln /path/to/orig_file /path/to/link_file \end{lstlisting} \subsection{Bind mounts} Now bind mounts behave in a similar way to symbolic links; however, there are some advantages/disadvantages to both. Read up and decide on which you need. An easy way to think of bind mounts is as mounting a directory instead of a hard drive (see section~\ref{sec:mount_hd}). To create a bind mount we simply type; \begin{lstlisting} mount --bind /path/to/orig_file /path/to/link_file \end{lstlisting} This works, but the link will be deleted on system restart. To get a persistent link we can add the following to $/etc/fstab$; \begin{lstlisting} /path/to/orig_file /path/to/link_file none bind 0 0 \end{lstlisting} \section{Software RAID - \textit{INCOMPLETE}} \section{Unattended Upgrades} Ubuntu servers come packaged with a package-based installation/updating system. As wonderful as this is, it may be tedious to type the update commands into the command line interface just to update the system. So there is another package for automatically updating these packages. \textit{Unattended Upgrades} does precisely this. To install this simply type the following: \begin{lstlisting} sudo apt-get install unattended-upgrades \end{lstlisting} We then need to configure the system to perform the upgrades, using the following command: \begin{lstlisting} sudo nano /etc/apt/apt.conf.d/50unattended-upgrades \end{lstlisting} Then uncomment the upgrades that you would like the system to install. \textit{(Comments are put into this file by using \textbf{'//'})} \subsection{Notifications} If you have set up \textit{ssmtp} or another Mail Transfer Agent \textit{(MTA)}, like in section~\ref{sec:ssmtp}, then an additional package can be used to send you notifications by email. Simply install the package using the following: \begin{lstlisting} sudo apt-get install apticron \end{lstlisting} Then edit the configuration file using: \begin{lstlisting} sudo nano /etc/apticron/apticron.conf \end{lstlisting} Now set the email address at the top. \begin{verbatim} EMAIL="your_email_address" \end{verbatim} \section{SysRq Key} The SysRq key is normally located at the top of your keyboard, on the same key as Print Screen. Now if you are on your brand new linux computer and it completely locks up, you can use the SysRq key to sort yourself out.
We can 'un-stick' our computer by holding~\keys{\Alt + SysRq} then pressing each key in order for a second or two, with a break in-between keys. In order; \begin{table}[!th] \centering \begin{tabular}{cc} \hline Key & Action\\ \hline \keys{r} & Keyboard into \textbf{r}aw mode\\ \keys{e} & Gracefully \textbf{e}nd all processes\\ \keys{i} & Kill all processes \textbf{i}mmediately\\ \keys{s} & Flu\textbf{s}h data to disk\\ \keys{u} & Remo\textbf{u}nts all file systems as read only\\ \keys{b} & Re\textbf{b}oots your computer\\ \hline \end{tabular} \caption{Table of SysRq keys and meaning} \label{tab:sysrq} \end{table} \section{Automatic log on to command line interface} \textbf{\textit{Important}} - It is wise to read the security chapter first and understand the ethos described in that chapter. Automatic log on is very convenient, and if your server is for your home then I am sure it will be fine. However bear in mind that once your account is logged in automatically then there is no password authentication stopping anyone from using your server. Although the sudo password will not be given, there are plenty of things that can be achieved without sudo privileges that you would not want happening to your server... \textit{Be Warned} After disabling the graphical interface we simply edit the configuration file for the first command line interface instance. The config file is stored here $/etc/init/tty1.conf$. In the last line of the file where the command \textit{'exec'} is called we add the following; \begin{lstlisting} -a [user1] \end{lstlisting} So that the line reads; \begin{lstlisting} exec /sbin/getty -a [user1] -8 38400 tty1 \end{lstlisting} And this will log the user in automatically to the command line interface (cli). \section{Disabling Graphical Display for Servers} Now I'm not a big fan of removing the graphical log in because every now and then I like to hook the server up to a monitor and check everything is OK... However, to improve the server's performance, disabling it will save a few bytes of RAM... and some CPU! Run the following commands to disable the graphical display. \begin{lstlisting} sudo nano /etc/default/grub \end{lstlisting} Uncomment the line $$GRUB\_TERMINAL=console$$ Then run; \begin{lstlisting} sudo update-grub \end{lstlisting} \section{Disable the Case power button} Sometimes you want to disable the power button, just because it is within reach of naughty fingers... Navigate to; $$ /etc/acpi $$ And run the following; \begin{lstlisting} sudo mv powerbtn.sh powerbtn.sh.orig sudo ln -s /bin/false /etc/acpi/powerbtn.sh \end{lstlisting} When your power button is pressed it calls the $/etc/acpi/powerbtn.sh$ script. By moving the script and creating a symbolic link to the $/bin/false$ file, we tell ubuntu to do nothing when the button is pressed. See Section~\ref{sec:symlink} for more information on symbolic links. \section{De-fragmentation of Hard Drive} Linux has a few tools built in to de-fragment (defrag) the hard drive, but first we should see if it is needed. Windows users will know that occasionally you will need to defrag as your hard drive fills up; as a linux user, it is claimed that you will not need to defrag at all, but it is always better to check and run it if need be. Run the following as super user to see how fragmented our hard drive is: \begin{lstlisting} e4defrag -c / \end{lstlisting} This command is for an ext4 file system. Once the command has completed it will show you how fragmented the hard drive is by giving a fragmentation score.
Using the legend at the bottom of the output, determine if you would benefit from a defrag procedure. If so, run the following as root: \begin{lstlisting} e4defrag / \end{lstlisting} \vspace*{0.5cm} \begin{minipage}{0.3\textwidth} \begin{flushleft} \includegraphics[width=0.9\textwidth]{./supportfiles/point.jpg} \end{flushleft} \end{minipage} \begin{minipage}{0.6\textwidth} \begin{flushright} \hrule \vspace*{0.25cm} \textit{For some background information visit this site: \url{http://jsmylinux.no-ip.org/performance/using-e4defrag/}} \vspace{0.25cm} \hrule \end{flushright} \end{minipage} \vspace*{0.5cm}
{ "alphanum_fraction": 0.7719282749, "avg_line_length": 37.8628762542, "ext": "tex", "hexsha": "7d3482c8cd2f72261073dafaeb40425a46320ef3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4e7de689ae536fab65f8e1706a927297f98594d0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamrees89/UbuntuServerGuide", "max_forks_repo_path": "chapters/advanced_basics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4e7de689ae536fab65f8e1706a927297f98594d0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamrees89/UbuntuServerGuide", "max_issues_repo_path": "chapters/advanced_basics.tex", "max_line_length": 576, "max_stars_count": null, "max_stars_repo_head_hexsha": "4e7de689ae536fab65f8e1706a927297f98594d0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamrees89/UbuntuServerGuide", "max_stars_repo_path": "chapters/advanced_basics.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2887, "size": 11321 }
\documentclass[10pt]{article}
\usepackage{arxiv}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{amsmath} % for \DeclareMathOperator
\usepackage{bm} % bold math
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{graphicx} % Required for including images
\usepackage{lipsum}
\newcommand{\real}{\mathbb{R}} \newcommand{\nat}{\mathbb{N}} \newcommand{\relu}{\mathrm{relu}} \newcommand{\der}{\mathrm{d}} \newcommand{\soft}{\mathrm{softmax}} \newcommand{\kldiv}{\mathcal{D}_{KL}} \newcommand{\bernoulli}{\mathrm{Bernoulli}} \newcommand{\poisson}{\mathrm{Poisson}} \newcommand{\E}{\mathrm{E}} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \title{Generative Outliers Detector Network} \author{ Jonathan Guymont \\ Montréal Institute of Learning Algorithms\\ Universite de Montréal\\ Montréal, Canada\\ \texttt{[email protected]}\\ } \begin{document} \maketitle \begin{abstract} Spam detection is commonly used for email and text message filtering. We propose an unsupervised method to learn the distribution of text data using a generative model and propose a method to classify spam based on the learned distribution. Our experiment shows that this method can achieve good precision, making it useful for either 1) speeding up the labeling process of a data set, or 2) labeling a certain percentage of the data in order to switch to a semi-supervised learning method, or 3) iteratively removing the detected spam from the data and retraining the model to relearn the distribution of the non-spam more efficiently, since there would be less spam in the data. \end{abstract} \keywords{Generative model \and Unsupervised classification \and NLP} \section{Introduction} \label{sec:intro} A standard approach for detecting spam is to use a dataset with labeled spam and non-spam examples. Then a classifier can be trained to discriminate between spam and non-spam messages. This approach requires a lot of time to label the data, so it would be interesting to see how an unsupervised method would perform, since unsupervised methods do not require labeled data. We propose to use a generative model to learn the distribution of a dataset assumed to be \emph{highly} unbalanced, where the underrepresented class is the spam. If we suppose that the spam and the non-spam text are coming from two significantly distinct data-generating processes, a model that is trained to learn the distribution of the entire data should mostly learn the distribution of the overrepresented class (here the non-spam). We used a variational autoencoder \cite{kingma2013auto} to learn the distribution of the data. Variational Autoencoders are generative models that can be seen as a two-part neural network: an encoder network $Q(z|x)$ that encodes the most relevant information of the features into a latent variable $z$, and a decoder network $P(x|z)$ that reconstructs the distribution of the inputs from the latent variable $z$. After the model is trained, the decoder network should be good at generating data that comes from the same distribution as the training set.
Thus, if our training data mostly contains non-spam messages, it should be good at reconstructing non-spam examples, but not so good at reconstructing spam. So if $x$ is a spam, its reconstruction density $P(x|z)$ should be low and we should classify it as spam. More formally, we find a threshold $\mathcal{T}$ such that when the density of an example is lower then $\mathcal{T}$, the example is classified as spam. We propose a simple method for determining $\mathcal{T}$ in section \ref{subsection:oultiers}. The following section contains theoretical background on VAE and can be skipped by the reader who is familiar with the subject. Section 3 describes an approach for unsupervised spam detection. Section 4 provides experimental results on the SMS Spam Collection Data Set\footnote{https://archive.ics.uci.edu/ml/datasets/sms+spam+collection} along with details on the experiment. Section 5 contains a discussion including a concluding remark. \section{Theoretical Background} \paragraph{Latent variable models} Variational autoencoders are generative models. Generative models are models that learn the underlying distribution of the data, e.g. $p(\mathbf{x})$ when the task is density estimation or $p(\mathbf{x}, \mathbf{y})$ in supervised classification. It is possible to express the prior $p(\mathbf{x})$ as a function of a latent variable model $\mathbf{z}$ \begin{equation} p(\mathbf{x}) = \int p_\theta (\mathbf{x}|\mathbf{z}) p(\mathbf{z}) \der\mathbf{z} \end{equation} This modelization allows us to generate samples from $p(\mathbf{x})$ by first sampling $\mathbf{z}$ from $p(\mathbf{z})$ and then sampling $\mathbf{x}$ from $p_\theta(\mathbf{x}|\mathbf{z})$. Since the latent space is large, it would be impossible to learn a good representation for $p_\theta(\mathbf{x}|\mathbf{z})$ without a good mapping $f\colon \mathcal{Z}\times \Theta \mapsto \mathcal{X}$, where $\mathcal{Z}$ is the latent space, $\theta\in\Theta$, and $\mathcal{X}$ is the input space. In other words, we need pairs $(\mathbf{z}, \mathbf{x})$ such that $\mathbf{x}$ is likely to be generated by $\mathbf{z}$ in order to find a good parametrization for the posterior of $\mathbf{x}$. It is also possible to modelize the prior of the latent as function of the data $\mathbf{x}$ \begin{equation} p(\mathbf{z}) = \int p(\mathbf{z}|\mathbf{x}) p(\mathbf{x}) \der\mathbf{x} \end{equation} During training, instead of sampling $\mathbf{z}$ from its prior, we can sample $\mathbf{z}$ by first sampling $\mathbf{x}$ from $p(\mathbf{x})$ and then sample $\mathbf{z}$ from $p(\mathbf{z}|\mathbf{x})$. Since $p(\mathbf{x})$ is unknown, we sample uniformly from the training set $D_n$ instead. Also, the posterior of the latent variable $p(\mathbf{z}|\mathbf{x})$ is intractable, so we use an approximation $q_\phi(\mathbf{z}|\mathbf{x})$. Finally, we can approximate the prior of the data by sampling $\mathbf{z}^{(1)}$,...,$\mathbf{z}^{(L)}$ from $q_\phi(\mathbf{z}|\mathbf{x})$ and then using $p(\mathbf{x})= \int p_\theta (\mathbf{x}|\mathbf{z}) p(\mathbf{z}) \der\mathbf{z}\approx \frac{1}{L}\sum_{i=1}^L p_\theta(\mathbf{x}|\mathbf{z}^{(i)})$. \paragraph{Variational lower bound} During training, both $p_\theta(\mathbf{x}|\mathbf{z})$ and $q_\phi(\mathbf{z}|\mathbf{x})$ are trained simultaneously $-$ the objective being the maximization of the marginal likelihood of the prior of the data $\log p(\mathbf{x})$. 
Let's consider the Kullback-Leibler divergence $\kldiv$ between $p(z|x)$ and $q(z|x)$ \begin{equation} \begin{split} \kldiv[q(z|x)||p(z|x)] =& \E_{z\sim q(z|x)}[ \log q(z|x) - \log p(z|x)]\\ =& \E_{z\sim q(z|x)}[ \log q(z|x) - \log p(x|z) - \log p(z) + \log p(x)]\\ =& \log p(x) + \E_{z\sim q(z|x)}[ \log q(z|x) - \log p(z)] - \E_{z\sim q(z|x)}\log p(x|z)\\ =& \log p(x) + \kldiv[q(z|x)||p(z)] - \E_{z\sim q(z|x)}\log p(x|z)\\ \end{split} \end{equation} If we move all the terms on the right except the marginal likelihood to the left we have \begin{equation} \begin{split} \log p(x) = \E_{z\sim q(z|x)}\log p(x|z)- \kldiv[q(z|x)||p(z)] + \kldiv[q(z|x)||p(z|x)]\\ \end{split} \end{equation} The term $\kldiv[q(z|x)||p(z|x)]$ is intractable, but because of Gibbs' inequality, we know that it is non-negative and thus we have \[ \log p(x) \geq \E_{z\sim q(z|x)}\log p(x|z)- \kldiv[q(z|x)||p(z)] \] Hence, the objective we want to maximize (the variational lower bound) is \[ \mathcal{L} = \E_{z\sim q(z|x)}\log p(x|z)- \kldiv[q(z|x)||p(z)] \] If we set the distribution of the latent variable to be a multivariate standard Gaussian, i.e. $\mathbf{z}\sim \mathcal{N}(\bm{0}, \mathbf{I})$, we can express the KL-divergence between $q(z|x)$ and $p(z)$ as \begin{equation} \kldiv[q(z|x)||p(z)] = \frac{1}{2}\sum_{j=1}^J \left(\mu_j^2 + \sigma_j^2 - 1 - \log \sigma_j^2\right) \end{equation} where $J$ is the dimension of the latent variable (see \cite{kingma2013auto} for details). Thus the loss function that we want to minimize is \begin{equation} \mathcal{L} = -\E_{z\sim q(z|x)}\log p(x|z) + \frac{1}{2}\sum_{j=1}^J \left(\mu_j^2 + \sigma_j^2 - 1 - \log \sigma_j^2\right) \end{equation} \section{Method} \subsection{Density estimation} We tried three different approaches to represent the distribution of the text messages. The first model aims to learn the distribution of the binarized representation of the bag of words (BOW), i.e. each feature is either one or zero depending on whether the word corresponding to the position of the feature is present in the sentence or not. The second one is to learn the distribution of the standard BOW representation. The last model learns the distribution of the \emph{bag of characters} in the text message. In this section, we describe the general form of the encoder and the different decoders used in each of these models. All models have two neural networks: an encoder network $f(\mathbf{x};\phi)$ that outputs the parameters of the posterior distribution of the latent variable and a decoder network $g(\mathbf{z};\theta)$ that outputs the parameters of the posterior distribution of the input. For all models, the prior of the latent variable is a standard Gaussian and its posterior is a Gaussian with diagonal covariance. A neural network $f(\cdot)$ outputs the mean and the log-variance of the posterior latent distribution, i.e. we have $\bm{\mu}_z,\log\bm{\sigma}_z^2 = f(\mathbf{x};\phi)$ with $q_\phi(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mathbf{z};\bm{\mu}_z, \bm{\sigma}_z^2)$.
In the simplest case, $f(\mathbf{x};\phi)$ is a one-layer MLP \begin{equation} \begin{split} \mathbf{h} =& \relu\left(\mathbf{W}_{xh}\mathbf{x}+\mathbf{b}_{xh}\right)\\ \bm{\mu}_z =& \mathbf{W}^{(1)}_{hz}\mathbf{h}+\mathbf{b}^{(1)}_{hz}\\ \log \bm{\sigma}_z^2 =& \mathbf{W}^{(2)}_{hz}\mathbf{h}+\mathbf{b}^{(2)}_{hz}\\ \phi =& \{\mathbf{W}^{(1)}_{hz}, \mathbf{b}^{(1)}_{hz}, \mathbf{W}^{(2)}_{hz}, \mathbf{b}^{(2)}_{hz}, \mathbf{W}_{xh}, \mathbf{b}_{xh}\} \end{split} \end{equation} We can sample the latent variable using $\mathbf{z}=\bm{\mu}_z + \bm{\sigma}_z\odot \bm{\epsilon}$ where $\bm{\epsilon}\sim \mathcal{N}(\bm{0}, \bm{I})$. In our experiments, we vary the form of the encoder by increasing the number of layers used to compute the last hidden layer $\mathbf{h}$, from which the parameters of the latent distribution are read. We now describe the structure of the different decoders we experimented with. \paragraph{Bernoulli decoder} The first approach is to model the distribution of the binarized bag of words representation of the text messages. Let $\mathcal{V}$ be the vocabulary of the training set and $\mathbf{x}\in \{0, 1\}^{|\mathcal{V}|}$ be the binarized bag of words representation of a text message. In this case, $\bm{\gamma}(\mathbf{z}) = g(\mathbf{z};\theta)$ and \begin{equation} \log p_\theta(\mathbf{x}|\mathbf{z}) = \sum_{i=1}^{|\mathcal{V}|}x_i\log\gamma_i(\mathbf{z}) + (1-x_i)\log(1-\gamma_i(\mathbf{z})) \end{equation} where $\gamma_i(\mathbf{z})$ can be seen as the parameter of $x_i \sim \bernoulli(x;\gamma_i(\mathbf{z}))$. \paragraph{Poisson decoder} Our second approach is to model the distribution of the (non-binarized) BOW representation of the text messages. Let $\mathcal{V}$ be the vocabulary of the training set and $\mathbf{x}\in \nat^{|\mathcal{V}|}$ be the bag of words representation of a text message. Let $\bm{\lambda}(\mathbf{z})=g(\mathbf{z};\theta)$ be the output of the Poisson decoder network, i.e. the parameters of the posterior of the data. In this scheme, the probability that the word $i$ occurs $x_i$ times is given by \begin{equation} \poisson(x_i;\lambda_i(\mathbf{z})) =e^{-\lambda_i(\mathbf{z})}\frac{\lambda_i^{x_i}(\mathbf{z})}{x_i!} \end{equation} Thus the log-likelihood is given by \begin{equation} \log p_\theta(\mathbf{x}|\mathbf{z}) = \sum_{i=1}^{|\mathcal{V}|}x_i\log\lambda_i(\mathbf{z}) - \lambda_i(\mathbf{z}) -\log(x_i!) \end{equation} where $\log(x_i!)$ can be approximated by Stirling's approximation $\log(x_i!) \approx x_i \log x_i - x_i + \frac{1}{2} \log(2 \pi x_i)$. When fitting the character distribution, the only difference is that $\mathbf{x}\in \nat^{|\mathcal{C}|}$ where $\mathcal{C}$ is the set of characters in the training corpus. Note that even though the posterior distribution in both decoders assumes independence between the features, the marginal distribution does not make this assumption, since all the parameters are read from the same latent variable. In other words, the features $x_i$ and $x_j$, for $i \neq j$, are conditionally independent given the latent variable $\mathbf{z}$, but they are not independent. \subsection{Outlier detection} \label{subsection:oultiers} A simple approach is to use the $k$-percentile of the learned distribution. Once the model is trained, we evaluate the density of each example $p(\bm{x}^{(1)}),...,p(\bm{x}^{(n)})$ using $p(\mathbf{x})\approx\frac{1}{L}\sum_{i=1}^L p_\theta(\mathbf{x}|\mathbf{z}^{(i)})$, where $\mathbf{z}^{(i)}$ is sampled from the posterior distribution of the latent variable.
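For concreteness, this Monte-Carlo estimate can be sketched as follows (an illustrative Python sketch, not our actual implementation; \texttt{encode} and \texttt{decode\_log\_prob} stand for the trained encoder and decoder networks and are placeholders):
\begin{verbatim}
import numpy as np

def estimate_density(x, encode, decode_log_prob, L=10):
    # Estimate p(x) by averaging p_theta(x|z) over L posterior samples.
    mu, log_var = encode(x)                    # parameters of q(z|x)
    densities = []
    for _ in range(L):
        eps = np.random.randn(*mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps   # reparameterization trick
        densities.append(np.exp(decode_log_prob(x, z)))
    return np.mean(densities)
\end{verbatim}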
Let $\tau$ be the index of the example such that $|\{\bm{x} : p(\bm{x}) < p(\bm{x}^{(\tau)})\}|=kn$ where $k\in (0, 1)$ should be smaller than the suspected proportion of spam in the data. Then the threshold is set to $\mathcal{T}=p(\bm{x}^{(\tau)})$. \section{Experiment} \subsection{Data} For our experiment, we used the SMS Spam Collection Data Set containing spam and non-spam labeled text messages (the labels are used only for testing the performance of our model). Table \ref{tab:data} shows some examples of the text messages in the data. We selected 10\% of all the spam examples and all of the non-spam examples. We split the data in two: 50\% of the data was used to develop the approach and 50\% was used to test it. \begin{table} \caption{Sample data} \centering \begin{tabular}{ll} \toprule label & text \\ \midrule ham & Tell them the drug dealer's getting impatient \\ ham & Oh... Lk tt den we take e one tt ends at cine lor... Dun wan yogasana oso can... \\ spam & SIX chances to win CASH! From 100 to 20000 pounds txt CSH11 and send to 87575.\\ spam & PRIVATE! Your 2003 Account Statement for 07808247860 shows 800 un-redeemed S.I.M. points. Call...\\ \bottomrule \end{tabular} \label{tab:data} \end{table} \subsection{Preprocessing} First note that the preprocessing was applied to both BOW models, but not to the \textit{bag of characters}. All punctuation was removed except for exclamation marks and question marks. We used regular expressions to replace email addresses and URLs by \texttt{<email>} and \texttt{<URL>} respectively. The regular expressions did not catch every pattern, so not all email addresses and URLs were replaced. We also tried to lemmatize the words, but as can be seen in table \ref{tab:data}, the spelling was often poor, making it difficult to lemmatize the text. One option would be to use a spell checker, but we decided not to use one, since some expressions may be more frequent in spam than in non-spam messages, or the other way around. For example, it is frequent to see \textit{u} instead of \textit{you} in non-spam texts; if this happens less frequently in spam texts, then correcting this spelling error may remove a good feature. \subsection{Architecture and training details} For all models, both the encoder and the decoder were parametrized with 2 fully connected layers of 128 hidden units each. A ReLU activation function was applied to each hidden layer. We also applied dropout and batch normalization after each hidden layer. The dropout rate was set to 0.5. The output activation function of the Bernoulli decoder is the element-wise sigmoid. For the Poisson decoder, the parameter of the posterior is given by \begin{equation} \bm{\lambda}(\mathbf{z}) = \exp(\bm{\alpha} \odot \tanh(\mathbf{o}_{\text{decoder}})) \end{equation} where $\mathbf{o}_{\text{decoder}}$ is the pre-activation output of the decoder and $\bm{\alpha}$ is a parameter that is read from the latent variable. Specifically, \begin{equation} \begin{split} \bm{\eta} =& \mathbf{W}_{z\alpha}\mathbf{z}+\mathbf{b}_{z\alpha}\\ \bm{\alpha} =& \begin{cases} 0, & \bm{\eta} \leq 0\\ \bm{\eta}, & 0<\bm{\eta}<3\\ 3, & \bm{\eta} \geq 3 \end{cases} \end{split} \end{equation} where the clipping is applied element-wise. The role of the parameter $\bm{\alpha}$ is to control the range of values of $\bm{\lambda}$. For the Bernoulli decoder, applying a sigmoid at the end of the network is sufficient because the parameter of a Bernoulli distribution is between 0 and 1. In the case of the Poisson distribution, $\lambda_i(\mathbf{z})$ is the expected value of the feature $x_i$ given $\mathbf{z}$.
Thus, it is not as straightforward as in the Bernoulli case to control the range of possible values, since $\bm{\lambda}$ is not upper bounded. The size of the latent variable is set to 20. We optimized all model parameters using \emph{Adam} \cite{2014arXiv1412.6980K} with a learning rate chosen from the set $\{0.00003, 0.0001, 0.001\}$, $\beta_1\in \{0.5, 0.9\}$ and $\beta_2 \in \{0.99, 0.999\}$. We used a batch size of 64 and optimized the model for 1000 iterations. The hyperparameters were selected so that the loss on the training set was minimized. When predicting whether an example was spam or not, the threshold was set to the density corresponding to the $2.5\%$ percentile. \subsection{Results} Figure \ref{fig:result} and table \ref{table:result} summarize the results of the experiment. We used precision and recall as our evaluation metrics because the number of spam messages is significantly lower than the number of non-spam messages, so a model could achieve about 90\% accuracy by only predicting non-spam, making accuracy a not very informative metric. The precision is defined as the number of true spams detected over the total number of examples classified as spam, including the misclassified ones. The recall is defined as the number of true spams detected over the total number of spams in the data. We were mostly interested in achieving a good precision score, since this would allow us to remove the real spam from the data and iterate. \begin{figure}[h] \centering \begin{tabular}{cccc} \includegraphics[width=0.4\textwidth]{../figures/kldiv_03.png} & \includegraphics[width=0.4\textwidth]{../figures/logp_03.png}\\ \includegraphics[width=0.4\textwidth]{../figures/precisions_03.png} & \includegraphics[width=0.4\textwidth]{../figures/recall_03.png}\\ \end{tabular} \caption{\textbf{Top left.} KL-divergence over all training iterations for the 3 models. \textbf{Top right.} Negative log-likelihood over all training iterations for the 3 models. \textbf{Bottom left.} Precision of the models, defined as the number of true spams detected over the total number of examples classified as spam, including the misclassified ones. \textbf{Bottom right.} Recall of the models, defined as the number of true spams detected over the total number of spams in the data.} \label{fig:result} \end{figure} \begin{table}[h] \caption{Test results of the spam detection and the modeling of the text distribution. The test data represents 50\% of the entire data set. The confidence intervals are due to the randomness caused by the sampling of the latent variable $\mathbf{z}$ when evaluating the probability.} \centering \begin{tabular}{lrrrr} \toprule model & precision & recall & $\log p(\mathbf{x}|\mathbf{z})$ & $\kldiv$\\ \midrule BOW & $0.62\pm 0.006$ & $0.16\pm 0.004$ & $-258.62\pm 0.33$ & $8.27\pm 2.580$\\ Binary BOW & $0.64\pm 0.003$ & $0.22\pm 0.003$ & $-48.89\pm 0.03$ & $1.25\pm 0.000$\\ BOC & $0.79\pm0.005$ & $0.26\pm 0.004$ & $-279.95\pm 0.41$ & $14.17\pm 4.560$ \\ \bottomrule \end{tabular} \label{table:result} \end{table} Note that there are confidence intervals on the results in table \ref{table:result} due to the randomness caused by the sampling of the latent variable $\mathbf{z}$ when evaluating the probabilities. We can see in the top plots of figure \ref{fig:result} that the Bernoulli distribution (binary BOW model) was easier to learn than the Poisson's, since both its KL-divergence and its log-likelihood converge to zero very fast. We suspect this is due to the simplicity of the Bernoulli distribution.
Also, the precision of the Bernoulli model reaches its maximum very fast. The recall seems to stabilize around 0.2, which is close to the percentile threshold of 0.25 that we used. The best precision on the spam detection task was achieved by the bag of characters model. This may be due to the large number of features of the BOW model (4588) compared to the number of training examples (2671), especially since many features corresponded to the same word but, because of different spellings, had different representations. \section{Discussion and conclusion} The proposed method for spam detection could potentially be used for any anomaly detection related task. In particular, it is interesting to see that the character-level distribution yields the best performance, since it does not require preprocessing. Text preprocessing is often language-specific, making language models that rely heavily on feature engineering hard to scale across different languages. Since the character-level model does not require feature design, it would be interesting to see if it scales across languages. We note that it would be necessary to investigate how sensitive the results are to the choice of threshold. In particular, we think the approach would benefit greatly from a heuristic to find the threshold that is intrinsic to the data and does not depend on a choice of hyperparameter like the percentile we used in this work. Evaluating the performance of our approach is highly expensive, since it requires labeling a part of the data, and if we start searching for a good percentile we can quickly end up doing semi-supervised learning. Ultimately, we would be interested to see how an iterative approach would perform: each time the distribution is learned (i.e. the training is stopped according to a criterion on the loss), the detected spams would be removed and the model would be retrained on the remaining data. This step would be repeated until no or almost no spam is detected. It would also be interesting to see how effective the approach would be if it were followed by a semi-supervised method. More specifically, one could start by labeling a part of the examples using our approach and then switch to a semi-supervised approach like the one proposed by \cite{NIPS2014_5352}. \bibliographystyle{unsrt} \bibliography{references} \end{document}
{ "alphanum_fraction": 0.7391588581, "avg_line_length": 99.7608695652, "ext": "tex", "hexsha": "a7d7cd90880c902657944631866463970ffb3f3d", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2021-11-16T19:07:26.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-24T06:58:35.000Z", "max_forks_repo_head_hexsha": "f3406da47abd2b5db72d0be5f51910143da3fe9e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "szrlee/vae-anomaly-detector", "max_forks_repo_path": "docs/report/report.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "f3406da47abd2b5db72d0be5f51910143da3fe9e", "max_issues_repo_issues_event_max_datetime": "2020-01-27T16:44:04.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-27T09:42:43.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "szrlee/vae-anomaly-detector", "max_issues_repo_path": "docs/report/report.tex", "max_line_length": 1720, "max_stars_count": 61, "max_stars_repo_head_hexsha": "f3406da47abd2b5db72d0be5f51910143da3fe9e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "szrlee/vae-anomaly-detector", "max_stars_repo_path": "docs/report/report.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-07T23:31:45.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-25T02:24:14.000Z", "num_tokens": 6552, "size": 22945 }
\SetAPI{J-C} \section{database.name} \label{configuration:DatabaseName} \ClearAPI The name of the database. If \ref{configuration:DatabaseConnection} is not set, this field is not evaluated. However, it is much more convenient to use single properties than \ref{configuration:DatabaseConnection}.
%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\ \endhead \hline \type{com.koch.ambeth.persistence.pg.PostgresTestDialect} & \prettyref{module:Persistence} \\ \hline \type{com.koch.ambeth.persistence.pg.PostgresTestDialect} & \prettyref{module:Persistence} \\ \hline \end{longtable}
%% GENERATED USAGE REFERENCE END
\begin{lstlisting}[style=Props,caption={Usage example for \textit{database.name}}]
database.name=
\end{lstlisting}
{ "alphanum_fraction": 0.7695167286, "avg_line_length": 40.35, "ext": "tex", "hexsha": "fe866db208228e746a295aadbd39b4c890aac357", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/configuration/DatabaseName.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/configuration/DatabaseName.tex", "max_line_length": 214, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/configuration/DatabaseName.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 235, "size": 807 }
%!TEX root = ../thesis.tex
%*******************************************************************************
%****************************** Conclusion Chapter ****************************
%*******************************************************************************
\chapter{Conclusion}
% **************************** Define Graphics Path **************************
\ifpdf \graphicspath{{Conclusion/Figs/Raster/}{Conclusion/Figs/PDF/}{Conclusion/Figs/}} \else \graphicspath{{Conclusion/Figs/Vector/}{Conclusion/Figs/}} \fi
This study set out to explore the API landscape in the EU public sector. The purpose of the study has been to identify areas where APIs are enablers of governments' digital transformation. Areas of specific focus include aspects such as sources of open data, differences from APIs in the private sector, and the future trends of APIs. The report provides a useful baseline overview of APIs, considering what they are used for, the different types of API that can be leveraged, and the API standards that exist. A glossary of terms and API types in the appendices provides further resources for the target audience. The report then goes on to consider how APIs are used in the public sector. The findings showed that APIs are used by the public sector to help it achieve its goals in four main ways: \begin{itemize} \item Enabling ecosystems. \item Overcoming complex integration of large systems. \item Supporting open government initiatives. \item Enabling innovation and economic growth. \end{itemize} The use of APIs has its challenges too. This study highlighted security and enhanced EU regulation around privacy as considerations for API owners. An API is another gateway into a computer network and associated data, and requires the security features and ongoing maintenance that such an interface deserves. The lack of standards does in some way hinder interoperability both internally and externally to government agencies. It is forcing organizations to develop their own sets of guidelines to ensure alignment, and the UK Government has recently released such guidance to all API developers~\citep{gov_uk_api}. However, the use of API gateways and the predominance of RESTful architectures are in some way diluting the pressure for a standard. Differences with the private sector were also considered. The report found that, to date, government has harnessed the power of the API to make data more open and available to its citizens, and to itself. The benefits range from increasing transparency to enhanced efficiency of the existing service models. The private sector has harnessed APIs for a more transformative and disruptive end, giving rise to completely different business models, such as those which have made Netflix and Amazon leaders in their fields. Our research also considered the future of government, which will to some extent be built on the API as a key enabler. As the demands of government move forward, APIs appear well positioned to keep pace and provide the access points needed to enable fast and secure data sharing to support government's needs, from law and order to healthcare and the environment. Our study provides brief examples of government solutions with APIs at their core, such as Estonia's X-Road, the UK's TfL, and Greece's Digital Solemn Declaration/Authorization issuing system. We truly believe that our study has given some useful insight into how APIs as a technology can contribute to governments' digital transformation.
{ "alphanum_fraction": 0.7470389171, "avg_line_length": 60.1016949153, "ext": "tex", "hexsha": "7be5eff40a201cae20874f893ca1ff40e17c3713", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a3411ea3a95fabb69be50607360346221cac377c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "milouk/Bachelor-thesis", "max_forks_repo_path": "Conclusion/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a3411ea3a95fabb69be50607360346221cac377c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "milouk/Bachelor-thesis", "max_issues_repo_path": "Conclusion/conclusion.tex", "max_line_length": 332, "max_stars_count": null, "max_stars_repo_head_hexsha": "a3411ea3a95fabb69be50607360346221cac377c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "milouk/Bachelor-thesis", "max_stars_repo_path": "Conclusion/conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 686, "size": 3546 }
\subsection{Dynamic programming} $$ \dot{\mathbf{x}} = \begin{bmatrix} \dot{x_1}\\ \dot{x_2} \end{bmatrix} = \begin{bmatrix} x_2\\ -0.4x_1 -0.2x_2^2 \end{bmatrix} + \begin{bmatrix} 0\\ 1 \end{bmatrix}u $$ $$ \begin{bmatrix} x_1(k+1)\\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} x_2(k)\\ -0.4x_1(k) -0.2x_2^2(k) + u(k) \end{bmatrix} \Delta t + \begin{bmatrix} x_1(k)\\ x_2(k) \end{bmatrix} $$ In the MATLAB code, the resulting control law is saved in a .mat file, so it can be reused for other initial conditions very quickly without recomputing the solution. \begin{figure}[H] \caption{Dynamic Programming} \centering \includegraphics[width=11.5cm]{../Figure/Q4/DP.png} \end{figure}
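For reference, the backward recursion that the dynamic programming code evaluates can be written generically as
$$ J_N(x) = h\big(x(N)\big), \qquad J_k(x) = \min_{u}\Big[ g(x, u) + J_{k+1}\big(x + f(x, u)\,\Delta t\big) \Big] $$
where $f$ denotes the continuous-time dynamics above; the terminal cost $h$ and the stage cost $g$ are placeholders here, since the cost function used for this problem is not shown in this excerpt.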
{ "alphanum_fraction": 0.6692426584, "avg_line_length": 20.8709677419, "ext": "tex", "hexsha": "fb5866cb07e18d56cc8620892fa731a5aba2f264", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f384c9e4c5ddc45b2bbab0f0bb9f666f64eece53", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alibaniasad1999/Optimal-Control", "max_forks_repo_path": "HW/HW3/Report/Q4/Q4_c.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f384c9e4c5ddc45b2bbab0f0bb9f666f64eece53", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alibaniasad1999/Optimal-Control", "max_issues_repo_path": "HW/HW3/Report/Q4/Q4_c.tex", "max_line_length": 133, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f384c9e4c5ddc45b2bbab0f0bb9f666f64eece53", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alibaniasad1999/Optimal-Control", "max_stars_repo_path": "HW/HW3/Report/Q4/Q4_c.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-09T13:16:54.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-09T13:16:54.000Z", "num_tokens": 273, "size": 647 }
\section{Radburny ``Burnie'' Cinders} \begin{center} \includegraphics[width=80mm]{./content/img/burnie.png} \begin{figure}[h] \end{figure} \end{center} \subsection*{Details} \noindent Race: Human (Dwarf by adoption) \\ Class: Who the fuck can know \\ Age: 60 \\ Status: Active \subsubsection{Background} Burnie is a bookburner who was raised by dwarves. \subsubsection{Personality and Traits} He is a grouchy old man, but quite friendly really. \subsubsection{Relationships} Burnie has a son, Crow. \subsubsection{Burnie's Story} \begin{DndSidebar}{text} dfhgsfdghfsgh \end{DndSidebar} \smallskip \bigskip
%\begin{DndSidebar}[float=!b]{Behold the DndSidebar!}
%dfhgsfdghfsgh
%\end{DndSidebar}
\clearpage
{ "alphanum_fraction": 0.7493074792, "avg_line_length": 15.6956521739, "ext": "tex", "hexsha": "ad0898408a0204f7a8a956dc634d383a894600fb", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-04T09:40:24.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-04T09:40:24.000Z", "max_forks_repo_head_hexsha": "23763424cf31c50618bc6ddeefe2196cdf6be974", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mcgi5sr2/velterraBook", "max_forks_repo_path": "content/chars/burnie.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "23763424cf31c50618bc6ddeefe2196cdf6be974", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mcgi5sr2/velterraBook", "max_issues_repo_path": "content/chars/burnie.tex", "max_line_length": 54, "max_stars_count": null, "max_stars_repo_head_hexsha": "23763424cf31c50618bc6ddeefe2196cdf6be974", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mcgi5sr2/velterraBook", "max_stars_repo_path": "content/chars/burnie.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 231, "size": 722 }
\documentclass[11pt,]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \usepackage{xltxtra,xunicode} \else \usepackage{fontspec} \fi \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \newcommand{\euro}{€} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1.0in]{geometry} \ifxetex \usepackage[setpagesize=false, % page size defined by xetex unicode=false, % unicode breaks when used with xetex xetex]{hyperref} \else \usepackage[unicode=true]{hyperref} \fi \hypersetup{breaklinks=true, bookmarks=true, pdfauthor={}, pdftitle={NAME OF THIS STUDY}, colorlinks=true, citecolor=blue, urlcolor=blue, linkcolor=magenta, pdfborder={0 0 0}} \urlstyle{same} % don't use monospace font for urls \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{\textbf{NAME OF THIS STUDY}} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{} \preauthor{}\postauthor{} \date{} \predate{}\postdate{} \usepackage{helvet} % Helvetica font \renewcommand*\familydefault{\sfdefault} % Use the sans serif version of the font \usepackage[T1]{fontenc} \usepackage[none]{hyphenat} \usepackage{setspace} \doublespacing \setlength{\parskip}{1em} \usepackage{lineno} \usepackage{pdfpages} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi \begin{document} \maketitle \vspace{35mm} Running title: INSERT RUNNING TITLE HERE \vspace{35mm} Your Name Here\^{}1, Joeseph P. Schmo\^{}2, Sally J. Rivers\^{}1, Patrick D. Schloss\textsuperscript{1\(\dagger\)} \vspace{40mm} \(\dagger\) To whom correspondence should be addressed: \href{mailto:[email protected]}{\nolinkurl{[email protected]}} 1. Department of Microbiology and Immunology, University of Michigan, Ann Arbor, MI 48109 2. 
Other department contact information \newpage \linenumbers \subsection{Abstract}\label{abstract} \newpage \subsection{Introduction}\label{introduction} \subsection{Results and Discussion}\label{results-and-discussion} \subsection{Conclusions}\label{conclusions} \subsection{Materials and Methods}\label{materials-and-methods} \newpage Insert figure legends with the first sentence in bold, for example: \textbf{Figure 1. Number of OTUs sampled among bacterial and archaeal 16S rRNA gene sequences for different OTU definitions and level of sequencing effort.} Rarefaction curves for different OTU definitions of Bacteria (A) and Archaea (B). Rarefaction curves for the coarse environments in Table 1 for Bacteria (C) and Archaea (D). \newpage \subsection*{References}\label{references} \addcontentsline{toc}{subsection}{References} \hypertarget{refs}{} \end{document}
{ "alphanum_fraction": 0.746395806, "avg_line_length": 28.0858895706, "ext": "tex", "hexsha": "f8988e327d82189170b8aa12bc6e8b36544ac933", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6a247f5aa54380a03c35cc8c7bd7f840485b91f9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "luongphekidz07/Kozich_ReAnalysis_AEM_2013_2", "max_forks_repo_path": "submission/manuscript.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "6a247f5aa54380a03c35cc8c7bd7f840485b91f9", "max_issues_repo_issues_event_max_datetime": "2017-05-10T15:36:06.000Z", "max_issues_repo_issues_event_min_datetime": "2017-05-10T15:33:52.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "luongphekidz07/Kozich_ReAnalysis_AEM_2013_2", "max_issues_repo_path": "submission/manuscript.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "6a247f5aa54380a03c35cc8c7bd7f840485b91f9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "luongphekidz07/Kozich_ReAnalysis_AEM_2013_2", "max_stars_repo_path": "submission/manuscript.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1388, "size": 4578 }
\chapter{Existing Methods}
% Chapter Introduction
\lipsum[2-3]
% Section 1
\section{Sub Section 1}
\lipsum[2-4]
% Section 2
\section{Sub Section 2}
% Figure
\begin{figure}[H] \centering \includegraphics[width=400pt]{Images/device.png} \caption{Block diagram of the device} \end{figure}
% Conclusion
\lipsum[2]
{ "alphanum_fraction": 0.5829145729, "avg_line_length": 18.9523809524, "ext": "tex", "hexsha": "11fc0588461d087d9c28ae49142e1d36630fbb0e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "39319644eca5640a679c8db707f6085be8f3b16c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Samanyu13/LaTeX-Templates", "max_forks_repo_path": "BTech_Seminar_Report_Template/Files/Content/chapter2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "39319644eca5640a679c8db707f6085be8f3b16c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Samanyu13/LaTeX-Templates", "max_issues_repo_path": "BTech_Seminar_Report_Template/Files/Content/chapter2.tex", "max_line_length": 56, "max_stars_count": null, "max_stars_repo_head_hexsha": "39319644eca5640a679c8db707f6085be8f3b16c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Samanyu13/LaTeX-Templates", "max_stars_repo_path": "BTech_Seminar_Report_Template/Files/Content/chapter2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 119, "size": 398 }
\documentclass[5p,authoryear]{elsarticle} \makeatletter \def\ps@pprintTitle{% \let\@oddhead\@empty \let\@evenhead\@empty \let\@evenfoot\@oddfoot} % Supprimer le bas de page ELSEVIER \makeatother \usepackage[utf8]{inputenc} % En unicode \usepackage[T1]{fontenc} \usepackage[english]{babel} \usepackage[babel=true]{csquotes} % permet de faire \enquote{a} (« a ») \usepackage[fleqn]{amsmath} % pour certains signes mathématiques \usepackage{amsthm} % Pour \begin{gather} \usepackage{booktabs} % pour \toprule (un style de tableau) \usepackage{multirow} % Pour colonnes multiples des tableaux \usepackage{amssymb} % Pour \leqslant (<=, >=) \usepackage{float} \usepackage{hyperref} % \usepackage[english]{cleveref} % adding Code Blocking \usepackage{listings} \usepackage{color} \definecolor{dkgreen}{rgb}{0,0.6,0} \definecolor{gray}{rgb}{0.5,0.5,0.5} \definecolor{mauve}{rgb}{0.58,0,0.82} \lstset{frame=tb, language=Java, aboveskip=3mm, belowskip=3mm, showstringspaces=false, columns=flexible, basicstyle={\small\ttfamily}, numbers=none, numberstyle=\tiny\color{gray}, keywordstyle=\color{blue}, commentstyle=\color{dkgreen}, stringstyle=\color{mauve}, breaklines=true, breakatwhitespace=true, tabsize=3 } %\bibliographystyle{elsarticle-num} \bibliographystyle{elsarticle-harv} \usepackage{fancyhdr} \pagestyle{fancy} \lhead{MSDS 458 - SEC 56} \rhead{Lee, J.} \begin{document} \begin{frontmatter} \title{Focused Web Crawler: \\NBA Team Specific News Articles} \author{Jason Lee} \address{Northwestern University, SPS \\Natural Language Processing \\2020SP MSDS 453-56} \begin{abstract} Information is power in the sports betting industry. When teams' news hits the web, betting syndicates need to be able to react with speed before market prices adjust. A web scraper, or spider, can crawl the internet collecting relevant information. The goal of this project is to develop a spider utilizing the Scrapy and Selenium packages in Python to collect National Basketball Association (NBA) team specific news. The spider will crawl from page to page scraping, parsing, and saving the relevant articles. \end{abstract} \begin{keyword} Natural Language Processing (NLP) \sep NBA \sep Sports Betting \sep Web Scraping \sep Python \end{keyword} \end{frontmatter} %\linenumbers \section{Introduction}\label{introduction} The sports betting markets are presumed to be efficient markets by the time the closing price on each game is locked in \citep{market}. Fortunately for professional sports bettors, there is a difference between the opening line and the closing line for each game where there is opportunity to exploit inefficiencies for financial gain. Many of these lines move when new information about a team is released to the public. There is a delay between when critical news comes out and when sportsbooks adjust their lines. Professional sports bettors are able to use the lag in the shifting lines advantageously by reacting the fastest with new information. Professional sports bettors are able to do this by utilizing automated bots or web crawlers, also known as Spiders. These Spiders scour the internet searching for any relevant sporting news that can add value to the sports bettors. This project focuses on building one of these Spiders to crawl the internet searching for news articles relating to any National Basketball Association (NBA) team. The spider will then parse and save important information that can then be used to inform sports betting decisions. This project will be sponsored by A.I. 
Sports, and the final product will be utilized by their company to consult with various betting syndicates \citep{aisports}. This project will also be the starting point for future natural language processing (NLP) projects. The focused web crawlers will collect hundreds of news articles that together create a complete corpus. The corpus will be broken down using document vectorization methods to understand relationships between the articles in the corpus. Document classification models will be built on the corpus, and unsupervised machine learning techniques and multivariate analysis will be used to better understand the themes and sentiments of each article. Another future project will consist of developing a document summarization algorithm that can quickly generate accurate summarized outputs for each article. There are several desired outcomes from this first project: \begin{enumerate} \item Create a fully functional and focused Python-based web crawler. \item Understand how to inspect websites to extract the right information. \item Create a complete corpus of NBA team news articles for future natural language processing (NLP) projects. \end{enumerate} \section{Literature Review} \label{lit_rev} The process of extracting and storing information from the internet in an automated fashion is a highly valuable skill that requires a keen attention to detail. Writing flexible code that is able to handle potential errors is crucial for a successfully programmed focused web crawler. Three researchers with ties to IBM dedicated time to formulating a list of eight pivotal steps for creating a focused web crawler \citep{focused}: \begin{enumerate} \item Canonical Taxonomy Creation \item Example Collection \item Taxonomy Selection and Refinement \item Interactive Exploration \item Classifier Training \item Resource Discovery \item Distiller Development \item Evaluation and Feedback \end{enumerate} The beginning step is obvious: create a topic for the focused web crawler to search. The next step is to collect example websites containing information about the chosen topic. During the example collection process, the following step naturally takes shape as the topic becomes refined. Certain websites and links may be marked as "good" or "bad" during both the example collection and refinement steps \citep{focused}. Once the topic has been narrowed down, an interactive exploration of the websites begins. A key feature of any focused web crawler is a specific starting point (or multiple starting points) and a designated stopping point \citep{crawl}. A focused web crawler must have a purpose, which requires the user to perform preparatory work by traversing the internet to find key domains with relevant information. Once these starting domains are selected, the Spiders will be able to crawl through each link, moving from webpage to webpage. However, there are many links on relevant websites that might lead to irrelevant material or advertisements. If the web crawler is left on its own, it may end up falling down the endless black hole that is the internet. There are many ways to control the web crawler by setting program hyperparameters. The Spiders can be given strict instructions to remain only on specific domains. The Spiders can also be stopped after descending a predetermined number of pages deep. Another program hyperparameter is the total number of pages to download, forcing the web crawler to stop immediately when it reaches the maximum limit \citep{crawl}.
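In a Scrapy-based crawler (the framework used later in this project), these kinds of limits can be expressed directly as spider settings. The snippet below is only an illustrative sketch and is not taken from the project's code; the spider name, domain, start URL, and limit values are placeholders:
\begin{lstlisting}
import scrapy

class LimitedSpider(scrapy.Spider):
    # Illustrative spider that restricts where and how far the crawl may go.
    name = "limited_news"
    allowed_domains = ["www.nba.com"]          # stay on this domain only
    start_urls = ["https://www.nba.com/hawks/news"]

    custom_settings = {
        "DEPTH_LIMIT": 3,                      # follow links at most 3 hops deep
        "CLOSESPIDER_PAGECOUNT": 500,          # stop after 500 downloaded pages
    }

    def parse(self, response):
        # Placeholder callback: yield links for Scrapy to follow.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
\end{lstlisting}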
Step 5 in the IBM researchers' list, training the classifier, helps the web crawler remain on a focused path. A classifier is a model that determines how relevant a particular website is to the chosen topic \citep{focused}. The classifier allows for filtering websites at a scale that would not be feasible manually on larger projects. The classifier training takes into account the websites and domains manually marked as "good" and "bad" in the previous steps. \begin{figure}[!h] \centering \includegraphics[width=3.4in]{figures/focused_webcrawler_diagram.png} \caption[]{Block diagram of the focused crawler showing how the web crawler, classifier, and distiller are integrated.} \label{web_flow} \end{figure} The final steps bring the entire project together. The user reviews the resources that were collected by the focused web crawler as it scrapes the internet. The user is able to provide feedback that is cycled back into the classifier, retraining it and improving the outputs. A distillation algorithm can also be run after the focused web crawler has been working for some time. The distiller is able to find key domains that are known as the authorities on the given topic. Figure \ref{web_flow} visually depicts the eight steps seamlessly flowing together in a diagram used by the IBM researchers, with the classifier, distiller, and crawler as the three focal components \citep{focused}. \subsection{Scrapy}\label{Scrapy} Scrapy is a Pythonic framework designed specifically for web crawling and scraping \citep{scrapy}. Figure \ref{Scrapy_Framework} shows how Scrapy behaves, which has some similarities to the IBM researchers' framework in Figure \ref{web_flow}. The Spiders send the initial request to the Scrapy Engine, where the Engine checks the scheduler before dispatching the Spiders to the internet. The Spiders send their requests to the internet and receive a response. The Spiders have predetermined items they are sent to collect. The Spiders internally scan the response, sending each item they find through the Item Pipeline, where it is stored. The Item Pipeline is where the items can be parsed, transformed, and formatted before they are saved to the desired database or file. \subsection{Selenium}\label{Selenium} Many newer websites are written with JavaScript and Angular applications embedded into the HTML code to allow the page to operate in a smooth, reactive fashion \citep{angular}. However, while the user experience is enhanced, these websites cause serious issues for most web crawlers as they try to extract the page contents. From the perspective of the Spiders, there are only empty containers where the contents should be, because the webpage has not actually queried the server and loaded its content. Selenium is a tool that launches a WebDriver with a designated browser to allow the Spider to view a website the same way a user would view it. Figure \ref{Selenium_Framework} shows the framework that Selenium operates within. Many different types of browsers can be used, and the HTTP requests to the server and the JavaScript code run within Selenium. This allows the spiders to "click" buttons, scroll, and extract the information on these pages.
\begin{figure}[!htb] \centering \includegraphics[width=3.4in]{figures/Selenium_Framework.png} \caption[]{Illustration of the Selenium Framework} \label{Selenium_Framework} \end{figure} \section{Methodology}\label{meth} The methodology implemented during this project is as follows. \subsection{Topic Selection and Refinement}\label{chat} The canonical topic chosen for this project is the National Basketball Association (NBA). More specifically, news articles discussing team match-ups, injuries, and general news that might give insight into how a team will perform in its upcoming games. Gathering relevant examples was an easy task, almost too easy. There are local beat writers, national news coverage, bloggers, and the occasional deep-dive investigative report from media companies. Unfortunately, the main sports news companies, like ESPN, FOX Sports, Turner, Bleacher Report, etc., do not cover teams equally or fairly. There are 30 NBA teams and each team has a dedicated team website on NBA.com. An assumption in using the official team-specific websites is that each team's news will be biased towards its own team. This is an acceptable bias for this project because of the consistency across all of the teams. The web crawlers will begin on each team's official NBA website and move from post to post scraping each article. \subsection{Interactive exploration of the web}\label{exploration} \begin{figure}[!htb] \centering \includegraphics[width=3.4in]{figures/Hawks_News.png} \caption[]{Atlanta Hawks News homepage} \label{hawks news} \end{figure} This portion of the project was extremely tedious. There were many issues trying to program the web crawlers. The Atlanta Hawks are the first team alphabetically in the NBA, making them the experimental guinea pig. Their web page is shown in Figure \ref{hawks news}. \begin{figure}[!htb] \centering \includegraphics[width=3.0in]{figures/post_title.png} \caption[]{HTML Class = 'post\_\_title' showing where the web crawler can extract the information on the news article's title} \label{Post Title} \end{figure} The first step is to open the developer tools and inspect the raw HTML code, searching for the XPath or CSS class name or id tag for the section of the website that will be scraped. Figure \ref{Post Title} shows an example of the article's title class for the web crawler to find. Looking over the website's code and finding the class names, this appears to be a fairly simple and straightforward task for the focused web crawler to complete. \begin{figure}[!htb] \centering \includegraphics[width=3.0in]{figures/post_link.png} \caption[]{The HTML hyperlink is contained within the 'post\_\_title' div, allowing the spider to crawl to the actual news article's page to scrape the entire document} \label{Post Link} \end{figure} However, this did not work at all. In Figure \ref{Post Link}, there is a tag "\_ngcontent", meaning that this information is rendered inside an Angular JavaScript application. This issue, and coming up with a workable solution, dominated the majority of the time allotted to this project. Eventually, the Spiders were able to collect the necessary information by using both Selenium and Scrapy together.
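A minimal sketch of this workaround is shown below. It is illustrative only: the CSS selectors, URL, and wait times are assumptions rather than the exact values used in the project's spider.
\begin{lstlisting}
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")        # run Chrome without a window
driver = webdriver.Chrome(options=options)

driver.get("https://www.nba.com/hawks/news")
time.sleep(5)                             # let the Angular app render

for _ in range(8):                        # expand the article list
    driver.find_element(By.CSS_SELECTOR, "button.load-more").click()
    time.sleep(2)

# Collect the article links once the page is fully populated.
links = [a.get_attribute("href")
         for a in driver.find_elements(By.CSS_SELECTOR, "div.post__title a")]
driver.quit()
\end{lstlisting}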
\section{Computational Experiment} The entire Python code for this project's focused web crawler program is attached to this paper, or can be reproduced by cloning the project's \href{https://github.com/papagorgio23/NBA_News_Spiders}{GitHub Repository} at this URL:\\ \href{https://github.com/papagorgio23/NBA_News_Spiders}{github.com/papagorgio23/NBA\_News\_Spiders}\\ To run the program, simply type the following commands into the command line from the "NBA\_News\_Spiders" project home directory:
\begin{lstlisting}
cd NBA_News
python release_spiders.py
\end{lstlisting}
This script will unleash the spiders onto hundreds of NBA news articles where they can scrape relevant information. The program will output the articles in an itemized JSON Lines file, where each line in the file contains a single news article and its key associated details. Those details include the team, URL, date, tags, and the article. \subsection{Release Spiders Python Script}\label{release} The "release\_spiders.py" file is an adapted version of the "run-articles-spider.py" file from the "WebFocusedCrawlWorkV001" directory \citep{sample-code}. The file begins by creating a folder called "nba\_articles" where the downloaded HTML web pages will be saved by the Spiders as they crawl from site to site. The structure of the directory is then output to the terminal, ensuring that everything is in order before the spiders are released into the wild. The final lines of code call on Scrapy to launch the focused web crawlers and save the items in a JSON Lines formatted file. \subsection{Scrapy with Selenium} The workhorse script for this focused web crawler program is found in the spiders folder with the title "articles\_spider.py". The Spider named "ArticlesSpider" is created with instructions to perform two sequential tasks. Typically, when a Scrapy Spider is released, it is given a list of URLs to start its crawl. What is unique about this particular Spider is that the starting URLs come from a function call instead of a static list of URLs. This function initializes a Selenium webdriver with a headless Chrome browser. Through Selenium, the Spider loops through each team's news website, clicking the "load more" button eight times to expand the number of news articles displayed on the web page to a maximum of 96 articles. Once the articles are all loaded into the browser, the Spider collects the URLs for every single news article. This complete list, with hundreds of URLs from every team, is then passed back to the Scrapy Spider and the Selenium browser closes. The Scrapy Spiders then use this returned list of URLs as starting points to crawl each article, parsing and storing the required information. \subsubsection{Items} The Scrapy items Python script is used for two purposes. First, the items that the Spiders are searching for are defined. The Spiders will collect six important pieces of information from every article: \begin{enumerate} \item team = The name of the NBA team \item url = The URL where the article is found \item tags = The topic tags for the article \item title = The title of the article \item date = The date the article was posted \item article = The complete article \end{enumerate} The second purpose of the items script is to preprocess, or clean, some of the fields before they are stored. The raw date field returns a character string similar to this: "Posted on: Mar 14, 2020". This is not a workable format for a computer to understand.
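The cleaning steps described in the rest of this subsection can be pictured with a small sketch (illustrative code only, not the exact contents of the project's \texttt{items.py}):
\begin{lstlisting}
from datetime import datetime
from w3lib.html import remove_tags

def clean_date(raw):
    # Turn "Posted on: Mar 14, 2020" into a Python datetime value.
    return datetime.strptime(raw.replace("Posted on: ", ""), "%b %d, %Y")

def clean_article(raw_html):
    # Strip HTML tags, drop the trailing copyright notice, and
    # collapse line breaks and extra spaces.
    text = remove_tags(raw_html)
    text = text.split("Copyright")[0]
    return " ".join(text.split())
\end{lstlisting}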
A custom function along these lines trims "Posted on: " from the string, and "Mar 14, 2020" is then converted into a Python datetime field. The result of this function is stored as the date item for each article. The other field that requires preprocessing is the article field. The articles contain many elements, such as links, special characters, and lingering HTML tags. The Spiders also capture leading and trailing text that is not actually part of the story. The first step in cleaning this field uses the remove\_tags function from the w3lib.html library. The trailing text always begins with "Copyright", making it a convenient word to split at to remove the irrelevant text. The leading text is consistent across the articles, making it simple to remove as well. Finally, paragraph breaks and excess spaces are trimmed, and the cleaned article string can be stored. All of these steps can be found in the "items.py" script in the NBA\_Scrapy folder. \section{Results} The Scrapy program logs are shown in Figure \ref{Scrapy Results}. The Scrapy spiders crawled 771 links, extracting 768 itemized articles. Unfortunately, there were consistency issues among many of the NBA teams' homepages. Seven NBA teams returned zero news articles, resulting in a total of 23 NBA teams represented in our corpus. While this is not optimal, it is an acceptable corpus size to meet the requirements. \begin{figure}[!htb] \centering \includegraphics[width=3.0in]{figures/Scrapy_Results.png} \caption[]{Scrapy output logs from the focused web crawler program} \label{Scrapy Results} \end{figure} Roughly 23 minutes elapsed to completely run the focused web crawler program, with the Selenium portion absorbing a disproportionate amount of the time. Selenium takes up so much time because each time a spider lands on a team's homepage, there is a programmed delay to allow the Angular application and JavaScript portions of the page to load. The spider then clicks the "load more" button eight times, waiting a few seconds in between each click for the news articles to populate before the contents can be extracted. Once the Selenium portion finished collecting the 771 usable links to news articles, the Scrapy spiders were able to scrape each article in under two minutes. The Scrapy Spiders saved the itemized JSON file, as well as the full HTML webpage, for all 768 articles. \section{Discussion and Conclusions} Potential future work to improve the performance of this focused web crawler and scraper would include adding additional error handling steps to hopefully collect news articles from all 30 teams. Several NBA teams have not updated their websites, meaning that they do not follow the same standard design template. Because of the time constraints for this project, there was not enough time to manually investigate the websites of each of the teams that threw an error. Fixing this code will allow the spiders to scrape hundreds of additional articles. Another slight programming issue arose when trying to incorporate the Selenium driver within the Scrapy framework. The initial plan was to use Selenium on each team's website by placing a Selenium driver within the "start\_requests" function in the Spider class. However, this generated issues managing the number of drivers opened and transitioning the request from using Selenium on the starting JavaScript team page to using the standard Scrapy request and response on the static article page.
The ideal plan would utilize the middlewares.py Scrapy functionality by adding Selenium into the "process\_request" function (Step 4 in the diagram in Figure \ref{Scrapy_Framework}). Ultimately, the attempts to implement this failed within the given time frame, but future work should make this adjustment. In conclusion, this project was able to accomplish each of the goals originally laid out. A fully automated and narrowly focused web crawler that A.I. Sports will be able to use in the future was built in Python utilizing both Scrapy and Selenium. A deeper understanding of website structure and of the tools needed to extract the right information was gained over the life cycle of the project. And finally, an organized corpus of NBA team news articles has been created for future natural language processing (NLP) projects. \clearpage \bibliography{bibliographie.bib} \bibliographystyle{newapa} \end{document}
{ "alphanum_fraction": 0.7990989334, "avg_line_length": 71.7887788779, "ext": "tex", "hexsha": "bc8789bb70ec48dcd8baff842fb9d7656d47869f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ca5c12bf50e1a8b422b0afc315a6b61ba3b67588", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "papagorgio23/NBA_News_Spiders", "max_forks_repo_path": "Academic Paper/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ca5c12bf50e1a8b422b0afc315a6b61ba3b67588", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "papagorgio23/NBA_News_Spiders", "max_issues_repo_path": "Academic Paper/main.tex", "max_line_length": 884, "max_stars_count": 3, "max_stars_repo_head_hexsha": "ca5c12bf50e1a8b422b0afc315a6b61ba3b67588", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "papagorgio23/NBA_News_Spiders", "max_stars_repo_path": "Academic Paper/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-09T22:04:37.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-20T22:10:02.000Z", "num_tokens": 4751, "size": 21752 }
\documentclass[11pt]{article} \usepackage{url} \usepackage{syntax} \usepackage{listings} \usepackage{amsmath} \lstset{aboveskip=1.5ex, belowskip=1.2ex, showstringspaces=false, mathescape=true, flexiblecolumns=false, xleftmargin=2ex, basewidth=0.52em, basicstyle=\small\ttfamily} \lstset{literate={->}{{$\rightarrow{}\!\!\!$}}3 } \renewcommand{\tt}{\usefont{OT1}{cmtt}{m}{n}\selectfont} \newcommand{\codefont}{\small\tt} \newcommand{\code}[1]{\mbox{\codefont #1}} \newcommand{\ccode}[1]{``\code{#1}''} % The layout of this manual is adapted from the KiCS2 manual. %%% ------------------------------------------------------------------ \usepackage[colorlinks,linkcolor=blue]{hyperref} \hypersetup{bookmarksopen=true} \hypersetup{bookmarksopenlevel=0} \hypersetup{pdfstartview=FitH} \usepackage{thumbpdf} %%% ------------------------------------------------------------------ \setlength{\textwidth}{16.5cm} \setlength{\textheight}{23cm} \renewcommand{\baselinestretch}{1.1} \setlength{\topmargin}{-1cm} \setlength{\oddsidemargin}{0cm} \setlength{\evensidemargin}{0cm} \setlength{\marginparwidth}{0.0cm} \setlength{\marginparsep}{0.0cm} \begin{document} \title{CPM User's Manual} \author{Jonas Oberschweiber \qquad Michael Hanus\\[1ex] {\small Institut f\"ur Informatik, CAU Kiel, Germany}\\[1ex] {\small\texttt{[email protected]}} } \maketitle \tableofcontents \clearpage \section{Introduction} This document describes the Curry package manager (CPM), a tool to distribute and install Curry libraries and manage version dependencies between these libraries. A distinguishing feature of CPM is its ability to perform \emph{semantic versioning checking}, i.e., CPM provides a command to check the semantics of a new package version against an older version of the same package. \bigskip\bigskip \section{Installing the Curry Package Manager} \subsection{Requirements} CPM requires \emph{Git}\footnote{\url{http://www.git-scm.com}}, \emph{curl}\footnote{\url{https://curl.haxx.se}}, \emph{tar}, and \emph{unzip} to be available on the \code{PATH} during installation and operation. It is strongly recommended that SQLite\footnote{\url{https://www.sqlite.org}} is installed so that the executable \code{sqlite3} is in your path. In this case, CPM uses a SQLite database for caching the central package index (see Section~\ref{sec:internals}). This yields faster response times of various CPM commands. CPM is part of recent distributions of the Curry systems PAKCS\footnote{\url{https://www.informatik.uni-kiel.de/~pakcs/}} (since version 1.15.0) and KiCS2\footnote{\url{https://www-ps.informatik.uni-kiel.de/kics2/}} (since version 0.6.0) so that it can directly be used with these Curry systems. If you use an older release of PAKCS or KiCS2 or you want to install some CPM version from the source repository, the following section contains some hints about the installation of CPM. \subsection{Source Code Installation} To install and use CPM, a working installation of either PAKCS in version 1.14.1 or greater, or KiCS2 in version 0.5.1 or greater is required. You need to ensure that your Haskell installation reads files using UTF-8 encoding by default. Haskell uses the system locale charmap for its default encoding. You can check the current value using \code{System.IO.localeEncoding} inside a \code{ghci} session. To install CPM from the sources, enter the root directory of the CPM source distribution. 
The main executable \code{curry} of your Curry system must be in your path (otherwise, you can also specify the root location of your Curry system by modifying the definition of \code{CURRYROOT} in the \code{Makefile}). Then type \code{make} to compile CPM which generates a binary called \code{cypm} in the \code{bin} subdirectory. Put this binary somewhere on your path. \clearpage \section{Starting the Curry Package Manager} If the binary \code{cypm} is on your path, execute the command % \begin{lstlisting} > cypm update \end{lstlisting} % to pull down a copy of the central package index to your system. You can use the same command to update later your copy of the central package index to the newest version. Afterwards, you can show a list of all packages in this index by % \begin{lstlisting} > cypm list \end{lstlisting} % The command % \begin{lstlisting} > cypm info PACKAGE \end{lstlisting} % can be used to show more information about a package. There is also a command % \begin{lstlisting} > cypm search QUERY \end{lstlisting} % to search inside the central package index. Section~\ref{sec:cmd-reference} contains a complete list of all available CPM commands. \clearpage \section{Package Basics} \label{sec:package-basics} Essentially, a Curry package is nothing more than a directory structure containing a \code{package.json} file and a \code{src} directory at its root. The \code{package.json} file is a JSON file containing package metadata, the \code{src} directory contains the Curry modules that make up the package. We assume familiarity with the JSON file format. A good introduction can be found at \url{http://json.org}. The package specification file must contain a top-level JSON object with at least the keys \code{name}, \code{author}, \code{version}, \code{synopsis} and \code{dependencies}. More possible fields are described in Section~\ref{sec:reference}. A package's name may contain any ASCII alphanumeric character as well as dashes (\code{-}) and underscores (\code{_}). It must start with an alphanumeric character. The author field is a free-form field, but the suggested format is either a name (\code{John Doe}), or a name followed by an email address in angle brackets (\code{John Doe <[email protected]>}). Multiple authors can either be separated by commas or written as a list of strings. Versions must be specified in the format laid out in the semantic versioning standard:\footnote{\url{http://www.semver.org}} each version number consists of numeric major, minor and patch versions separated by dot characters as well as an optional pre-release specifier consisting of ASCII alphanumerics and hyphens, e.g. \code{1.2.3} and \code{1.2.3-beta5}. Please note that build metadata as specified in the standard is not supported. The synopsis should be a short summary of what the package does. Use the \code{description} field for longer form explanations. Dependencies are specified as a nested JSON object with package names as keys and dependency constraints as values. A dependency constraint restricts the range of versions of the dependency that a package is compatible to. Constraints consist of elementary comparisons that can be combined into conjunctions, which can then be combined into one large disjunction---essentially a disjunctive normal form. The supported comparison operators are \code{<}, \code{<=}, \code{>}, \code{>=}, \code{=}, \code{\char126}, and \code{\char94}. 
The first five are interpreted according to the rules for comparing version
numbers laid out in the semantic versioning standard.
\code{\char126} requires\footnote{%
In previous versions of CPM this was denoted as \code{{\char126}>}
and called \emph{semantic versioning arrow}.}
that the package version be at least as large as its argument
but still within the same minor version, i.e.,
\code{{\char126}1.2.3} would match \code{1.2.3}, \code{1.2.9}, and
\code{1.2.55} but not \code{1.2.2}, \code{1.3.0}, or \code{2.1.0}.
Analogously, \code{\char94} requires that the package version be at least
as large as its argument but still within the same major version, i.e.,
\code{{\char94}1.2.3} would match \code{1.2.3} and \code{1.4.3} but not
\code{2.1.0}.
To combine multiple comparisons into a conjunction, separate them by commas,
e.g.,
\begin{lstlisting}
>= 2.0.0, < 3.0.0
\end{lstlisting}
would match all versions with major version \code{2}.
Note that it would not match \code{2.1.3-beta5}, for example, since
pre-release versions are only matched if the comparison is explicitly made
to a pre-release version, e.g., \code{= 2.1.3-beta5} or \code{> 2.1.3-beta2}.
Conjunctions can be combined into a disjunction via the \code{||} characters,
e.g.,
\begin{lstlisting}
>= 2.0.0, < 3.0.0 || >= 4.0.0
\end{lstlisting}
would match any version within major version \code{2} and from major
version \code{4} onwards, but no version within major version \code{3}.

\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Using Packages}

Curry packages can be used as dependencies of other Curry packages or to
install applications implemented with a package.
In the following we describe both possibilities of using packages.

\subsection{Creating New Packages}

Creating a new Curry package is easy.
To use a Curry package in your project, create a \code{package.json} file
in the root, fill it with the minimum amount of information discussed in the
previous section, and move your Curry code to a \code{src} directory inside
your project's directory.
Alternatively, if you are starting a new project, use the
\code{cypm new <project-name>} command, which creates a new project
directory with a \code{package.json} file for you.\footnote{The \code{new}
command also creates some other useful template files.
Look into the output of this command.}
Then declare the dependencies inside the new \code{package.json} file, e.g.:
\begin{lstlisting}
{
  ...,
  "dependencies": {
    "base": "^1.0.0",
    "html": ">= 2.0.0, < 2.2.0",
    "json": "~1.1.0"
  }
}
\end{lstlisting}
%
Then run \code{cypm install} to install all dependencies of the current
package and start your interactive Curry environment with \code{cypm curry}.
You will be able to load the JSON package's modules in your Curry session.

\subsection{Installing and Updating Dependencies}

To install the current package's dependencies, run \code{cypm install}.
This will install the most recent version of all dependencies that are
compatible to the package's dependency constraints.
Note that a subsequent run of \code{cypm install} will always prefer the
versions it installed on a previous run, if they are still compatible to
the package's dependencies.
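For orientation, the following shows a minimal but complete
\code{package.json} of the kind these commands operate on.
All field values are placeholders for illustration only and do not refer
to a real package:
\begin{lstlisting}
{
  "name": "myproject",
  "version": "0.0.1",
  "author": "John Doe <[email protected]>",
  "synopsis": "An example package",
  "dependencies": {
    "base": "^1.0.0",
    "json": "~1.1.0"
  }
}
\end{lstlisting}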
If you want to explicitly install the newest compatible version regardless
of what was installed on previous runs of \code{cypm install}, you can use
the \code{cypm upgrade} command to upgrade all dependencies to their newest
compatible versions, or \code{cypm upgrade <package>} to update a specific
package and all its transitive dependencies to the newest compatible version.
If the package also contains an implementation of a complete executable,
e.g., some useful tool, which can be specified in the \code{package.json}
file (see Section~\ref{sec:reference}), then the command \code{cypm install}
also compiles the application and installs the executable in the \code{bin}
install directory of CPM (see Section~\ref{sec:config} for details).
The installation of executables can be suppressed by the
\code{cypm install} option \code{-n} or \code{--noexec}.

\subsection{Checking out Packages}
\label{sec:checkout}

In order to use, experiment with, or modify an existing package,
one can use the command
\begin{lstlisting}
cypm checkout <package>
\end{lstlisting}
to install a local copy of a package.
This is also useful to install some tool distributed as a package.
For instance, to install \code{curry-check}, a property-testing tool
for Curry, one can check out the most recent version and install the tool:
%
\begin{lstlisting}
> cypm checkout currycheck
$\ldots$
Package 'currycheck-1.0.1' checked out into directory 'currycheck'.
> cd currycheck
> cypm install
$\ldots$
INFO Installing executable 'curry-check' into '/home/joe/.cpm/bin'
\end{lstlisting}
%
Now, the tool \code{curry-check} is ready to use if \code{\$HOME/.cpm/bin}
is in your path (see Section~\ref{sec:config} for details about changing
the location of this default path).

\subsection{Installing Applications of Packages}
\label{sec:installapp}

Some packages contain not only useful libraries but also application
programs or tools.
In order to install the executables of such applications without explicitly
checking out the package in some local directory, one can use the command
\begin{lstlisting}
cypm install <package>
\end{lstlisting}
This command checks out the package in some internal directory
(default: \code{\$HOME/.cpm/apps_$Curry system$}, see Section~\ref{sec:config})
and installs the binary of the application provided by the package in
\code{\$HOME/.cpm/bin} (see also Section~\ref{sec:checkout}).
For instance, the most recent version of the web framework Spicey can be
installed by the following command:
%
\begin{lstlisting}
> cypm install spicey
$\ldots$
Package 'spicey-xxx' checked out $\ldots$
$\ldots$
INFO Installing executable 'spiceup' into '/home/joe/.cpm/bin'
\end{lstlisting}
%
Now, the binary \code{spiceup} of Spicey can be used if
\code{\$HOME/.cpm/bin} is in your path (see Section~\ref{sec:config} for
details about changing the location of this default path).

\subsection{Executing the Curry System in a Package}

To use the dependencies of a package, the Curry system needs to be started
via CPM so that it will know where to search for the modules provided.
You can use the command \ccode{cypm curry} to start the Curry system
(which is either the compiler used to install CPM or specified with the
configuration option \code{CURRY_BIN}, see Section~\ref{sec:config}).
Any parameters given to \ccode{cypm curry} will be passed along verbatim
to the Curry system.
For example, the following will start the Curry system, print the result
of evaluating the expression \code{39+3}, and then quit.
\begin{lstlisting}
> cypm curry :eval "39+3" :quit
\end{lstlisting}
%
To execute other Curry commands, such as \ccode{curry check}, with the
package's dependencies available, you can use the \ccode{cypm exec} command.
This command will set the \code{CURRYPATH} environment variable and then
execute the command given after \ccode{exec}.

\subsection{Using Packages Outside a Package}
\label{sec:meta-package}

In principle, packages can be used only inside another package by declaring
dependencies in the package specification file \code{package.json}.
If you invoke \code{cypm} in a directory which contains no package
specification file, CPM searches for such a file from the current directory
to the parent directories (up to the root of the file system).
Thus, if you are outside a package, such a file is not available.
In order to support the use of other packages outside of a package,
CPM provides a meta-package which is usually stored in your home directory
at \code{\char126/.cpm/$Curry system$-homepackage}.\footnote{%
Use \code{cypm config} and look at \code{HOME_PACKAGE_PATH} to see the
current location of this meta-package.}
This meta-package is used when you are not inside another package.
Hence, if you write some Curry program which is not a package but you want
to use some package \code{P}, you have to add a dependency on \code{P}
to this meta-package.
CPM does this automatically for you with the CPM command
\code{cypm add --dependency} (short: \code{cypm add -d}).
For instance, to use the libraries of the JSON package in your application,
one can use the following commands:
%
\begin{lstlisting}
> cypm add -d json    # add 'json' dependency to meta-package
> cypm install        # download and install all dependencies
> cypm curry          # start Curry system with JSON libraries in load path
...
Prelude> :load JSON.Data
JSON.Data>
\end{lstlisting}
%
The default behavior of the \code{add} command is to add the dependency
\emph{and} install all dependencies, i.e., the previous commands can be
reduced as follows:
%
\begin{lstlisting}
> cypm add json
> cypm curry
...
Prelude> :load JSON.Data
JSON.Data>
\end{lstlisting}

\subsection{Replacing Dependencies with Local Versions}
\label{sec:cpm-link}

During development of your applications, situations may arise in which you
want to temporarily replace one of your package's dependencies with a local
copy, without having to publish a copy of that dependency somewhere or
increase the dependency's version number.
One such situation is a bug in a dependency not controlled by you:
if your own package depends on package $A$ and $A$'s current version is
$1.0.3$ and you encounter a bug in this version, then you might be able to
investigate, find, and fix the bug.
Since you are not the author of $A$, however, you cannot release a new
version with the bug fixed.
So you send off your patch to $A$'s maintainer and wait for $1.0.4$ to be
released.
In the meantime, you want to use your local, fixed copy of version $1.0.3$
from your package.
The \code{cypm link} command allows you to replace a dependency with your
own local copy.
\code{cypm link} takes a directory containing a copy of one of the current
package's dependencies as its argument.
It creates a symbolic link from that directory to the current package's
local package cache.
If you had a copy of \code{A-1.0.3} in the \code{\char126/src/A-1.0.3}
directory, you could use \code{cypm link \char126/src/A-1.0.3} to ensure
that any time \code{A-1.0.3} is used from the current package, your local
copy is used instead of the one from the global package cache.
To remove any links, use \code{cypm upgrade} without any arguments,
which will clear the local package cache.
See Section~\ref{sec:internals} for more information on the global and
local package caches.

\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Authoring Packages}

If you want to create packages for other people to use, you should consider
filling out more metadata fields than the bare minimum.
See Section~\ref{sec:reference} for a reference of all available fields.

\subsection{Semantic Versioning}
\label{sec:semantic-versioning}

The versions of published packages should adhere to the semantic versioning
standard, which lays out rules for which components of a version number must
change if the public API of a package changes.
Recall that a semantic versioning version number consists of a major, minor
and patch version as well as an optional pre-release specifier.
In short, semantic versioning defines the following rules:
\begin{itemize}
\item If the type of any public API is changed or removed, or the expected
  behavior of a public API is changed, you must increase the major version
  number and reset the minor and patch version numbers to $0$.
\item If a public API is added, you must increase at least the minor version
  number and reset the patch version number to $0$.
\item If only bug fixes are introduced, i.e., nothing is added or removed
  and behavior is only changed to remove deviations from the expected
  behavior, then it is sufficient to increase the patch version number.
\item Once a version is published, it must not be changed.
\item For pre-releases, sticking to these rules is encouraged but not
  required.
\item If the major version number is $0$, the package is still considered
  under development and thus unstable.
  In this case, the rules do not apply, although following them as much as
  possible is still encouraged.
  Release $1.0.0$ is considered to be the first stable version.
\end{itemize}
%
To aid you in following these rules, CPM provides the \code{diff} command.
\code{diff} can be used to compare the types and behavior of a package's
public API between two versions of that package.
If it finds any differences, it checks whether they are acceptable under
semantic versioning for the difference in version numbers between the two
package versions.
To use \code{diff}, you need to be in the directory of one of the versions,
i.e., your copy for development, and have the other version installed in
CPM's global package cache (see the \code{cypm install} command).
For example, if you are developing version $1.3.0$ of the JSON package and
want to make sure you have not introduced any breaking changes when compared
to the previous version $1.2.6$, you can use the \code{cypm diff 1.2.6}
command while in the directory of version $1.3.0$.
CPM will then check the types of all public functions and data types in all
exported modules of both versions (see the \code{exportedModules} field of
the package specification) and report any differences and whether they
violate semantic versioning.
CPM will also compare the behavior of all exported functions in all exported
modules whose types have not changed.
Actually, this part is performed by CurryCheck \cite{Hanus16LOPSTR},
a property-based test tool for Curry.
For this purpose, CPM generates a Curry program containing properties
stating the equivalence of two operations with the same name but defined
in two different versions of a package.
The ideas and scheme of this generation process are described in
\cite{Hanus17ICLP}.
Note that not all functions can be compared via CurryCheck.
In particular, functions taking other functions as arguments cannot be
checked (there are a few other minor restrictions), so CPM automatically
excludes them from checking.
Note that the results of non-terminating operations, like
\code{Prelude.repeat}, cannot be compared in a finite amount of time.
To avoid the execution of possibly non-terminating check programs,
CPM compares the behavior of operations only if it can prove the termination
or productivity\footnote{%
An operation is productive if it always produces outermost constructors,
i.e., it cannot run forever without producing constructors.}
of these operations.
Since CPM uses simple criteria to approximate these properties, there might
be operations that are terminating or productive but CPM cannot show it.
In these cases you can use the compiler pragmas \verb|{-# TERMINATE #-}|
or \verb|{-# PRODUCTIVE #-}| to annotate such functions.
Then CPM will trust these annotations and treat the annotated operations
as terminating or productive, respectively.
For instance, CPM will check the following operation although it cannot
show its termination:
\begin{lstlisting}
{-# TERMINATE #-}
mcCarthy :: Int -> Int
mcCarthy n | n<=100 = mcCarthy (mcCarthy (n+11))
           | n>100  = n-10
\end{lstlisting}
%
As another example, consider the following operation defining an infinite
list:
\begin{lstlisting}
ones :: [Int]
ones = 1 : ones
\end{lstlisting}
%
Although this operation is not terminating, it is productive since with
every step a new constructor is produced.
CPM compares such operations by comparing their results up to some depth.
On the other hand, the following operation is not classified as productive
by CPM (note that it would not be productive if the filter condition is
changed to \code{(>1)}):
\begin{lstlisting}
{-# PRODUCTIVE #-}
anotherOnes :: [Int]
anotherOnes = filter (>0) ones
\end{lstlisting}
%
Due to the pragma, CPM will compare this operation like other productive
operations.
There might be situations when operations should not be compared, e.g.,
if the previous version of the operation was buggy.
In this case, one can mark those functions with the compiler pragma
\verb|{-# NOCOMPARE #-}| so that CPM will not generate tests for them.
Note that there are different ways to state the equivalence of operations
(e.g., see the discussion in \cite{BacciEtAl12}).
CPM offers two kinds of equivalence tests:
\begin{itemize}
\item \emph{Ground equivalence} means that two operations are considered
  equivalent if they yield identical values for identical input values.
\item \emph{Contextual or full equivalence} means that two operations are
  considered equivalent if they produce the same results in all possible
  contexts.
\end{itemize}
%
Since contextual equivalence is more meaningful in the context of semantic
versioning, CPM tests this kind of equivalence in the default case, based
on the techniques described in \cite{AntoyHanus18FLOPS}.
However, using the option \code{--ground} of the \code{diff} command,
one can also enforce the checking of ground equivalence as described in
\cite{Hanus17ICLP}.
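To give a rough idea of this generation scheme, the following is a
hand-written sketch of the kind of ground-equivalence property such a
generated program might contain.
It is only an illustration: the renamed modules \code{JSON\_V1} and
\code{JSON\_V2}, the operation \code{toJSON}, and its argument type are
assumptions made for this example, and the programs actually generated by
CPM (in particular for contextual equivalence) are more involved.
\begin{lstlisting}
-- Hand-written sketch, not code generated by CPM; module and operation
-- names are illustrative assumptions.
import qualified JSON_V1   -- assumed: renamed copy of the old version
import qualified JSON_V2   -- assumed: renamed copy of the new version
import Test.Prop           -- CurryCheck property library

-- Ground equivalence: identical inputs must yield identical values.
toJSON_equiv :: Int -> Prop
toJSON_equiv x = JSON_V1.toJSON x -=- JSON_V2.toJSON x
\end{lstlisting}
Running CurryCheck on such a program reports a counterexample if it finds
an input on which the two versions disagree.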
\subsection{Adding Packages to the Central Package Index} \label{sec:adding-a-package} When you have your package ready and want to use it in other packages, it must be added to the central package index so that CPM can find it when searching for packages. For this purpose, you can use the \ccode{cypm add} command: % \begin{lstlisting} > cypm add --package mypackage \end{lstlisting} % In this case, \code{mypackage} is the name of the directory containing you package. In particular, the JSON file \ccode{mypackage/package.json} must contain the metadata of the package (see also Section~\ref{sec:reference}). This command copies your package into your local copy of the central package index so that it can be used in other packages. If you want to replace this copy by an improved version of the same package, you have to provide the option \code{-f} or \code{--force}. Note that this command makes your package only available on your local system. If you want to publish your package so that it can be used by other CPM users, follow the instruction described next. \subsection{Publishing a Package} \label{sec:publishing-a-package} There are three things that need to be done to publish a package: make the package accessible somewhere, add the location to the package specification, and add the package specification to the central package index. CPM supports ZIP (suffix \ccode{.zip}) or compressed TAR (suffix \ccode{.tar.gz}) files accessible over HTTP as well as Git repositories as package sources. You are free to choose one of those, but a publicly accessible Git repository is preferred. To add the location to the package specification, use the \code{source} key. For a HTTP source, use: % \begin{lstlisting} { ..., "source": { "http": "http://example.com/package-1.0.3.zip" } } \end{lstlisting} % For a Git source, you have to specify both the repository as well as the revision that represents the version: % \begin{lstlisting} { ..., "source": { "git": "git+ssh://[email protected]:john-doe/package.git", "tag": "v1.2.3" } } \end{lstlisting} % There is also a shorthand, \code{\$version}, available to automatically use a tag consisting of the letter \code{v} followed by the current version number, as in the example above. Specifying \code{\$version} as the tag and then tagging each version in the format \code{v1.2.3} is preferred, since it does not require changing the source location in the \code{package.json} file every time a new version is released. If one already has a repository with another tagging scheme, one can also place the string \code{\$version\$} in the tag, which will be automatically replaced by the current version number. Thus, the tag \ccode{\$version} is equivalent to the tag \ccode{v\$version\$}. After you have published the files for your new package version, you have to add the corresponding package specification to the central package index. This can be done with the \ccode{cypm add} command (see Section~\ref{sec:adding-a-package}). If you have access to the Git repository containing the central package index, then you can push the modified version of this Git repository. Otherwise, send your package specification file to \url{[email protected]} in order to publish it. \clearpage \section{The \texttt{base} Package} \label{sec:basepackage} Every Curry distribution comes with some libraries to support basic operations, like list operations, actions to read and write files, etc. 
The most prominent library is the \code{Prelude}, which is implicitly added
to the import list of a Curry module.
In order to manage different versions of these base libraries, there is a
distinguished package \code{base} containing these libraries.
Thus, one can include a dependency on a specific version of the \code{base}
package in the \code{package.json} file, e.g.:
%
\begin{lstlisting}
{
  ...,
  "dependencies": {
    "base": ">= 2.1.0, < 3.0.0",
    "html": ">= 2.0.0, < 2.2.0",
    "json": "~1.1.0"
  }
  ...
}
\end{lstlisting}
%
Each Curry system comes with a specific version of the \code{base} package.
This version can be queried with the option \code{--base-version}, e.g.,
%
\begin{lstlisting}
> pakcs --base-version
3.0.0
\end{lstlisting}
%
If CPM tries to install a package, it adds the condition that the version
of package \code{base} must be identical to the \code{base} version of the
compiler used by CPM.
For instance, if \code{pakcs} is used by CPM (see Section~\ref{sec:config}
for how to change the default compiler of CPM), the dependencies shown above
are changed to
%
\begin{lstlisting}
{
  ...,
  "dependencies": {
    "base": "= 3.0.0, >= 2.1.0, < 3.0.0",
    "html": ">= 2.0.0, < 2.2.0",
    "json": "~1.1.0"
  }
  ...
}
\end{lstlisting}
%
before resolving all constraints.
Of course, in this case, the constraints are not solvable so that one has
to choose another Curry compiler for this package.

\clearpage
\section{Configuration}
\label{sec:config}

CPM can be configured via the \code{\$HOME/.cpmrc} configuration file.
The following list shows all configuration options and their default values.
\begin{description}
\item[\fbox{\code{CURRY_BIN}}]
The name of the executable of the Curry system used to compile and test
packages.
The default value is the binary of the Curry system which has been used to
compile CPM.
\item[\fbox{\code{REPOSITORY_PATH}}]
The path to the index of all packages.
Default value: \code{\$HOME/.cpm/index}.
\item[\fbox{\code{PACKAGE_INSTALL_PATH}}]
The path to the global package cache.
This is where all downloaded packages are stored.
Default value: \code{\$HOME/.cpm/packages}.
\item[\fbox{\code{BIN_INSTALL_PATH}}]
The path to the executables of packages.
This is the location where the compiled executables of packages containing
full applications are stored.
Hence, in order to use such applications, one should have this path in the
personal load path (environment variable \code{PATH}).
Default value: \code{\$HOME/.cpm/bin}.
\item[\fbox{\code{APP_PACKAGE_PATH}}]
The path to the package cache where packages are checked out if only their
binaries are installed (see Section~\ref{sec:installapp}).
Default value: \code{\$HOME/.cpm/apps_$Curry system$}.
\item[\fbox{\code{HOME_PACKAGE_PATH}}]
The path to the meta-package which is used if you are outside another
package (see Section~\ref{sec:meta-package}).
Default value: \code{\$HOME/.cpm/$Curry system$-homepackage}.
\item[\fbox{\code{PACKAGE_INDEX_URL}}]
The URL of the central package index which is used by the \code{update}
command to download the index of all repositories.
Default value:
\begin{lstlisting}
https://www-ps.informatik.uni-kiel.de/~cpm/PACKAGES/INDEX.tar.gz
\end{lstlisting}
One can also provide more than one URL, which are tried in sequential order
until one index could be downloaded.
In this case, the URLs should be separated by a vertical bar (\ccode{|}).
\item[\fbox{\code{PACKAGE_TARFILES_URL}}]
The URL prefix to the directory containing the ``tar'' files of all packages.
If a package $p$ with version $v$ is downloaded (via the \code{install} or \code{checkout} command), the source of the package is downloaded from this location as file \code{$p$-$v$.tar.gz}. Default value: \begin{lstlisting} https://www-ps.informatik.uni-kiel.de/~cpm/PACKAGES/ \end{lstlisting} For instance, the package \code{cpm} with version \code{2.2.0} is downloaded from \begin{lstlisting} https://www-ps.informatik.uni-kiel.de/~cpm/PACKAGES/cpm-2.2.0.tar.gz \end{lstlisting} One can also provide more than one URL prefix which are tried in sequential order until the package could be downloaded from one of them. In this case, the URL prefixes should be separated by a vertical bar (\ccode{|}). \end{description} % The CPM command \begin{lstlisting} > cypm config \end{lstlisting} shows the current values of the configuration options. Note that one write the option names also in lowercase or omit the underscores. For instance, one can write \code{currybin} instead of \code{CURRY_BIN}. Moreover, one can override the values of these configuration options by the CPM options \code{-d} or \code{--define}. For instance, to install the binary of the package \code{spicey} in the directory \code{\$HOME/bin}, one can execute the command \begin{lstlisting} > cypm --define bin_install_path=$\$$HOME/bin install spicey \end{lstlisting} \clearpage \section{Some CPM Internals} \label{sec:internals} CPM's central package index contains all package specification files. It is stored at a central server where the actual location is defined by CPM's configuration variable \code{PACKAGE_INDEX_URL}, see Section~\ref{sec:config}. When the command \begin{lstlisting} > cypm update \end{lstlisting} is executed, a copy of this index is downloaded and stored on your local system in the directory \code{\$HOME/.cpm/index}, unless you changed the location using the \code{REPOSITORY_PATH} setting. CPM uses the package index when searching for and installing packages and during dependency resolution. This index contains a directory for each package, which contain subdirectories for all versions of that package which in turn contain the package specification files. So the specification for version $1.0.5$ of the \code{json} package would be located in \code{json/1.0.5/package.json}. When a package is installed on the system, it is stored in the \emph{global package cache}. By default, the global package cache is located in \code{\$HOME/.cpm/packages}. When a package \emph{foo}, stored in directory \code{foo}, depends on a package \emph{bar}, a link to \emph{bar's} directory in the global package cache is added to \emph{foo's} local package cache when dependencies are resolved for \emph{foo}. The \emph{local package cache} is stored in \code{foo/.cpm/package_cache}. Whenever dependencies are resolved, package versions already in the local package cache are preferred over those from the central package index or the global package cache. When a module inside a package is compiled, packages are first copied from the local package cache to the \emph{run-time cache}, which is stored in \code{foo/.cpm/packages}. Ultimately, the Curry compiler only sees the package copies in the run-time cache, and never those from the local or global package caches. \clearpage \section{Command Reference} \label{sec:cmd-reference} This section gives a short description of all available CPM commands. 
In addition to the commands listed below, there are some global options which can be placed in front of the CPM command: \begin{description} \item[\code{-d$\,|\,$--define $option$=$value$}:] This option overrides the configuration options of CPM, see Section~\ref{sec:config}. \item[\code{-v$\,|\,$--verbosity [info|debug]}:] The default value is \code{info}. The value \code{debug} provides more output messages in order to see what CPM is doing. \item[\code{-t$\,|\,$--time}:] This option adds the elapsed time to every info or debug output line. \end{description} % The available commands of CPM are: \begin{description} \item[\fbox{\code{config}}] Shows the current configuration of CPM (see also Section~\ref{sec:config}). The option \code{--all} shows also the names and version of the packages installed in the global package cache. \item[\fbox{\code{info}}] Gives information on the current package, e.g. the package's name, author, synopsis and its dependency specifications. \item[\fbox{\code{info $package$ [--all]}}] Prints information on the newest known version (compatible to the current compiler) of the given package. The option \code{--all} shows more information. \item[\fbox{\code{info $package$ $version$ [--all]}}] Prints basic information on the given package version. The option \code{--all} shows more information. \item[\fbox{\code{list [--system] [--versions] [--csv]}}] List the names and synopses of all packages of the central package index. Unless the option \code{--versions} is set, only the newest version of a package is shown. The option \code{--system} restricts the list to those packages which are compatible with the current compiler (if it is explicitly specified in the package). The option \code{--versions} shows all versions of the packages. If a package is not compatible to the current compiler, then the package version is shown in brackets (e.g., \ccode{(1.5.4)}). The option \code{--csv} shows the information in CSV format. \item[\fbox{\code{list --category [--csv]}}] List the category names together with the packages belonging to this category (see Section~\ref{sec:reference}) of the central package index. The option \code{--csv} shows the information in CSV format. \item[\fbox{\code{search [--module|--exec] $query$}}] Searches the names, synopses, and exported module names of all packages of the central package index for occurrences of the given search term. If the option \code{--module} is set, then the given name is searched in the list of exported modules. Thus, the package exporting the module \code{JSON.Data} can be found by the command % \begin{lstlisting} > cypm search --module JSON.Data \end{lstlisting} % If the option \code{--exec} is set, then the search is restricted to the name of the executable provided by the package. For instance, the command % \begin{lstlisting} > cypm search --exec show \end{lstlisting} % lists all packages where the name of the executable contains the string \ccode{show}. \item[\fbox{\code{update}}] Updates the local copy of the central package index to the newest available version. This command also cleans the global package cache in order to support the download of fresh package versions. Note that this also removes local copies of packages installed by the command \ccode{add --package}. The option \code{--url} allows to specify a different URL for the central package index (might be useful for experimental purposes). \item[\fbox{\code{install}}] Installs all dependencies of the current package. 
Furthermore, if the current package contains also executables for an application, the executables specified in the package are compiled and installed (unless the option \code{-n} or \code{--noexec} is set). With the option \code{-x} or \code{--exec}, the executables are installed without installing all dependencies again. This is useful to speed up the re-installation of a previously installed application. If a name is added to the option \code{-x} or \code{--exec}, e.g., % \begin{lstlisting} > cypm install --exec compiler \end{lstlisting} % then only the executable with this name is installed. \item[\fbox{\code{install $package$ [--$pre$]}}] Installs the application provided by the newest version (compatible to the current compiler) of a package. The binary of the application is installed into the directory \code{\$HOME/.cpm/bin} (this location can be changed via the \code{\$HOME/.cpmrc} configuration file or by the CPM option \code{--define}, see Section~\ref{sec:config}). \code{--$pre$} enables the installation of pre-release versions. \item[\fbox{\code{install $package$ $version$}}] Installs the application provided by a specific version of a package. The binary of the application is installed into the directory \code{\$HOME/.cpm/bin} (this location can be changed via the \code{\$HOME/.cpmrc} configuration file or by the CPM option \code{--define}, see Section~\ref{sec:config}). \item[\fbox{\code{install $package$.zip}}] Installs a package from a ZIP file to the global package cache. The ZIP file must contain at least the package description file \code{package.json} and the directory \code{src} containing the Curry source files. \item[\fbox{\code{uninstall}}] Uninstalls the executable installed for the current package. \item[\fbox{\code{uninstall $package$}}] Uninstalls the executable and the cached copy of a package which has been installed by the \code{install} command. \item[\fbox{\code{uninstall $package$ $version$}}] Uninstalls a specific version of a package from the global package cache. \item[\fbox{\code{checkout $package$ [--$pre$]}}] Checks out the newest version (compatible to the current compiler) of a package into the local directory \code{$package$} in order to test its operations or install a binary of the package. \code{--$pre$} enables the installation of pre-release versions. \item[\fbox{\code{checkout $package$ $version$}}] Checks out a specific version of a package into the local directory \code{$package$} in order to test its operations or install a binary of the package.. \item[\fbox{\code{upgrade}}] Upgrades all dependencies of the current package to the newest compatible version. \item[\fbox{\code{upgrade $package$}}] Upgrades a specific dependency of the current package and all its transitive dependencies to their newest compatible versions. \item[\fbox{\code{deps}}] Does a dependency resolution run for the current package and prints the results. The result is either a list of all package versions chosen or a description of the conflict encountered during dependency resolution. Using the option \code{--path}, only the value of \code{CURRYPATH} required to load modules of this package is shown. If the option \code{--graph} is provided, then the dependency graph is translated into a graph in GraphViz\footnote{\url{http://www.graphviz.org/}} format. With the option \code{--viewgraph}, the dependency graph is visualized with the command defined in the \code{dotviewcommand} field of the rc file of the Curry system (e.g., \code{~/.pakcsrc} in case of PAKCS). 
\item[\fbox{\code{test [--modules $\mathit{modules}$] [--compile]}}] Tests the current package with CurryCheck. If the package specification contains a definition of a test suite (entry \code{testsuite}, see Section~\ref{sec:reference}), then the modules defined there are tested. If there is no test suite defined, the list of exported modules are tested, if they are explicitly specified (field \code{exportedModules} of the package specification), otherwise all modules in the directory \code{src} (including hierarchical modules stored in its subdirectories) are tested. Using the option \code{--modules}, one can also specify a comma-separated list of module names to be tested. If the executable \code{curry-check} is not installed or the option \code{--compile} is provided, then the modules are only compiled and not tested. Hence, a package can be checked for compilation errors by the command \begin{lstlisting} cypm test --compile \end{lstlisting} \item[\fbox{\code{doc}}] Generates the documentation of the current package. The documentation consists of the API documentation (in HTML format) and the manual (if provided) in PDF format. The options \code{--programs} and \code{--text} forces to generate only the API documentation and the manual, respectively. Using the option \code{--docdir}, one can specify the target directory where the documentation should be stored. If this option is not provided, \ccode{cdoc} is used as the documentation directory. The actual documentation will be stored in the subdirectory \code{$name$-$version$} of the documentation directory. The API documentation in HTML format is generated with CurryDoc. If the package specification contains a list of exported modules (see Section~\ref{sec:reference}), then these modules are documented. Otherwise, the main module (if the package specification contains the entry \code{executable}, see Section~\ref{sec:reference}) or all modules in the directory \code{src} (including hierarchical modules stored in its subdirectories) are documented. Using the option \code{--modules}, one can also specify a comma-separated list of module names to be documented. In the default case, modules contained in packages used by the current package are not documented. Instead, it is assumed that these packages are already documented\footnote{See \url{http://www.informatik.uni-kiel.de/~curry/cpm/} for the documentation of all packages. This default location can be changed with the option \code{--url}.} so that links to these package documentations are generated. Using the option \code{--full}, one can generate also the documentation of packages used by the current package. This might be reasonable if one uses packages which are only locally installed. The manual is generated only if the package specification contains a field \code{documentation} where the main file of the manual is specified (see Section~\ref{sec:reference} for more details). \item[\fbox{\code{diff [$version$]}}] Compares the API and behavior of the current package to another version of the same package. If the version option is missing, the latest version of the current package found in the repository is used for comparison. If the options \code{--api-only} or \code{--behavior-only} are added, then only the API or the behavior are compared, respectively. In the default case, all modules commonly exported by both versions of the package are compared. 
Using the option \code{--modules}, one can restrict this comparison to
a list of modules specified by a comma-separated list of module names.
As described in Section~\ref{sec:semantic-versioning}, CPM uses property
tests to compare the behavior of different package versions.
In order to avoid infinite loops during these tests, CPM analyzes the
termination behavior of the involved operations.
Using the option \code{--unsafe}, CPM omits this program analysis,
but then you have to ensure that all operations are terminating
(or you can annotate them by pragmas, see
Section~\ref{sec:semantic-versioning}).
In the default case, CPM tests the contextual equivalence of operations
(see Section~\ref{sec:semantic-versioning}).
With the option \code{--ground}, the ground equivalence of operations
is tested.
\item[\fbox{\code{exec $command$}}]
Executes an arbitrary command with the \code{CURRYPATH} environment variable
set to the paths of all dependencies of the current package.
For example, it can be used to execute \ccode{curry check} or
\ccode{curry analyze} with correct dependencies available.
\item[\fbox{\code{curry $args$}}]
Executes the Curry compiler with the dependencies of the current package
available.
Any arguments are passed verbatim to the compiler.
\item[\fbox{\code{link $source$}}]
Can be used to replace a dependency of the current package with a local
copy, see Section~\ref{sec:cpm-link} for details.
\item[\fbox{\code{add --package $dir$ [--force]}}]
Copies the package contained in directory $dir$ into the local copy of the
central package index so that it can be used by other packages in the local
environment (see Section~\ref{sec:adding-a-package} for details).
The option \ccode{--force} allows to overwrite existing copies in the
central package index.
\item[\fbox{\code{add --dependency $package$ [--force]}}]
Adds the package $package$ as a new dependency.
This command adds a dependency to the given package either in the package
description file (\code{package.json}) of the current package or in the
meta-package (see Section~\ref{sec:meta-package}).
The option \ccode{--force} allows to overwrite existing dependencies in the
package description file.
\item[\fbox{\code{add $package$ [--force]}}]
Adds the package $package$ as a new dependency and installs the new
dependencies.
Thus, this abbreviates the two commands
\begin{lstlisting}
cypm add $package$ && cypm install
\end{lstlisting}
\item[\fbox{\code{upload [--notagging] [--force]}}]
Uploads the current package to the central package index so that it can be
used by other developers via CPM (if they update their local copy of the
central package index by \ccode{cypm update}).
For security reasons (this will be weakened in the future), the package
must have a source specification (see Section~\ref{sec:publishing-a-package})
of the following form:
%
\begin{lstlisting}
{
  ...,
  "source": {
    "git": "$\ldots$git.ps.informatik.uni-kiel.de/curry-packages/$\ldots$.git",
    "tag": "$\$$version"
  }
}
\end{lstlisting}
%
Thus, the source is managed as a Git repository which is stored at the
server \code{git.ps.informatik.uni-kiel.de} in group \code{curry-packages}
and has an automatic version tag.
Unless the option \code{--notagging} is given, the version tag will be
automatically set in the local repository (and pushed to the remote
repository, i.e., one should have write access to the remote repository).
Then the remote repository will be cloned and tested (by \ccode{cypm test}).
If this is successful, the package specification of the repository will be added to the central package index (by a web service of the central package index). The option \ccode{--force} allows to overwrite an existing version in the central package index. \item[\fbox{\code{clean}}] Cleans the current package from the generated auxiliary files, e.g., intermediate Curry files, installed dependent packages, etc. Note that a binary installed in the CPM \code{bin} directory (by the \code{install} command) will not be removed. Hence, this command can be used to clean an application package after installing the application. \item[\fbox{\code{new $project$}}] Creates a new project package with the given name and some template files. \end{description} \clearpage \section{Package Specification Reference} \label{sec:reference} This section describes all metadata fields available in a CPM package specification. Mandatory fields are marked with a \code{*} character. \begin{description} \item[\fbox{\code{name*}}] The name of the package. Must only contain ASCII letters, digits, hyphens and underscores. Must start with a letter. \item[\fbox{\code{version*}}] The version of the package. Must follow the format for semantic versioning version numbers. \item[\fbox{\code{author*}}] The package's author. This is a free-form field, the suggested format is either a name or a name followed by an email address in angle brackets, e.g., \begin{lstlisting} John Doe <[email protected]> \end{lstlisting} Multiple authors can either be separated by commas or written as a list of strings. \item[\fbox{\code{maintainer}}] The current maintainers of the package, if different from the original authors. This field allows the current maintainers to indicate the best person or persons to contact about the package while attributing the original authors. The suggested format is similarly to the authors, i.e., a name followed by an email address in angle brackets, e.g., \begin{lstlisting} John Doe <[email protected]> \end{lstlisting} Multiple maintainers can either be separated by commas or written as a list of strings. \item[\fbox{\code{synopsis*}}] A short form summary of the package's purpose. It should be kept as short as possible (ideally, less than 100 characters). \item[\fbox{\code{description}}] A longer form description of what the package does. \item[\fbox{\code{category}}] A list of keywords that characterize the main area where the package can be used, e.g., \code{Data}, \code{Numeric}, \code{GUI}, \code{Web}, etc. \item[\fbox{\code{license}}] The license under which the package is distributed. This is a free-form field. In case of a well-known license such as the GNU General Public License\footnote{\url{https://www.gnu.org/licenses/gpl-3.0.en.html}}, the SPDX license identifier\footnote{\url{https://spdx.org/licenses/}} should be specified. If a custom license is used, this field should be left blank in favor of the license file field. \item[\fbox{\code{licenseFile}}] The name of a file in the root directory of the package containing explanations regarding the license of the package or the full text of the license. The suggested name for this file is \code{LICENSE}. \item[\fbox{\code{copyright}}] Copyright information regarding the package. \item[\fbox{\code{homepage}}] The package's web site. This field should contain a valid URL. \item[\fbox{\code{bugReports}}] A place to report bugs found in the package. The suggested formats are either a valid URL to a bug tracker or an email address. 
\item[\fbox{\code{repository}}] The location of a SCM repository containing the package's source code. Should be a valid URL to either a repository (e.g. a Git URL), or a website representing the repository. \item[\fbox{\code{dependencies*}}] The package's dependencies. This must be JSON object where the keys are package names and the values are version constraints. See Section~\ref{sec:package-basics} for more details. \item[\fbox{\code{compilerCompatibility}}] The package's compatibility to different Curry compilers. Expects a JSON object where the keys are compiler names and the values are version constraints. Currently, the supported compiler names are \code{pakcs} and \code{kics2}. If this field is missing or contains an empty JSON object, the package is assumed to be compatible to all compilers in all versions. The compiler compatibility of a package is also relevant when some version of a package should be examined or installed (with CPM commands \code{info}, \code{checkout}, \code{install}). If a newest package should be installed, i.e., no specific version number is provided, then only the newest version which is compatible to the current Curry compiler (see also Section~\ref{sec:config} for configuration option \code{CURRY_BIN}) is considered. Similarly, the current package is executed (CPM commands \code{curry} and \code{test}) only if the current Curry compiler is compatible to this package. \item[\fbox{\code{source}}] A JSON object specifying where the version of the package described in the specification can be obtained. See Section~\ref{sec:publishing-a-package} for details. \item[\fbox{\code{sourceDirs}}] A list of directories inside this package where the source code is located. When the package is compiled, these directories are put at the front of the Curry load path. If this field is not specified, \code{src} is used as the single source directory. \item[\fbox{\code{exportedModules}}] A list of modules intended for use by consumers of the package. These are the modules compared by the \code{cypm diff} command (and tested by the \code{cypm test} command if a list of test modules is not provided). Note that modules not in this list are still accessible to consumers of the package. \item[\fbox{\code{configModule}}] A module name into which some information about the package configuration (location of the package directory, name of the executable, see below) is written when the package is installed. This could be useful if the package needs some data files stored in this package during run time. For instance, a possible specification could be as follows: % \begin{lstlisting} { ..., "configModule": "CPM.PackageConfig", ... } \end{lstlisting} % In this case, the package configuration is written into the Curry file \code{src/CPM/PackageConfig.curry}. \item[\fbox{\code{executable}}] A JSON object specifying the name of the executable and the main module if this package contains also an executable application. The name of the executable must be defined (with key \code{name}) whereas the name of the main module (key \code{main}) is optional. If the latter is missing, CPM assumes that the main module is \code{Main}. Furthermore, the executable specification can also contain options for various Curry compilers. The options must be a JSON object consisting of compiler names as keys and an option string for the compiler. 
For instance, a possible specification could be as follows:
%
\begin{lstlisting}
{
  ...,
  "executable": {
    "name": "cypm",
    "main": "CPM.Main",
    "options": { "kics2" : ":set rts -T" }
  }
}
\end{lstlisting}
%
If a package contains an \code{executable} specification, the command
\code{cypm install} also compiles the main module and installs the
executable in the \code{bin} install directory of CPM
(see Section~\ref{sec:config} for details).
\item[\fbox{\code{executables}}]
It is also possible to specify more than one executable in a package,
which will be installed by the command \code{cypm install}.
In this case, one can use \code{executables} instead of \code{executable}.
The value of \code{executables} is an array of JSON objects as used for
\code{executable} values.
For instance, a possible specification could be
%
\begin{lstlisting}
{
  ...,
  "executables": [
    { "name": "compiler", "main": "Compiler.Main" },
    { "name": "repl",     "main": "REPL.Main" }
  ]
}
\end{lstlisting}
\item[\fbox{\code{testsuite}}]
A JSON object specifying a test suite for this package.
This object contains a directory (with key \code{src-dir}) in which the
tests are executed.
Furthermore, the test suite must also define a list of modules to be tested
(with key \code{modules}).
For instance, a possible test suite specification could be as follows:
%
\begin{lstlisting}
{
  ...,
  "testsuite": {
    "src-dir": "test",
    "modules": [ "testDataConversion", "testIO" ]
  }
}
\end{lstlisting}
%
All these modules are tested with CurryCheck by the command \code{cypm test}.
If no test suite is defined, all (exported) modules are tested in the
directory \code{src}.
A test suite can also contain a field \code{options} which defines a string
of options passed to the call to CurryCheck.
If a test suite contains a specific test script instead of modules to be
tested with CurryCheck, then one can specify the name of this test script
in the field \code{script}.
In this case, this script is executed in the test directory (with the
possible \code{options} value added).
The script should return the exit code \code{0} if the test is successful,
otherwise a non-zero exit code.
Note that one has to specify either a (non-empty) list of modules or a test
script name in a test suite, but not both.
One can also specify several test suites for a package.
In this case, the \code{testsuite} value is an array of JSON objects as
described above.
For instance, a test suite specification for tests in the directories
\code{test} and \code{examples} could be as follows:
%
\begin{lstlisting}
{
  ...,
  "testsuite": [
    { "src-dir": "test",
      "options": "-v",
      "script": "test.sh" },
    { "src-dir": "examples",
      "options": "-m80",
      "modules": [ "Nats", "Ints" ] }
  ]
}
\end{lstlisting}
\item[\fbox{\code{documentation}}]
A JSON object specifying the name of the directory which contains the
sources of the documentation (e.g., a manual) of the package, the main file
of the documentation, and an optional command to generate the documentation.
For instance, a possible specification could be as follows:
%
\begin{lstlisting}
{
  ...,
  "documentation": {
    "src-dir": "docs",
    "main"   : "manual.tex",
    "command": "pdflatex -output-directory=OUTDIR manual.tex"
  }
  ...
}
\end{lstlisting}
%
In this case, the directory \code{docs} contains the sources of the manual
and \code{manual.tex} is its main file which will be processed with the
specified command.
Occurrences of the string \code{OUTDIR} in the command string will be
replaced by the actual documentation directory (see description of the
command \code{cypm doc}).
If the command is omitted, the following commands are used (and you have
to ensure that these programs are installed):
\begin{itemize}
\item If the main file has the extension \code{.tex}, e.g.,
  \code{manual.tex}, the command is
\begin{lstlisting}
pdflatex -output-directory=OUTDIR manual.tex
\end{lstlisting}
  and it will be executed twice.
\item If the main file has the extension \code{.md}, e.g., \code{manual.md},
  the command is
\begin{lstlisting}
pandoc manual.md -o OUTDIR/manual.pdf
\end{lstlisting}
\end{itemize}
\end{description}
%
In order to get a compact overview of all metadata fields, we show an
example of a package specification where all fields are used:
%
\begin{lstlisting}
{
  "name": "PACKAGE_NAME",
  "version": "0.0.1",
  "author": "YOUR NAME <YOUR EMAIL ADDRESS>",
  "maintainer": [ "ANOTHER NAME <ANOTHER EMAIL ADDRESS>",
                  "FURTHER NAME <FURTHER EMAIL ADDRESS>" ],
  "synopsis": "A ONE-LINE SUMMARY ABOUT THE PACKAGE",
  "description": "A MORE DETAILED SUMMARY ABOUT THE PACKAGE",
  "category": [ "Category1", "Category2" ],
  "license": "BSD-3-Clause",
  "licenseFile": "LICENSE",
  "copyright": "COPYRIGHT INFORMATION",
  "homepage": "THE URL OF THE WEB SITE OF THE PACKAGE",
  "bugReports": "EMAIL OR BUG TRACKER URL FOR REPORTING BUGS",
  "repository": "THE (GIT) URL OF THE WEB SITE REPOSITORY",
  "dependencies": {
    "PACKAGE1" : ">= 0.0.1, < 1.5.0",
    "PACKAGE2" : "~1.2.3",
    "PACKAGE3" : ">= 2.1.4, < 3.0.0 || >= 4.0.0",
    "PACKAGE4" : "^2.1.3"
  },
  "compilerCompatibility": {
    "pakcs": ">= 1.14.0, < 2.0.0",
    "kics2": ">= 0.5.0, < 2.0.0"
  },
  "sourceDirs" : [ "src", "include" ],
  "exportedModules": [ "Module1", "Module2" ],
  "configModule": "ConfigPackage",
  "executable": {
    "name": "NAME_OF_BINARY",
    "main": "Main",
    "options": { "kics2" : ":set rts -T",
                 "pakcs" : ":set printdepth 100" }
  },
  "testsuite": [
    { "src-dir": "src",
      "options": "-m80",
      "modules": [ "Module1", "Module2" ] },
    { "src-dir": "examples",
      "options": "-v",
      "script" : "test.sh" }
  ],
  "documentation": {
    "src-dir": "docs",
    "main"   : "manual.tex",
    "command": "pdflatex -output-directory=OUTDIR manual.tex"
  },
  "source": {
    "git": "URL OF THE GIT REPOSITORY",
    "tag": "$\$$version"
  }
}
\end{lstlisting}

\clearpage
\section{Error Recovery}
\label{sec:recovery}

Situations might occur in which your package or repository is in an
inconsistent state, e.g., when you manually changed some internal files
or such files have been inadvertently changed or deleted, or a package is
broken due to an incomplete download.
Since CPM checks these files, CPM might exit with an error message that
something is wrong.
In such cases, it might be a good idea to clean up your package file system.
Here are some suggestions on how to do this:
\begin{description}
\item[\code{cypm clean}]~\\
This command cleans the current package from generated auxiliary files
(see Section~\ref{sec:cmd-reference}).
Then you can re-install the package and packages on which it depends by
the command \code{cypm install}.
\item[\code{rm -rf \$HOME/.cpm/packages}] ~\\
This cleans all packages which have been previously installed in the global
package cache (see Section~\ref{sec:internals}).
Such an action might be reasonable in case of some download failure.
After clearing the global package cache, all necessary packages are
downloaded again when they are needed.
\item[\code{rm -rf \$HOME/.cpm/index}] ~\\
This removes the central package index of CPM
(see Section~\ref{sec:internals}).
You can simply re-install the newest version of this index by the command
\code{cypm update}.
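For instance, after removing both the global package cache and the index,
a complete (and admittedly drastic) recovery sequence inside a package
directory could look as follows; this is only a sketch, and all removed
data will be downloaded again on demand:
\begin{lstlisting}
> rm -rf $\$$HOME/.cpm/packages $\$$HOME/.cpm/index
> cypm update
> cypm install
\end{lstlisting}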
\end{description} \end{document}
{ "alphanum_fraction": 0.7538008524, "avg_line_length": 38.6377245509, "ext": "tex", "hexsha": "80ca4ba48d0cdbf7620a3819ac86f4b7c0ce4809", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9e3c5db02cd146644eae6b829c0706d7d78c66fd", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "cau-placc/pakcs", "max_forks_repo_path": "docs/src/tooldocs/cpm/manual.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "9e3c5db02cd146644eae6b829c0706d7d78c66fd", "max_issues_repo_issues_event_max_datetime": "2021-02-24T12:41:30.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-21T22:25:13.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "cau-placc/pakcs", "max_issues_repo_path": "docs/src/tooldocs/cpm/manual.tex", "max_line_length": 85, "max_stars_count": 2, "max_stars_repo_head_hexsha": "9e3c5db02cd146644eae6b829c0706d7d78c66fd", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "cau-placc/pakcs", "max_stars_repo_path": "docs/src/tooldocs/cpm/manual.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-21T22:25:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-06T18:32:48.000Z", "num_tokens": 16348, "size": 64525 }
% !TEX root = report.tex \section{The Experiment} \label{sec:experiment} \subsection{Experiment set-up} In this section, we evaluate how well our proposed Alloy verification framework performs on various \textsc{drc} database queries. Below is the list of database tables in the schema. \newrobustcmd\Employee{\rel{Employee}} \newrobustcmd\Supervisor{\rel{Supervisor}} \newrobustcmd\Reviewer{\rel{Reviewer}} \begin{itemize}[topsep=0.5pc,itemsep=0.25pc] \item $\Employee(\field{id}, \field{security-level})$:\; each employee ID and their security level; \item $\Supervisor(\field{employee-id},\field{boss-id})$:\; stores employee--boss pairs; and \item $\Reviewer(\field{employee-id},\field{year},\field{reviewer-id})$:\; stores the employee ID of the performance reviewer for each employee in each year. \end{itemize} \newrobustcmd{\empX}{\ensuremath{x}} \newrobustcmd{\empY}{\ensuremath{y}} \newrobustcmd{\boss}{\ensuremath{b}} \newrobustcmd{\level}{\ensuremath{\ell}} \newrobustcmd{\yearT}{\ensuremath{t}} \medskip\noindent We also consider the following \textsc{drc} queries. \begin{itemize}[topsep=0.5pc,itemsep=0.25pc] \item $\boldsymbol{Q_1}$\hrsp:\;\marginnote{\textsc{safe}} Pairs of employees who share the same boss, at least one of whom has the same security level as their boss. \begin{align*} Q_1 &= \big\{ \empX, \empY \mid \exists \boss \hrsp\big[ \Supervisor(\empX, \boss) \wedge \Supervisor(\empY, \boss) \\ & \qquad\qquad\qquad\wedge \exists \level \hrsp[\Employee(\boss, \level) \wedge (\Employee(\empX, \level) \vee \Employee(\empY, \level))]\big]\big\} \end{align*} \item $\boldsymbol{Q_2}$\hrsp:\;\marginnote{\textsc{unsafe}} Employees who do not have a boss. \begin{align*} Q_2 &= \big\{ \empX \mid \neg \exists \boss \hrsp[\Supervisor(\empX, \boss)] \big\} \end{align*} \item $\boldsymbol{Q_3}$\hrsp:\;\marginnote{\textsc{unsafe}} Pairs of employees, at least one of which has the same security level as one of their bosses. \begin{align*} Q_3 &= \big\{ \empX, \empY \mid \exists \boss \exists \level \hrsp[\Employee(\boss, \level) \\ & \qquad\qquad\qquad\qquad\wedge (\Employee(\empX, \level) \wedge \Supervisor(\empX, \boss) \\ & \qquad\qquad\qquad\qquad\qquad\vee \Employee(\empY, \level) \wedge \Supervisor(\empY, \boss))]\big\} \end{align*} \item $\boldsymbol{Q_4}$\hrsp:\;\marginnote{\textsc{safe}} Pairs of employees who review each other within the same year. \begin{align*} Q_4 &= \big\{ \empX, \empY \mid \exists \yearT \hrsp[\Reviewer(\empX, \yearT, \empY) \wedge \Reviewer(\empY, \yearT, \empX)] \big\} \end{align*} \item $\boldsymbol{Q_5}$\hrsp:\;\marginnote{\textsc{unsafe}} \emph{Super} employees who (a)~have reviewed everyone at some point, and (b)~are the boss of at least one employee other than themselves. \begin{align*} Q_5 &= \big\{ \boss \mid \forall \empX \exists \yearT \hrsp[\Reviewer(\empX, \yearT, \boss)] \wedge \exists \empY \hrsp[ \Supervisor(\empY, \boss) \wedge \neg(\empY = \boss)] \big\} \end{align*} \end{itemize} \begin{lstlisting}[language=alloy,float,basicstyle={\footnotesize\ttfamily},caption={The complete Alloy program which verifies queries {\protect $Q_1$ through $Q_5$} as defined earlier in \sectionref{sec:experiment}. 
To verify the safety of other query functions, some portions of the code (shown highlighted) need to be modified.},label={src:experiment},aboveskip=0pc,belowskip=0pc] /* Scalar values */ sig Superparticle {} { Superparticle = Universe.Element } /* Domains */ abstract sig Universe { Element: some Superparticle } one sig UniverseAlpha, UniverseBeta extends Universe {} /* Common domain */ some sig Particle in Superparticle {} { Particle = UniverseAlpha.Element & UniverseBeta.Element } /* Database Instance */ one sig Table { Employee: Particle -> Particle, Supervisor: Particle -> Particle, Reviewer: Particle -> Particle -> Particle } /* Query functions */ fun query1[u: Universe]: Superparticle -> Superparticle { { x, y: u.Element | some b: u.Element | (x -> b in Table.Supervisor) and (y -> b in Table.Supervisor) and (some l: u.Element | (b -> l in Table.Employee) and ((x -> l in Table.Employee) or (y -> l in Table.Employee))) } } fun query2[u: Universe]: set Superparticle { { x: u.Element | not some b: u.Element | x -> b in Table.Supervisor } } fun query3[u: Universe]: Superparticle -> Superparticle { { x, y: u.Element | some b, l: u.Element | (b -> l in Table.Employee) and ((x -> l in Table.Employee) and (x -> b in Table.Supervisor) or (y -> l in Table.Employee) and (y -> b in Table.Supervisor)) } } fun query4[u: Universe]: Superparticle -> Superparticle { { x, y: u.Element | some t: u.Element | (x -> t -> y in Table.Reviewer) or (y -> t -> x in Table.Reviewer) } } fun query5[u: Universe]: set Superparticle { { b: u.Element | (all x: u.Element | some t: u.Element | x -> t -> b in Table.Reviewer) and (some y: u.Element | (y -> b in Table.Supervisor) and not (y = b)) } } /* Safety assertion */ assert queryIsSafe { all u, u': Universe | <|\hll{exp-assert1}|>query1<|\hlr{exp-assert1}|>[u] = <|\hll{exp-assert2}|>query1<|\hlr{exp-assert2}\label{li:replace-assert}|>[u'] } /* Results placeholder */ abstract sig Result { OneColOutput: set Superparticle, TwoColOutput: Superparticle -> Superparticle } one sig ResultAlpha, ResultBeta extends Result {} { ResultAlpha. <|\hll{exp-alpha-o}\llap{\color{Symbol}@}|>TwoColOutput<|\hlr{exp-alpha-o}|> = <|\hll{exp-alpha}|>query1<|\hlr{exp-alpha}\label{li:replace-alpha}|>[UniverseAlpha] ResultBeta. <|\hll{exp-beta-o}\llap{\color{Symbol}@}|>TwoColOutput<|\hlr{exp-beta-o}|> = <|\hll{exp-beta}|>query1<|\hlr{exp-beta}\label{li:replace-beta}|>[UniverseBeta] } /* Invoke the verification on the assertion */ check queryIsSafe for 4 \end{lstlisting} \bigskip \marginhead{How to verify each specific query} The Alloy model for these \textsc{drc} queries is shown in \autoref{src:experiment}. By default, the Alloy program verifies the query $Q_1$. If we wish to verify other queries, we need to modify some parts of the code (shown highlighted). Specifically, \begin{itemize}[topsep=0.5pc,itemsep=0.25pc] \item The query function name \alloy{query1} on lines \ref{li:replace-assert}, \ref{li:replace-alpha}, and \ref{li:replace-beta} should be replaced by the name of another function, such as \alloy{query2}, \alloy{query3}, etc. \item The result placeholder \alloy{TwoColOutput} may need to be changed to \alloy{OneColOutput} depending on the output signature of the query function. \end{itemize} \subsection{Verification result and analysis} As expected, the Alloy Analyzer \emph{correctly} determined, for each of these queries, whether it is safe. For the unsafe queries $Q_2$, $Q_3$, and $Q_5$, Alloy finds the first counterexample in a relatively short time (underlined in the output below). 
The console output for each of these unsafe queries is reproduced here (emphasis added). \begin{lstlisting}[numbers=none,escapeinside={<|}{|>},xleftmargin=6pc] <|\llap{\textbf{query2:~~}}|>Executing "Check queryIsSafe for <|\uline{\textbf{10}}|>" Solver=sat4j Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20 4658 vars. 1464 primary vars. 6953 clauses. 15ms. <|\uline{Counterexample} \uline{found}|>. Assertion is invalid. <|\uline{\textbf{17ms}}|>. \end{lstlisting} \begin{lstlisting}[numbers=none,escapeinside={<|}{|>},xleftmargin=6pc] <|\llap{\textbf{query3:~~}}|>Executing "Check queryIsSafe for <|\uline{\textbf{10}}|>" Solver=sat4j Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20 38148 vars. 1464 primary vars. 128213 clauses. 2068ms. <|\uline{Counterexample} \uline{found}|>. Assertion is invalid. <|\uline{\textbf{189ms}}|>. \end{lstlisting} \newpage \begin{lstlisting}[numbers=none,escapeinside={<|}{|>},xleftmargin=6pc] <|\llap{\textbf{query5:~~}}|>Executing "Check queryIsSafe for <|\uline{\textbf{10}}|>" Solver=sat4j Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20 9498 vars. 1464 primary vars. 24953 clauses. 58ms. <|\uline{Counterexample} \uline{found}|>. Assertion is invalid. <|\uline{\textbf{99ms}}|>. \end{lstlisting} \noindent For the safe queries $Q_1$ and $Q_4$, here is the console output (emphasis added). \begin{lstlisting}[numbers=none,escapeinside={<|}{|>},xleftmargin=6pc] <|\llap{\textbf{query1:~~}}|>Executing "Check queryIsSafe for <|\uline{\textbf{10}}|>" Solver=sat4j Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20 32858 vars. 1464 primary vars. 105823 clauses. 457ms. <|\uline{No} \uline{counterexample} \uline{found}|>. Assertion may be valid. <|\uline{\textbf{16593ms}}|>. \end{lstlisting} \begin{lstlisting}[numbers=none,escapeinside={<|}{|>},xleftmargin=6pc] <|\llap{\textbf{query4:~~}}|>Executing "Check queryIsSafe for <|\uline{\textbf{10}}|>" Solver=sat4j Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20 7898 vars. 1464 primary vars. 17663 clauses. 39ms. <|\uline{No} \uline{counterexample} \uline{found}|>. Assertion may be valid. <|\uline{\textbf{132ms}}|>. \end{lstlisting} \begin{note} In terms of computational cost, \begin{itemize}[topsep=0.5pc,itemsep=0.25pc] \item Queries with more complex syntax tend to take more time to verify (cf.\ \alloy{query1} vs.\ \alloy{query4}), even if the maximum number of objects for each type in the search space is the same. \item When comparing queries of similar complexity (such as \alloy{query1} vs.\ \alloy{query3}), unsafe queries take less computation time, as they only require finding one counterexample, whereas safe queries need to exhaust the search space. \end{itemize} In addition, if we choose to vary the maximum number of objects for each model type in the Alloy verification task, then undoubtedly, as the number increases, the computational cost also increases. This is normal Alloy behavior, so we do not include the results of such an experiment here. \end{note}
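For reference, such a scope-variation experiment would only amount to changing the final command of \autoref{src:experiment}; a minimal sketch (the scope values here are chosen arbitrarily) is to declare several \alloy{check} commands and execute them one after another in the Analyzer:
%
\begin{lstlisting}[language=alloy,numbers=none]
-- each command bounds every top-level signature
-- to at most the given number of atoms
check queryIsSafe for 4
check queryIsSafe for 8
check queryIsSafe for 12
\end{lstlisting}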
{ "alphanum_fraction": 0.6809067132, "avg_line_length": 52.1363636364, "ext": "tex", "hexsha": "67a05f1fbac3dfd76e7d877fe61d04f297f6fc8d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1be5a4d76e8c7c6f61ac18a5c987cc2fe1d7216d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "abhabongse/relationalcalculus-alloy", "max_forks_repo_path": "master-project/report/ch04_experiment.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "1be5a4d76e8c7c6f61ac18a5c987cc2fe1d7216d", "max_issues_repo_issues_event_max_datetime": "2021-08-23T20:37:09.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-11T18:47:49.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "abhabongse/relationalcalculus-alloy", "max_issues_repo_path": "master-project/report/ch04_experiment.tex", "max_line_length": 384, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1be5a4d76e8c7c6f61ac18a5c987cc2fe1d7216d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "abhabongse/relationalcalculus-alloy", "max_stars_repo_path": "master-project/report/ch04_experiment.tex", "max_stars_repo_stars_event_max_datetime": "2018-03-08T16:30:55.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-08T16:30:55.000Z", "num_tokens": 3138, "size": 10323 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % CS630: Database Management Systems % Copyright 2014 Pejman Ghorbanzade <[email protected]> % Creative Commons Attribution-ShareAlike 4.0 International License % More info: https://github.com/ghorbanzade/beacon %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Question 2} Write a PL/SQL procedure that receives as arguments the \texttt{pid}, \texttt{sid} and \texttt{quantity} of a prospective order. First, you need to determine whether the value (i.e., dollar amount) of that order will be lower than or equal to 75\% of the average value of previous orders for that part. If the answer is yes, go ahead and insert the new order into the database. Otherwise, compute the \texttt{price} value that would put the prospective order value exactly at the 75\% limit above, and then insert a NEW part with that price and the same attributes as the part given by the \texttt{pid} parameter (except for the \texttt{pid}, of course, for which you need to determine a unique value). Then, insert into the database an order with the given \texttt{sid} and \texttt{quantity}, but for the new \texttt{pid}. \subsection*{Solution} \lstset{language=SQL} \lstinputlisting[firstline=8]{ \topDirectory/src/pls/hw05/hw05q02.pls }
{ "alphanum_fraction": 0.6887871854, "avg_line_length": 57, "ext": "tex", "hexsha": "fc254c8e0c45d1c44e11419e8ea26e95263b82f6", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_path": "umb-cs630-2014f/src/tex/hw05/hw05q02.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_path": "umb-cs630-2014f/src/tex/hw05/hw05q02.tex", "max_line_length": 329, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_path": "umb-cs630-2014f/src/tex/hw05/hw05q02.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "num_tokens": 330, "size": 1311 }
%!TEX root = ../gronskiy_phd_thesis.tex \chapter[Does the Free Energy Define the Model Behavior?]{Does the Free Energy Define \\ the Model Behavior?} \label{ch:smbp_and_rem} \hfill \begin{minipage}[t]{.75\textwidth} \textit{``When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.''} \\ \hrule \vspace{.2cm} \hfill \textsc{--- James Whitcomb RILEY} (attr.) \end{minipage} \section{Introduction} \subsection{Motivation} Random combinatorial optimization problems exhibit a highly complex structure with spin glass behavior~\citep{Mezard87,SK:CG:MV:Science1983}. Optimization algorithms for these problems are slowed down by fluctuations in the problem instances when they search for solutions with low costs. Conceptually, we consider an optimization algorithm as a mapping from an input space of random instances to an output space of solutions; such algorithms should sample ``typical'' solutions from appropriate posterior distributions. In this chapter, we concentrate on maximum entropy sampling principles guided by Gibbs distributions to study information-theoretic properties of random combinatorial optimization problems and their search landscape. Analytical computation of the free energy, entropy and other macroscopic thermodynamical properties enables us to understand the solution structure of a large system, but~--- as already clarified in Chapter~\ref{ch:free_energy}~--- this goal is known to be notoriously difficult and challenging from a mathematical standpoint~\citep{talagrand03}. While we solved this problem for specific cases (Chapter~\ref{ch:free_energy}) with the purpose of applying it to robust optimization (Chapters~\ref{ch:gen_appch} and~\ref{ch:mst}), here we are interested in a more general consequence of such results: a relation between the Random Energy Model (REM; see~\citealp{derrida81}) and the Sparse Minimum Bisection Problem (sMBP; see~Section~\ref{sec:free_mbp-problem}). Why are the relations between REM and sMBP of interest? The REM does not introduce any statistical dependencies between solutions. Therefore, optimization algorithms have to exhaustively inspect all exponentially many solutions of REM to find the one with minimal costs. Sparse Minimum Bisection introduces correlations between solutions, but they are asymptotically so weak that they do not change the free energy. Since the free energy is the moment-generating function of the Gibbs distribution, we hypothesize that the Gibbs distributions of both problems are equivalent in terms of Kullback-Leibler divergences. If this claim held, then we would not be able to efficiently search for low-cost solutions of sMBP. \subsection{Contributions and Outline of the Chapter} \label{sec:smbp_and_rem_contribs} In this chapter, we revisit the idea of characterizing structural information in solutions of combinatorial problems by information-theoretic properties. More specifically: % \begin{itemize} \item we revisit results on the asymptotic behavior of the free energy (Chapter~\ref{ch:free_energy}) to obtain bounds on the free energy of solutions of sMBP with random edge weights; \item these results reveal a remarkable phenomenon: the free energy of sMBP behaves very similarly to that of REM. 
Specifically, we show that the free energy of sMBP with random edge weights exhibits phase transitions equivalent to those of Derrida's REM; \item in order to understand this observation and the solution structure for dependent and independent solutions more deeply, we then state and prove results about the various ways in which sMBP and REM can be quantitatively related to each other: we show that the Kullback-Leibler divergence between the Gibbs distributions induced by sMBP and REM is bounded, but not zero, which allows us to make a conjecture about how their complexities relate. \end{itemize} The chapter is organized as follows. As usual, we start by describing some related work in Section~\ref{sec:smbp_rem_related_work}. We then present and discuss our results about the similar behavior of REM and sMBP in Section~\ref{sec:smbp_rem_similar}. We speculate on the ways to interpret these results in Sections~\ref{sec:mbp_and_rem_how_similar} and~\ref{sec:rem_conclusion}. \section{Background and Related Work Overview} \label{sec:smbp_rem_related_work} Information theory, statistical mechanics and combinatorial optimization in large disordered systems are disciplines that have enjoyed several waves of intensive research. The first wave, associated exclusively with statistical mechanics, was marked by the works of~\citet{sk75spinb} or~\citet{derrida81} on mean-field models of spin glasses. For the reasons stated in the introduction, we concentrate on Derrida's solvable REM. In short, REM is the simplest example of a disordered system, whose configurations have i.i.d. energies and, therefore, are not efficiently ``searchable''. It will become important in the rest of the chapter that REM reflects the situation in which there are no stochastic dependencies between solutions. This work inspired several continuations, of which we can mention, e.g., \citep{derrida86} as a generalization of REM, or~\citep{Aizenman1987} as an exact solution of the Sherrington--Kirkpatrick model~\citep{sk75spin}. \index{Traveling Salesman Problem} \index{TSP|see{Traveling Salesman Problem}} The second wave of interest was not associated exclusively with statistical mechanics, but also explored its connection to combinatorial optimization. Inspired by the work of Derrida, \citet{mezard84tsp} considered the Traveling Salesman Problem (TSP) as a large disordered system which seeks to optimize its energy defined by respective \textit{Hamiltonians}\index{Hamiltonian} (see Section~\ref{sec:background_disordered_systems}). This approach of viewing a combinatorial optimization problem from the statistical mechanics perspective turned out to be extremely fruitful: we recommend the book~\citep{LUCZAK1994} as a good overview of the results. We should also mention here the work of~\citet{Auffinger2014}, who studied algorithmic complexity from the statistical mechanics viewpoint. \index{Complexity} \index{Algorithmic complexity} Finally, in the last two decades, many attempts have been made to systematize approaches traditionally used in statistical mechanics and render them rigorous in a mathematical sense. Here, we point out the work by \citet{Bovier2002FreeEnergyFluct}, as well as extensive reviews by \citet{talagrand03,bovier2012statistical}. 
\section{Comparison of REM and sMBP} \subsection{Random Energy Model (REM)} \index{Random Energy Model} \index{REM|see{Random Energy Model}} The REM introduced by~\citet{derrida81} is a model $\mathcal{P}^\mathrm{rem} = (\mathcal{X}, \C^\mathrm{rem}, R^\mathrm{rem})$ \nomenclature[F, 50]{$(\mathcal{X}, \C^\mathrm{rem}, R^\mathrm{rem})$}{definition of REM\nomnorefeq}% where the following conditions apply: \begin{enumerate} \item The number of solutions (in the original terminology, \textit{configurations}) equals \begin{equation} |\C^\mathrm{rem}| = 2^K. \end{equation} \item Here, the data source $X \in \mathcal{X}$ is a vector of $2^K$ Gaussian random variables (for the notation, see Definition~\ref{def:optimization_problem_definition}), and all solutions $c_i \in \C^\mathrm{rem}$ carry costs (in the original terminology, \textit{Hamiltonians} or \textit{energy levels}) \begin{equation} R^\mathrm{rem}(c_i, X) = X_i, \;\; \text{where} \;\; X_i \sim \mathcal{N}(0, \sigma^2) \end{equation} \item The costs $X_i$ are i.i.d. \end{enumerate} As Derrida noted in his paper, ``the third property is specific to this model. It simplifies the model enough to allow us to solve it exactly''. While this is true for REM, such independence is not characteristic of most models. The next section discusses the dependencies in sMBP. \subsection{Similar Behavior of REM and sMBP} \label{sec:smbp_rem_similar} Earlier, we introduced the sparsity constraint on $d$ in Theorem~\ref{thm:sparse_mbp_tight_bound} because without it, the stochastic dependency between two random solutions of the original MBP~\citep{garey79} is very high. Indeed, in the original MBP, any two bisections \textit{always} share edges. By introducing $d$ we a)~substantially reduce this dependency, but on the other hand b)~do not eliminate it completely, as happens in REM~\citep{derrida81}. But by introducing sparsity, didn't we essentially \textit{transform} MBP into REM? For example, one can observe that for the classical, dependency-free REM the free energy asymptotics looks just the same; in particular, it exhibits the same phase transition and the same phase shapes: \begin{theorem}[adapted formulation from~\citep{talagrand03}] \label{thm:rem} Assume $m = 2^K$ is the number of configurations for the REM model with Gaussian cost values, with parameters $\mathcal{N}(0, \tau^2)$. Then the free energy rate is (asymptotically in $n \to \infty$) equal to \begin{equation} \label{eq:talagrand_rem_with_tau} \lim_{n \to \infty} \frac{\E[\log Z]}{\log m}= \left\{ \begin{array}{ll} \frac{\beta^2 \tau^2}{2 \log m} + 1 & \beta < \sqrt{2 \log m}/\tau,\\ \frac{\beta \tau \sqrt{2}}{\sqrt{\log m}} & \beta \ge \sqrt{2 \log m}/\tau. \end{array} \right. \end{equation} \end{theorem} We note that Talagrand formulated it in a more general setting~\citep[cf.][Prop.~1.1.3]{talagrand03}; it is adapted here for clarity. Choosing $\tau = \sigma \sqrt{N}$ and applying the $\beta$ rescaling from Definition~\ref{def:cts}, we arrive at an equivalent formulation: for Gaussian cost values with parameters $(0, \sigma^2 N)$, we derive \begin{equation} \label{eq:talagrand_rem_adapted} \lim_{n \to \infty} \frac{\E[\log Z]}{\log m}= \left\{ \begin{array}{ll} 1+\frac{\hat \beta^2\sigma^2}{2}, & \hat \beta< \frac{\sqrt{2}}{\sigma},\\ \hat \beta \sigma \sqrt{2}, & \hat \beta \ge \frac{\sqrt{2}}{\sigma} \end{array} \right. \end{equation} \myremark For non-centered cost values, a correction similar to the one in~\eqref{eq:sparse_mbp_tight_bound} has to be made on the left-hand side, which is trivial. 
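For concreteness, here is a sketch of this correction (using the same $\hat\beta$ rescaling as above): for cost values with mean $\mu N$ and variance $\sigma^2 N$, the adapted statement~\eqref{eq:talagrand_rem_adapted} becomes
\begin{equation*}
\lim_{n \to \infty} \frac{\E[\log Z] + \hat \beta \mu \sqrt{N \log m}}{\log m} = \left\{
\begin{array}{ll}
1+\frac{\hat \beta^2\sigma^2}{2}, & \hat \beta< \frac{\sqrt{2}}{\sigma},\\
\hat \beta \sigma \sqrt{2}, & \hat \beta \ge \frac{\sqrt{2}}{\sigma},
\end{array}
\right.
\end{equation*}
since shifting every cost value by $\mu N$ shifts $\log Z$ by $-\beta \mu N = -\hat\beta \mu \sqrt{N \log m}$. This is exactly the form of the left-hand side used in the conjecture restated in Section~\ref{sec:mbp_and_rem_how_similar} below.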
We are now going to sketch an answer to the following question: how much does sMBP look like REM? We give the following result and then discuss it. First, let us make the following definition. \begin{definition} For the sMBP setting stated in Theorem~\ref{thm:sparse_mbp_tight_bound}, we call a REM \textit{equivalent} if its number of configurations is equal to $m$ and its cost values are Gaussian with parameters $(\mu N, \sigma^2 N)$. \end{definition} For convenience of the following explanation, let us denote the random source behind such a REM by $Y$ (analogously to $X$ in the case of sMBP). Hence, for REM, the cost values $R(c, Y) \sim \mathcal{N}(\mu N, \sigma^2 N)$ are all independent. We denote the respective Gibbs distributions by $\psmbp_\beta(c | X)$ and $\prem_\beta(c | Y)$. \index{Kullback-Leibler divergence} \begin{theorem}\label{thm:kl_divergence} The rate of the KL-divergence between the Gibbs distributions over configurations of the equivalent REM and of sMBP is non-zero and exhibits a phase transition. \begin{equation} \frac{\Expct_{X, Y}[\KL(\prem_\beta \| \psmbp_\beta)]}{\log m} = \left\{ \begin{array}{ll} \hat\beta^2 \sigma^2, & \hat \beta< \frac{\sqrt{2}}{\sigma},\\ \hat \beta \sigma \sqrt{2}, & \hat \beta \ge \frac{\sqrt{2}}{\sigma} \end{array} \right. \end{equation} \nomenclature[F, 50a]{$\prem_\beta(c "| X)$}{Gibbs distribution of REM model}% \nomenclature[F, 50c]{$\psmbp_\beta(c "| X)$}{Gibbs distribution of sMBP}% \end{theorem} \paragraph{Proof} In the first part of the proof, we will omit the expectation for the sake of brevity. % \begin{align} \MoveEqLeft \KL(\prem_\beta \| \psmbp_\beta) = \sum_c \prem_\beta(c\vert Y) \log \frac{\prem_\beta(c\vert Y)}{\psmbp_\beta(c\vert X)} = \notag \\ &= \sum_c \prem_\beta(c\vert Y) \left( \log \frac{e^{-\beta \Rrem(c,Y)}}{\Zrem(Y)} \right. \notag \\ &\qquad\qquad\qquad\qquad \left. - \log \frac{e^{-\beta \Rsmbp(c,X)}}{\Zsmbp(X)} \right) \notag \\ &= \sum_c \prem_\beta(c\vert Y) \left( \vphantom{\sum} \log e^{-\beta \Rrem(c,Y)} \right. \notag \\ &\qquad\qquad\qquad - \log e^{-\beta \Rsmbp(c,X)} \notag \\ &\qquad\qquad\qquad \left. - \log \Zrem(Y) + \log \Zsmbp(X) \vphantom{\sum}\right) \notag \\ &= -\beta \sum_c \prem_\beta(c\vert Y) \Bigl( \Rrem(c,Y) - \Rsmbp(c,X) \Bigr) \notag \\ &\qquad\qquad\qquad - \log \Zrem(Y) + \log \Zsmbp(X) \end{align} Returning to the expectation $\Expct_{X,Y}$ and recalling that all the sMBP-related terms depend on $X$ and all the REM-related terms depend on $Y$, we can continue: \begin{align} \MoveEqLeft -\beta \Expct_{X,Y} \biggl[\sum_c \prem_\beta(c\vert Y) \Bigl( \Rrem(c,Y) - \Rsmbp(c,X) \Bigr) \biggr] \notag \\ &\qquad\qquad \underbrace{ {} - \Expct_Y [\log \Zrem(Y) ] + \Expct_X [\log \Zsmbp(X)] }_{ \text{ cancel out due to Thm.~\ref{thm:sparse_mbp_tight_bound} and \ref{thm:rem}} } \notag \\ &= -\beta \Bigl( \Expct_{Y} \Bigl[\sum_c \prem_\beta(c\vert Y) \Rrem(c,Y) \Bigr] \notag \\ &\qquad\qquad - \Expct_Y \underbrace{\sum_c \prem_\beta(c\vert Y)}_{1} \cdot \underbrace{\vphantom{\sum_c}\Expct_X \Rsmbp(c,X)}_{\mu N} \Bigr) \notag \\ &= \beta \mu N + \beta \Expct_Y \Bigl[\frac{d}{d\beta} \log \Zrem(Y)\Bigr]. 
\end{align} By a dominated convergence argument, we can, under mild conditions, interchange expectation and differentiation, which together with~\eqref{eq:talagrand_rem_with_tau} and~\eqref{eq:talagrand_rem_adapted} leads to \begin{equation} \Expct_{X, Y}[\KL(\prem_\beta \| \psmbp_\beta)] = \left\{ \begin{array}{ll} \hat\beta^2 \sigma^2 \log m, & \hat \beta< \frac{\sqrt{2}}{\sigma},\\ \hat \beta \sigma \sqrt{2} \log m, & \hat \beta \ge \frac{\sqrt{2}}{\sigma}, \end{array} \right. \end{equation} which completes the proof of the theorem. \QEDA \subsection{Consequences of Similar Behavior of REM and sMBP} We now discuss this result. There are several observations to be made about the whole line of research reflected in Theorems~\ref{thm:sparse_mbp_tight_bound},~\ref{thm:rem} and~\ref{thm:kl_divergence}. \begin{figure}[th!] \centering \begin{subfigure}[b]{.48\textwidth} \includegraphics[width=\linewidth]{figures/ch_smbp_and_rem/different_solution_overlaps} \caption{Solution to an sMBP problem} \label{fig:ch_rem_smbp_illustration-0} \end{subfigure} \hfill \begin{subfigure}[b]{.48\textwidth} \includegraphics[width=\linewidth]{figures/ch_smbp_and_rem/different_solution_overlaps_1} \caption{No overlap at all} \label{fig:ch_rem_smbp_illustration-1} \end{subfigure} \\[.5cm] \begin{subfigure}[b]{.48\textwidth} \includegraphics[width=\linewidth]{figures/ch_smbp_and_rem/different_solution_overlaps_2} \caption{Vertex overlap, but no edge overlap} \label{fig:ch_rem_smbp_illustration-2} \end{subfigure} \hfill \begin{subfigure}[b]{.48\textwidth} \includegraphics[width=\linewidth]{figures/ch_smbp_and_rem/different_solution_overlaps_3} \caption{Only edge overlap counts} \label{fig:ch_rem_smbp_illustration-3} \end{subfigure} \\[.5cm] \caption{Illustration of various types of overlaps. Only case \textbf{(d)} contributes to statistical dependence between the costs of solutions. Carefully computing the edge overlap would be a major step toward understanding the higher moments of $\log Z$ (in Lemma~\ref{lem:expct_d_asymptotics} we computed only the expected value).} \label{fig:ch_rem_smbp_illustration} \end{figure} First, Theorems~\ref{thm:sparse_mbp_tight_bound},~\ref{thm:rem} and~\ref{thm:kl_divergence} can be interpreted as follows: they establish the similarity of both problems in terms of macroscopic thermodynamical properties such as the free energy rate, but at the same time they convey their difference in terms of the KL-divergence rate. By definition, $\KL$ measures the amount of information one might gain (or lose) by assuming $\prem$ instead of $\psmbp$, or vice versa. The fact that the expected KL-divergence rate is non-zero allows us to say that sMBP and REM are still different in terms of their distributions. Second, and probably most importantly, by the same line of reasoning as in Theorem~\ref{thm:kl_divergence}, we can conclude that the expected KL-divergence rate between every pair $({\prem}', {\prem}'')$ of independent (that is, with independent $Y'$ and $Y''$) equivalent REMs is identical to that of $(\prem, \psmbp)$. This highlights the fact that, for a full understanding of the relations (similarities and differences) between them, one needs to quantify this difference in terms of higher moments, and not only the \textit{expectation}, of the KL-divergence rate. It is important to realize that the higher moments of the KL-divergence can most likely be controlled by the higher moments of $\log Z$, i.e., $\Var[\log Z]$ and so on. 
The necessary understanding of the behavior of $\Var[\log Z]$, in turn, can be reached by carefully computing higher moments of the edge overlap $D$ of two solutions $c_1$ and $c_2$ (illustrated in Figure~\ref{fig:ch_rem_smbp_illustration}): one can see this, for example, from the proof of Theorem~\ref{thm:sparse_mbp_tight_bound}, where we have \begin{equation*} \Var Z \sim (\Expct Z)^2 \bigl( \sigma^2 \beta^2 \Expct_\D D \bigr) \end{equation*} as an intermediate result. There is evidence that Taylor expansions can be used to express higher moments of $\log Z$ via those of $Z$. \section{Comparison of REM and non-sparse MBP} \label{sec:mbp_and_rem_how_similar} An interesting question arises: can we say anything about the non-sparse MBP (i.e., the case when $d \not \ll n$)? Note that in Section~\ref{sec:free_energy_in_general_case} of Chapter~\ref{ch:free_energy} we made a conjecture, backed up by extensive simulations, which we restate here: \newtheorem*{adhocconj}{Conjecture~\ref{ad-hoc}} \begin{adhocconj}[see Section~\ref{sec:free_energy_in_general_case}] Consider a class of combinatorial optimization problems complying with the Common Theorem Setting, with weights $W_i$ having mean $\mu$ and variance $\sigma^2$. Then the free energy satisfies \begin{equation*} \lim_{n\to \infty} \frac{\E[\log Z(\beta, X)] +\hat \beta \mu \sqrt{N\log m}}{\log m} = \begin{cases} 1 + \alpha^2\frac{\hat{\beta}^2\sigma^2}{2}, &\hat{\beta} < \frac{\sqrt{2}}{\alpha\sigma} \\ \alpha\hat{\beta}\sigma\sqrt{2}, &\hat{\beta} \geq \frac{\sqrt{2}} {\alpha\sigma} \end{cases} \end{equation*} for some $\alpha \ge 1$, where $\alpha$ is well approximated by \begin{equation*} \alpha = \sqrt{\frac{\E_X\Var_{\mathcal{D}} R(c,X)}{\E_{\mathcal{D}}\Var_X R(c,X)}} = \sqrt{\frac{\E_X\Var_{\mathcal{D}} R(c,X)}{N\sigma^2}} \end{equation*} \end{adhocconj} It is instructive to note here that, loosely speaking, the parameter $\alpha$ represents the ratio of the variability across solutions (numerator) to the variability inside each solution (denominator). The higher the dependence is, the less variability across solutions exists. Apparently, $\alpha = 1$ corresponds to a case of no dependencies (sMBP), while lower $\alpha$ corresponds to higher levels of dependencies. And again, to further back up this conjecture, one has to develop an understanding of $\Var_{\mathcal{D}} R(c, X)$, i.e., the variance of the costs of uniformly chosen solutions, which boils down to getting the overlaps under control (computing $\Var_\mathcal{D} [D]$ analogously to $\Expct_\mathcal{D}[D]$, as we did in Lemma~\ref{lem:expct_d_asymptotics}). \section{Discussion and Conclusion} \label{sec:rem_conclusion} The geometry of random combinatorial optimization problems is often characterized by exponentially many local minima which prevent an efficient search for low-cost solutions. In this chapter, we have studied random instances of the sparse Minimum Bisection Problem, which exhibit a statistical behavior very similar to that of the Random Energy Model. This similarity between the Gibbs distributions for REM instances and for sMBP instances is documented by identical values for the free energies and for the Kullback-Leibler divergences between pairs of distributions. Furthermore, this equivalence also suggests that the computational complexity of REM and sMBP might be the same and, consequently, random instances of sMBP might not be efficiently optimizable, due to the same lack of search information as in REM. 
As a future research direction, we plan to analyze the finite-$\log m$ corrections to estimate the convergence rate of the free energy toward its asymptotic limit. Another open question is whether there exist REM instances that are arbitrarily close (w.r.t.\ KL-divergence) to sMBP instances in the asymptotic limit, which would explain why we cannot find efficient optimization schemes for sparse Minimum Bisection. To answer the last question, we have highlighted the importance of developing a better understanding of the higher moments of the solution overlap (Figure~\ref{fig:ch_rem_smbp_illustration}), since the proofs given earlier show that controlling the dependency between solutions provides the essential information about the problem's (dis)similarity to REM, where such dependencies do not exist at all.
{ "alphanum_fraction": 0.7251774643, "avg_line_length": 49.0264317181, "ext": "tex", "hexsha": "f4bdaa3e881fceb1b14c5e1840ad511232913a5c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "agronskiy/phd-thesis", "max_forks_repo_path": "thesis/ch_smbp_and_rem/ch_smbp_and_rem.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "agronskiy/phd-thesis", "max_issues_repo_path": "thesis/ch_smbp_and_rem/ch_smbp_and_rem.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "agronskiy/phd-thesis", "max_stars_repo_path": "thesis/ch_smbp_and_rem/ch_smbp_and_rem.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6250, "size": 22258 }
With the advent of multicore processors, there has been a renewed interest in the development of performance tools and algorithms targeted for parallel architectures. Many research areas provide a wide variety of problems that show improved performance when executed in parallel. One such area is scientific and numerical computing. Scientific algorithms are used by researchers from various fields such as chemistry, biology, and geography, as well as different sub-fields of computer science such as machine learning. In most cases, these algorithms are written in languages that are collectively known as array-based languages. A few examples of such languages are \matlab\cite{matlab}, Julia\cite{julia} and Python\cite{python} with its NumPy\cite{numpy} library. Array-based languages offer features such as an interpreter-style read-eval-print loop, functions such as eval and feval for dynamic code evaluation, and the absence of static types, which enable rapid prototyping. However, due to these very same features, these languages show poorer performance when compared to statically compiled languages. A common approach for improving the performance is to compile whole programs to languages such as {\sc FORTRAN}\cite{fortran} and C\cite{clang}. However, in most cases, the most computationally intensive portion of the program is small, often localised inside a loop body. Hence, compiling the entire program is not necessary. In most cases, the speed-up observed through partial compilation of hot code sections is commensurate with that obtained by compiling the whole program. This allows the user to continue programming in the language he/she is more comfortable in. Additionally, these functions may be reusable for other programs. In such cases, the functions have to be compiled only once and can be reused by the other programs. This thesis addresses the problem of improving the performance of programs written in array-based languages by compiling the hot sections to parallel C++\cite{cpp}. We support both \matlab and NumPy. There are two main challenges. The first is supporting the different and often complementary semantics of both languages. The other is supporting the large number of builtin methods that are provided by both languages. Our solution implements a static C++ backend for the Velociraptor\cite{velociraptor} toolkit and uses tools to compile \matlab and Python programs to the Velociraptor intermediate representation, VRIR. The \mclab\cite{casey:mclab} static pipeline is used for \matlab, and PyVrir, a Python frontend for Velociraptor, is used for Python. \section{\velocty Compilation Pipeline} \label{sec:comppipe} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.5]{Figures/Overview_thesis.png} \caption[Overview of VeloCty]{Overview of VeloCty. The shaded boxes indicate the components presented in this thesis. The other solid boxes correspond to existing \mclab and Velociraptor tools that we use.}\label{Fig:Overview} \end{center} \end{figure} The compilation pipeline for \velocty can be seen in \figref{Fig:Overview}. As mentioned earlier, PyVrir is a proof-of-concept Python frontend that is part of the Velociraptor framework, and the \matlab frontend is written using the \mclab frontend. In the \mclab pipeline, a \matlab program is parsed by the \mclab frontend and converted into an AST-based representation known as McAST. 
The McSAF\cite{doherty11} framework then performs various analyses such as kind analysis\cite{Doherty:2011:KAM:2076021.2048077} and function lookup on McAST, and then generates another AST-based representation called McLAST. The framework also performs a transformation from colon expressions to range expressions, which was implemented as part of this thesis. Additional details on the transformation can be found in Section \ref{sec:colonExpr}. McLAST is then converted to Tame IR\cite{Dubrau:2012} by the Tamer\cite{Dubrau:2012} framework. Tame IR is a three-address representation of \matlab. Analyses such as value analysis, shape analysis, isComplex analysis and IntegerOk analysis are performed on this IR. These analyses provide information on the type, dimensions and complexity of the different variables in the code, which is useful for generating the VRIR and subsequently the C++ code. The IntegerOk analysis identifies variables which can safely be declared as integers in the target language. This analysis is useful since \matlab defines all variables as double by default. Tame IR is then given as input to Tamer+, a code aggregation framework, which generates the high-level McLAST representation from it. Code generated from McLAST is devoid of temporary variables and hence has better readability. The VRIR code generator takes McLAST as input and generates VRIR in the s-expression format. It also generates the glue code, using the \matlab MEX\cite{mex} API, required for interfacing with \matlab. The VRIR is then parsed by the Velociraptor frontend and converted into an AST representation. Various passes such as the simplification pass, the loop info collector and the index info collector pass are performed over the AST, which is then passed to the static code generator. Finally, the code generator outputs C++ code which can then be compiled to a shared library along with the language-specific runtime library containing helper functions and the glue code. \section{The Execution Model} \label{sec:execModel} The execution model, shown in \figref{Fig:working}, describes how program execution occurs before and after statically compiling some of the methods in the program to C++ using \velocty. The user selects a function which he/she identifies as computationally intensive. \velocty generates a callgraph using the user-specified function as an entry point. In \figref{Fig:working}, the core1 function is specified as the entry point, and the resulting callgraph contains core1 and core2. All functions in the callgraph are then compiled to C++ by \velocty. The figure shows the implementation of the core1 function. The function takes a double array \textsf{a} as input and returns another double array \textsf{x}. The array \textsf{x} is initialised inside the function using the builtin function \textsf{zeros}. The function also contains a for loop which iterates from 1 to 10. On each iteration, the i$^{th}$ element of \textsf{x} is assigned the sum of the i$^{th}$ element of \textsf{a} and the constant 10, and a call is made to the core2 function. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Figures/WorkingDetails.png} \caption[Execution Model]{Execution model of VeloCty. The dark shaded blocks represent functions written in the source language and the blocks that are lightly shaded are the functions written in C++. 
The white block represents the \velocty compiler.}\label{Fig:working} \end{center} \end{figure} The generated C++ code contains calls to functions in a language-specific library. The library contains functions which mirror the builtins in the source language. These functions are also written in C++ but are language-specific because the behaviour of the functions depends on the source language. In this case, the \textsf{zeros} function is implemented in the runtime library, and the generated code contains a call to it. \velocty also generates glue code to interface with the source program. The MEX API and the Python/C\cite{pyc} API are used for interfacing with \matlab and Python respectively. The generated code is compiled with the runtime and the glue code and packaged as a shared library. All calls to the entry-point function, in this case core1, are then directed to the shared library instead of the original source-language version.\\ \section{Contributions} The main contributions of this thesis are as follows. \begin{itemize} \item Design and implementation of a system for partially compiling array-based languages. \item Generating the Velociraptor intermediate representation from the McSAF intermediate representation. \item Implementation of a transformation from colon expressions to range expressions. \item Generating the glue code necessary for invoking C++ functions from \matlab. \item Generating C++ code from the Velociraptor IR. \item Optimizing the generated code by eliminating bounds checks, removing unnecessary memory allocations and executing code in parallel. \end{itemize} \section{Thesis Outline} This thesis is divided into \ref{chap:Conclusions} chapters, including this one, which are structured as follows. \chapref{chap:Background} gives a brief overview of the tools used by VeloCty. \chapref{chap:McSAFTranslate} describes the translation from the McSAF intermediate representation to the Velociraptor intermediate representation (VRIR). \chapref{chap:vrirBackend} describes the generation of C++ code from VRIR. \chapref{chap:glueCode} describes the various aspects of generating glue code for \matlab's MEX API, including how input data is converted from MEX data structures to VeloCty data structures. \chapref{chap:codeOptimise} explains the code optimisations implemented to improve the performance. \chapref{chap:results} describes the performance of \velocty compared to various other systems. \chapref{chap:Related} provides an overview of related work, and \chapref{chap:Conclusions} concludes.
{ "alphanum_fraction": 0.8126329222, "avg_line_length": 162.1379310345, "ext": "tex", "hexsha": "73849f0dc1e6985252e8bee7797dddeb09005fae", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2d11534b5a922c7e4342325908bc35076468b90d", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "sameerjagdale/thesis", "max_forks_repo_path": "text/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2d11534b5a922c7e4342325908bc35076468b90d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "sameerjagdale/thesis", "max_issues_repo_path": "text/intro.tex", "max_line_length": 1891, "max_stars_count": null, "max_stars_repo_head_hexsha": "2d11534b5a922c7e4342325908bc35076468b90d", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "sameerjagdale/thesis", "max_stars_repo_path": "text/intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2061, "size": 9404 }
\chapter[Introduction]{Introduction} \label{ch:intro} % \section{Motivation}\label{ch:motivation} \kant[1] \section{Objective} \kant[2-3]
{ "alphanum_fraction": 0.7261904762, "avg_line_length": 16.8, "ext": "tex", "hexsha": "8ac1154abe5971996404cf34dc2d62157b8bc282", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-03-02T09:00:55.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-02T08:48:46.000Z", "max_forks_repo_head_hexsha": "7917b99d1f18fe6bae5678d96178d39f0a9d6058", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ziu1986/TeXTemplate-PhD-thesis-MetOs", "max_forks_repo_path": "parts/introduction.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "7917b99d1f18fe6bae5678d96178d39f0a9d6058", "max_issues_repo_issues_event_max_datetime": "2021-02-05T13:02:58.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-05T12:50:04.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ziu1986/TeXTemplate-PhD-thesis-MetOs", "max_issues_repo_path": "parts/introduction.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "7917b99d1f18fe6bae5678d96178d39f0a9d6058", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ziu1986/TeXTemplate-PhD-thesis-MetOs", "max_stars_repo_path": "parts/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 58, "size": 168 }
\documentclass[12pt]{amsart} \usepackage{geometry} % see geometry.pdf on how to lay out the page. There's lots. \geometry{a4paper} % or letter or a5paper or ... etc % \geometry{landscape} % rotated page geometry % See the ``Article customise'' template for some common customisations \title{} \author{} \date{} % delete this line to display the current date %%% BEGIN DOCUMENT \begin{document} \maketitle \tableofcontents \section{} \subsection{} \end{document}
{ "alphanum_fraction": 0.7371794872, "avg_line_length": 21.2727272727, "ext": "tex", "hexsha": "479967467493f6263de2e7946323f7fe0203a0c7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b8c089500809d034a717d66a9938a511117e0771", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "yuetsin/bash-utils", "max_forks_repo_path": "templates/LaTeX/article.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "b8c089500809d034a717d66a9938a511117e0771", "max_issues_repo_issues_event_max_datetime": "2020-12-11T05:38:40.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-15T09:53:50.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "yuetsin/bash-utils", "max_issues_repo_path": "templates/LaTeX/article.tex", "max_line_length": 82, "max_stars_count": null, "max_stars_repo_head_hexsha": "b8c089500809d034a717d66a9938a511117e0771", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "yuetsin/bash-utils", "max_stars_repo_path": "templates/LaTeX/article.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 126, "size": 468 }
\input{../../../utils/header_article.tex} \title{Potential Outcome Model\thanks{We are indebted to Liudmila Kiseleva and Tim Mensinger for the design of the problem set.}} \subtitle{-- Problem Set 1 --} \date{} \begin{document}\maketitle\vspace{-2cm} The \href{https://www.cdc.gov/nchs/nhis/index.htm}{\texttt{National Health Interview Survey (NHIS)}} data have been collected on U.S. households since 1957. The survey covers a broad range of health-related topics, from medical conditions, health insurance, and the number of doctor visits to measures of physical activity. Here we focus on indicators relevant to the potential outcome model (POM) framework. In particular, we will compare the health status of hospitalized and non-hospitalized individuals in 2018. For this purpose, we use answers to the survey question \textit{During the past 12 months, has the respondent been hospitalized overnight?} with potential answers \textit{Yes} and \textit{No}, which we code as one and zero, respectively. Further, we consider answers to the question \textit{Would you say your health, in general, is excellent, very good, good, fair, poor?}, whose responses are coded from one for poor health up to five for excellent health. The survey also collects data on relevant characteristics such as sex, age, level of education, hours worked last week, and total earnings. \section*{Task A} \begin{boenumerate} \item Open a \texttt{Jupyter Notebook} and import the data set \texttt{nhis-initial.xslx} (available at \url{https://bit.ly/nhis-initial}). Try to think of ways to answer the following questions: \textit{Are there more females or males?} \textit{Are there more individuals who hold a degree or not?} Now try to relate individual characteristics to the hospitalization status. \textit{Are high or low earners/old or young people more often hospitalized?} \item Compute the average health status of hospitalized and non-hospitalized individuals. \textit{Who is healthier on average?} \textit{What could be a reason for this difference?} \item Adjust the data set for the POM framework, with health status as the outcome and hospitalization as the treatment status. \item Compute the naive estimate for the \textit{average treatment effect} (ATE). \end{boenumerate} \section*{Task B} \begin{boenumerate} \item As we have seen in the lecture, in reality we can only ever observe one of the two potential outcomes; however, when simulating data, we can bypass this problem. The (simulated) data set \texttt{nhis-simulated.xslx} (available at \url{https://bit.ly/nhis-simulated}) contains counterfactual outcomes, i.e., outcomes under control for individuals assigned to the treatment group and vice versa. Derive and compute the average outcomes in the two observable and two unobservable states. Arrange them in a table similar to Table 2.3 in \cite{Morgan.2014}. \end{boenumerate} \noindent From here on, we assume that 5$\%$ of the population take the treatment. \begin{boenumerate}\setcounter{enumi}{1} \item Derive and explain Equation 2.12 from \cite{Morgan.2014} for the naive estimator as a decomposition of the true ATE, baseline bias, and differential treatment effect bias. \item Compute the naive estimate and the true value of the ATE for the simulated data. \textit{Is the naive estimator upwardly or downwardly biased?} Calculate the baseline bias and the differential treatment effect bias. \textit{How could we interpret these biases in our framework of health status of hospitalized and non-hospitalized respondents?} \item Under which assumptions does the naive estimator provide the ATE? 
\end{boenumerate} \nocite{Angrist.2008} \nocite{NHIS} \bibliographystyle{apacite} \bibliography{../../../submodules/bibliography/literature} \end{document}
{ "alphanum_fraction": 0.7729199788, "avg_line_length": 72.5769230769, "ext": "tex", "hexsha": "dd3415e678b37922909ce3f20ef01a48890c4327", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "milakis/microeconometrics", "max_forks_repo_path": "problem-sets/01-potential-outcome-model/sources/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "milakis/microeconometrics", "max_issues_repo_path": "problem-sets/01-potential-outcome-model/sources/main.tex", "max_line_length": 1078, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "milakis/microeconometrics", "max_stars_repo_path": "problem-sets/01-potential-outcome-model/sources/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 904, "size": 3774 }
\setchapterstyle{kao}
%\setchapterpreamble[u]{\margintoc}
\chapter{References}
\labch{references}

\section{Citations}
\index{citations}

To cite someone \sidecite{Visscher2008,James2013} is very simple: just use the \Command{sidecite}\index{\Command{sidecite}} command. It does not have an offset argument yet, but it probably will in the future. This command supports multiple entries, as you can see, and by default it prints the reference on the margin as well as adding it to the bibliography at the end of the document. For this setup I used biblatex, but I think that workarounds are possible.\sidecite{James2013} Note that the citations have nothing to do with the text; they are completely random and only serve to illustrate the feature.

To compile a document containing citations, you need to use an external tool, which for this class is biber. You need to run the following (assuming that your tex file is called main.tex):

\begin{lstlisting}[style=kaolstplain]
$ pdflatex main
$ biber main
$ pdflatex main
\end{lstlisting}

\section{Glossaries and Indices}
\index{glossary}

The \Class{kaobook} class loads the packages \Package{glossaries} and \Package{imakeidx}, with which you can add glossaries and indices to your book. For instance, I previously defined some glossary entries and now I am going to use them, like this: \gls{computer}. \Package{glossaries} also allows you to use acronyms, like the following: this is the full version, \acrfull{fpsLabel}, and this is the short one, \acrshort{fpsLabel}. These entries will appear in the glossary in the backmatter.

Unless you use \href{https://www.overleaf.com}{Overleaf} or some other fancy IDE for \LaTeX, you need to run an external command from your terminal in order to compile a document with a glossary. In particular, the commands required are:\sidenote[-2mm][]{These are the commands you would run in a UNIX system; I have no idea how it works in Windows.}

\begin{lstlisting}[style=kaolstplain]
$ pdflatex main
$ makeglossaries main
$ pdflatex main
\end{lstlisting}

Note that you need not run \texttt{makeglossaries} every time you compile your document, but only when you change the glossary entries.

\index{index}
To create an index, you need to insert the command \lstinline|\index{subject}| whenever you are talking about \enquote{subject} in the text. For instance, at the start of this paragraph I would write \lstinline|\index{index}|, and an entry would be added to the Index in the backmatter. Check it out! \marginnote[2mm]{In theory, you would need to run an external command for the index as well, but luckily the package we suggested, \Package{imakeidx}, can compile the index automatically.}

\index{nomenclature}
A nomenclature is just a special kind of index; you can find one at the end of this book. To insert a nomenclature, we use the package \Package{nomencl} and add the terms with the command \Command{nomenclature}. We then put a \Command{printnomenclature} where we want it to appear. Also with this package, we need to run an external command to compile the document, otherwise the nomenclature will not appear:

\begin{lstlisting}[style=kaolstplain]
$ pdflatex main
$ makeindex main.nlo -s nomencl.ist -o main.nls
$ pdflatex main
\end{lstlisting}

These packages are all loaded in \href{style/packages.sty}{packages.sty}, one of the files that come with this class. However, the configuration of the elements is best done in the main.tex file, since each book will have different entries and styles.
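As a sketch of what that configuration might look like (the descriptions below are only illustrative), the glossary, acronym, and nomenclature entries could be defined in main.tex like this:

\begin{lstlisting}[style=kaolstplain]
\newglossaryentry{computer}{
  name=computer,
  description={A programmable machine}
}
\newacronym{fpsLabel}{FPS}{frames per second}
\nomenclature{$c$}{Speed of light in a vacuum}
\end{lstlisting}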
Note that the \Package{nomencl} package caused problems when the document was compiled, so, to make a long story short, I had to prevent \Package{scrhack} from loading the hack-file for \Package{nomencl}. When compiling the document on Overleaf, however, this problem seems to vanish. \marginnote[-19mm]{This brief section was by no means a complete reference on the subject; therefore, you should consult the documentation of the above packages to gain a full understanding of how they work.}

\section{Hyperreferences}
\index{hyperreferences}

In this class we provide a handy sub-package to help you reference the same elements always in the same way, for consistency across the book. First, you can label each element with a specific command. For instance, should you want to label a chapter, you would put \lstinline|\labch{chapter-title}| right after the \Command{chapter} directive. This is just a convenience, because \Command{labch} is actually just an alias to \lstinline|\label{ch:chapter-title}|, so it spares you the writing of \enquote{ch}. We defined similar commands for many typically labeled elements, including:

\begin{multicols}{2}
\setlength{\columnseprule}{0pt}
\begin{itemize}
\item Page: \Command{labpage}
\item Part: \Command{labpart}
\item Chapter: \Command{labch}
\item Section: \Command{labsec}
\item Figure: \Command{labfig}
\item Table: \Command{labtab}
\item Definition: \Command{labdef}
\item Theorem: \Command{labthm}
\item Proposition: \Command{labprop}
\item Lemma: \Command{lablemma}
\item Remark: \Command{labremark}
\item Example: \Command{labexample}
\item Exercise: \Command{labexercise}
\end{itemize}
\end{multicols}

Of course, we have similar commands for referencing those elements. However, since the style of the reference should depend on the context, we provide different commands to reference the same thing. For instance, on some occasions you may want to reference the chapter by name, but at other times you want to reference it only by number. In general, there are four reference styles, which we call plain, vario, name, and full.

The plain style references only by number. It is accessed, for chapters, with \lstinline|\refch{chapter-title}| (for other elements, the syntax is analogous). Such a reference results in: \refch{references}. The vario and name styles rest upon the \Package{varioref} package. Their syntax is \lstinline|\vrefch{chapter-title}| and \lstinline|\nrefch{chapter-title}|, and they result in: \vrefch{references}, for the vario style, and: \nrefch{references}, for the name style. As you can see, the page is referenced in \Package{varioref} style. The full style references everything. You can use it with \lstinline|\frefch{chapter-title}| and it looks like this: \frefch{references}.

Of course, all the other elements have similar commands (\eg for parts you would use \lstinline|\vrefpart{part-title}| or something like that). However, not all elements implement all four styles. The commands provided should be enough, but if you want to see what is available or to add the missing ones, have a look at the \href{styles/references.sty}{attached package}.
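To see the labeling and referencing commands side by side, here is a minimal sketch (the chapter name is illustrative):

\begin{lstlisting}[style=kaolstplain]
\chapter{Results}\labch{results}   % same as \label{ch:results}

See \refch{results} (plain style, number only),
\vrefch{results} (vario style, number and page),
\nrefch{results} (name style),
or \frefch{results} (full style).
\end{lstlisting}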
{ "alphanum_fraction": 0.7738112872, "avg_line_length": 42.7278481013, "ext": "tex", "hexsha": "98906d342f0e869be817549b25073fcb0c082fd4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c2fb69c9bc077a1ffe08e1258e4f6f735f8238cb", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "robertdstein/kaobook", "max_forks_repo_path": "chapters/references.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c2fb69c9bc077a1ffe08e1258e4f6f735f8238cb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "robertdstein/kaobook", "max_issues_repo_path": "chapters/references.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "c2fb69c9bc077a1ffe08e1258e4f6f735f8238cb", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "robertdstein/kaobook", "max_stars_repo_path": "chapters/references.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1747, "size": 6751 }
\subsection{Recap of Hidden Markov Models (HMMs)}

In a hidden Markov model we do not observe the states themselves. Each state produces a visible output, drawn from a distribution specific to that state. We therefore observe a sequence of outputs, not the underlying sequence of states.
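For example, we can simulate a small HMM to see this structure directly (the two-state transition matrix and emission distributions below are chosen arbitrarily for illustration):

\begin{verbatim}
# A minimal sketch in R: two hidden states with made-up parameters.
# P[i, j] is the probability of moving from state i to state j.
set.seed(1)
P  <- matrix(c(0.9, 0.1,
               0.2, 0.8), nrow = 2, byrow = TRUE)
mu <- c(0, 3)           # each state emits from its own normal distribution
n  <- 100
s  <- numeric(n); s[1] <- 1
for (t in 2:n) s[t] <- sample(1:2, 1, prob = P[s[t - 1], ])
y  <- rnorm(n, mean = mu[s], sd = 1)   # y is observed; s stays hidden
\end{verbatim}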
{ "alphanum_fraction": 0.7720930233, "avg_line_length": 21.5, "ext": "tex", "hexsha": "1d9e50fb166c1edc56946863f33e01a131de90f8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-01-HMM.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-01-HMM.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-01-HMM.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 50, "size": 215 }
\section{Power Series}\label{sec:powerseries}

Recall that the sum of a geometric series can be expressed using the simple formula:
\[\sum_{n=0}^\infty kx^n = {k\over 1-x},\]
if $|x|<1$, and that the series diverges when $|x|\ge 1$. At the time, we thought of $x$ as an unspecified constant, but we could just as well think of it as a variable, in which case the series
\[\sum_{n=0}^\infty kx^n\]
is a function, namely, the function $k/(1-x)$, as long as $|x|<1$.

Looking at this from the opposite perspective, this means that the function $k/(1-x)$ can be represented as the sum of an infinite series. Why would this be useful? While $k/(1-x)$ is a reasonably easy function to deal with, the more complicated representation $\sum kx^n$ does have some advantages: it appears to be an infinite version of one of the simplest function types---a polynomial. Later on we will investigate some of the ways we can take advantage of this `infinite polynomial' representation, but first we should ask if other functions can even be represented this way.

The geometric series has a special feature that makes it unlike a typical polynomial---the coefficients of the powers of $x$ are all the same, namely $k$. We will need to allow more general coefficients if we are to get anything other than the geometric series.

\begin{definition}{Power Series}{PowerSeriesDefinition}
A power series is a series of the form $$\ds\sum_{n=0}^\infty a_nx^n,$$ where each $a_n$ is a real number.
\end{definition}

As we did in the section on sequences, we can think of the $a_n$ as being a function $a(n)$ defined on the non-negative integers. Note, however, that the $a_n$ do not depend on $x$.

\begin{example}{}{}
Determine the values of $x$ for which the power series $\ds\sum_{n=1}^\infty {x^n\over n}$ converges.
\end{example}

\begin{solution}
We can investigate convergence using the ratio test:
\[
\lim_{n\to\infty} {|x|^{n+1}\over n+1}{n\over |x|^n} =\lim_{n\to\infty} |x|{n\over n+1} =|x|.
\]
Thus when $|x|<1$ the series converges and when $|x|>1$ it diverges, leaving only two values in doubt. When $x=1$ the series is the harmonic series and diverges; when $x=-1$ it is the alternating harmonic series (actually the negative of the usual alternating harmonic series) and converges. Thus, we may think of $\ds\sum_{n=1}^\infty {x^n\over n}$ as a function from the interval $[-1,1)$ to the real numbers.
\end{solution}

A bit of thought reveals that the ratio test applied to a power series will always have the same nice form. In general, we will compute
\[
\lim_{n\to\infty} {|a_{n+1}||x|^{n+1}\over |a_n||x|^n} =\lim_{n\to\infty} |x|{|a_{n+1}|\over |a_n|} = |x|\lim_{n\to\infty} {|a_{n+1}|\over |a_n|} =L|x|,
\]
assuming that $\ds \lim |a_{n+1}|/|a_n|$ exists. Then the series converges if $L|x|<1$, that is, if $|x|<1/L$, and diverges if $|x|>1/L$. Only the two values $x=\pm1/L$ require further investigation. Thus the series will always define a function on the interval $(-1/L,1/L)$, that perhaps will extend to one or both endpoints as well. Two special cases deserve mention: if $L=0$ the limit is $0$ no matter what value $x$ takes, so the series converges for all $x$ and the function is defined for all real numbers. If $L=\infty$, then no matter what value $x$ takes the limit is infinite and the series converges only when $x=0$. The value $1/L$ is called the \dfont{radius of convergence} of the series, and the interval on which the series converges is the \dfont{interval of convergence}. We can make these ideas a bit more general.
Consider the series \[\ds\sum_{n=0}^{\infty}\frac{(x+2)^n}{3^n}\] This looks a lot like a power series, but with $(x+2)^n$ instead of $x^n$. Let's try to determine the values of $x$ for which it converges. This is just a geometric series, so it converges when \begin{align*} |x+2|/3&<1 \\ |x+2|&<3 \\ -3 < x+2 &< 3 \\ -5<x&<1. \\ \end{align*} So the interval of convergence for this series is $(-5,1)$. The center of this interval is at $-2$, which is at distance 3 from the endpoints, so the radius of convergence is 3, and we say that the series is centered at $-2$. Interestingly, if we compute the sum of the series we get \[\ds\sum_{n=0}^{\infty}\left(\frac{x+2}{3}\right)^n=\frac{1}{1-\frac{x+2}{3}}=\frac{3}{1-x}.\] Multiplying both sides by 1/3 we obtain \[\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}={1\over 1-x},\] which we recognize as being equal to \[\sum_{n=0}^{\infty}x^n,\] so we have two series with the same sum but different intervals of convergence. This leads to the following definition: \begin{definition}{Power Series}{PowerSeriesDefinition2} A power series centered at $c$ has the form $$\ds\sum_{n=0}^\infty a_n(x-c)^n,$$ where $c$ and each $a_n$ are real numbers. \end{definition} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \Opensolutionfile{solutions}[ex] \section*{Exercises for \ref{sec:powerseries}} \begin{enumialphparenastyle} \begin{ex} Find the radius and interval of convergence for each series. In part c), do not attempt to determine whether the endpoints are in the interval of convergence. \begin{multicols}{2} \begin{enumerate} \item $\ds\sum_{n=0}^\infty n x^n$ \item $\ds\sum_{n=0}^\infty {x^n\over n!}$ \item $\ds\sum_{n=1}^\infty {n!\over n^n}(x-2)^n$ \item $\ds\sum_{n=1}^\infty {(n!)^2\over n^n}(x-2)^n$ \item $\ds\sum_{n=1}^\infty {(x+5)^n\over n(n+1)}$ \end{enumerate} \end{multicols} \begin{sol} \begin{enumerate} \item $R=1$, $I=(-1,1)$ \item $R=\infty$, $I=(-\infty,\infty)$ \item $R=e$, $I=(2-e,2+e)$ \item $R=0$, converges only when $x=2$ \item $R=1$, $I=[-6,-4]$ \end{enumerate} \end{sol} \end{ex} \begin{ex} Find the radius of convergence for the series $\ds\sum_{n=1}^\infty {n!\over n^n}x^n$. \begin{sol} $R=e$ \end{sol} \end{ex} \end{enumialphparenastyle} \clearpage
{ "alphanum_fraction": 0.6929188256, "avg_line_length": 39.387755102, "ext": "tex", "hexsha": "6d219035e6663c3d14226eb03ef8d02a4d7f45c3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_path": "9-sequences-and-series/9-8-power-series.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_path": "9-sequences-and-series/9-8-power-series.tex", "max_line_length": 101, "max_stars_count": null, "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_path": "9-sequences-and-series/9-8-power-series.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1928, "size": 5790 }
\title{Computer Architecture - CS 301} % You may change the title if you want.
\author{Rishit Saiya - 180010027, Assignment - 14}
\date{\today}

\documentclass[12pt]{article}
\usepackage{fullpage}
\usepackage{enumitem}
\usepackage{amsmath,mathtools}
\usepackage{amssymb}
\usepackage[super]{nth}
\usepackage{textcomp}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=cyan,
}

\begin{document}
\maketitle

%----------------------------------------------------------------
\section{}
\subsection{Storage Media}
If we turn off our system, all the data inside the processor and inside volatile memory devices such as DRAM is erased, yet our files remain intact. The reason the data survives is that we have permanent data storage devices, collectively called storage media. Examples of storage devices are hard disks, optical disks, and solid state disks.

\subsection{Hard Disk}
In this problem, we will focus on the hard disk only. A hard disk is a sequential magnetic storage device that uses NRZI encoding, together with dummy bits, to store data. Figure 1 shows the NRZI encoding used in a hard disk.

\begin{figure}
    \centering
    \includegraphics[width=12cm]{Assignment-14/NRZI.png}
    \caption{NRZI Encoding}
\end{figure}

\subsection{Addressing in Hard Disk}
The addressing steps are as follows. Hard disks can also use this mechanism to mark bad sectors (sectors that are permanently damaged, for example because they have lost their magnetic properties) by recording on the disk which sectors are good or bad, and then reconstructing the mapping between logical sectors and healthy physical sectors.
\begin{enumerate}
    \item Software programs use a component called the Logical Block Address (LBA) to address a block in the hard disk. Such a block is alternatively referred to as a sector.
    \item The hard disk then internally converts the given logical address to a physical address. This physical address contains the recording surface, track, and sector information.
    \item Most hard disks dedicate a part of the recording surface to storing this logical-to-physical mapping.
    \item Hard disks then use a small cache (DRAM) to store the most recently used mappings from logical blocks to physical blocks.
    \item The hard disk can also use this mechanism to mark bad sectors and remap logical sectors to healthy physical sectors.
\end{enumerate}

%----------------------------------------------------------------
\section{}
A processing unit perceives a hard disk as a large array of bytes. Internally, however, a hard disk is a fairly complex device: exposing it to an electromagnetic field can lead to loss of data, and hard disks also tend to have high failure rates because of their sensitivity to high temperatures. To mitigate such losses, we use RAID (Redundant Array of Inexpensive Disks), an arrangement of redundant disks that can tolerate faults and recover from failures. RAID also helps to increase bandwidth by reading from multiple disks in parallel. The main idea is thus to safeguard the data recorded on the disk surfaces through redundancy.

\subsection{Different Types of RAID Configurations}
\begin{itemize}
    \item \textbf{RAID 0:} \\
    In RAID 0 (Figure 2), we essentially have 2 disks. We store all the odd numbered blocks in one disk and all the even numbered blocks in the other disk. This type of arrangement is known as data striping. \\
    It allows us to read even and odd blocks in parallel, leading to an increase in bandwidth in some cases.
    However, if we want to read data blocks that reside on the same disk, there is no increase in bandwidth. Error coverage: none, since there is no redundancy and hence no reliability benefit.
    \begin{figure}
        \centering
        \includegraphics{Assignment-14/Raid_0.png}
        \caption{RAID 0}
    \end{figure}

    \item \textbf{RAID 1:} \\
    In RAID 1 (Figure 3), we again have 2 disks. The data blocks of Disk 1 (each block is of size 512 bytes) are exactly replicated to constitute Disk 2. Since one disk is a mirror image of the other, blocks can be read from either disk in parallel. \\
    Error coverage: immune to a single disk failure. If either disk fails, the data can be recovered from the functional disk, and the read bandwidth is doubled in this case. For data recovery, when a new disk is inserted after removing the failed disk, all data can be copied to the new disk from the old functional disk by the RAID software. Overhead of the redundancy: 100\% overhead in storage.
    \begin{figure}
        \centering
        \includegraphics{Assignment-14/Raid_1.png}
        \caption{RAID 1}
    \end{figure}

    \item \textbf{RAID 2,3,4:} \\
    RAID 2, 3, and 4 belong to the same family of RAID protocols. Here we have a total of 5 disks, where 4 disks are used for data storage and 1 disk stores parity bits. As shown in Figure 4, small chunks of data are stored in blocks on Disks 1, 2, 3, and 4, and the following XOR operation is performed to compute the corresponding parity block
    \begin{equation*}
        P1 = B1 \oplus B2 \oplus B3 \oplus B4
    \end{equation*}
    where P1 is the block in the parity disk.
    \begin{figure}
        \centering
        \includegraphics[width=15cm]{Assignment-14/Raid_234.png}
        \caption{RAID 2,3,4}
    \end{figure}
    In this case, if a disk fails, we can add a new disk and reconstruct its data from the parity and the remaining data blocks. Say the disk holding $B_1$ fails; we can then retrieve the data of $B_1$ by performing the following XOR operation.
    \begin{equation*}
        B1 = P1 \oplus B2 \oplus B3 \oplus B4
    \end{equation*}
    Error coverage: immune to a single disk failure. If a disk fails, we add a new disk and reconstruct its contents from the parity bits. Overhead of the redundancy: 25\% overhead in storage.

    The main problem with RAID 4 is that every write has to update the parity disk as well, so the parity disk becomes a bottleneck. This problem is solved in RAID 5, which we detail below.

    The block size differs among RAID 2, 3, and 4, as follows (Table 1). \\
    % Table insert here:
    \begin{table}
        \centering
        \begin{tabular}{|c|c|}
        \hline
        \textbf{RAID} & \textbf{Block Size} \\ \hline
        2 & 1 Bit \\ \hline
        3 & 1 Byte \\ \hline
        4 & 1 Block (512 Byte) \\ \hline
        \end{tabular}
        \caption{Block Size - RAID 2,3,4}
    \end{table}

    In RAID 2, we generally do not access data at the level of individual bits, so data access becomes complicated, which is why this configuration is rarely used. In RAID 3, even a small chunk of data requires writing to many blocks, which increases traffic, so this configuration also proves less practical to implement.

    \item \textbf{RAID 5:} \\
    In RAID 5 (Figure 5), we again have 5 disks, but instead of having a separate parity disk as in RAID 4, we distribute the parity blocks across the data storage disks. Distributing the parity blocks in this way removes the parity-disk bottleneck, so accesses are quicker. The storage overhead is the same as in RAID 4, and the configuration remains immune to a single disk failure. The bandwidth increases up to 5-fold, unless two requests target the same disk. This makes the system both fast and reliable.
    \begin{figure}
        \centering
        \includegraphics[width=15cm]{Assignment-14/Raid_5.png}
        \caption{RAID 5}
    \end{figure}

    \item \textbf{RAID 6:} \\
    In RAID 6 (Figure 6), we have 2 parity blocks per row, where the parity blocks are rotated as in RAID 5 and are computed independently of each other. Thus, if storage capacity is not an issue, RAID 6 is the best possible configuration in terms of error coverage, as it is immune to 2 disk failures.\\
    Error coverage: immune to 2 disk failures. This provides high reliability, but at the cost of an increased storage overhead.
    \begin{figure}
        \centering
        \includegraphics[width=15cm]{Assignment-14/Raid_6.png}
        \caption{RAID 6}
    \end{figure}
\end{itemize}
%----------------------------------------------------------------
\end{document}
{ "alphanum_fraction": 0.7015175051, "avg_line_length": 62.9248120301, "ext": "tex", "hexsha": "76020fa1264fdf1a5a5b3eab39308000230f4f83", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_forks_repo_path": "Assignment-14/180010027_RishitSaiya.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_issues_repo_path": "Assignment-14/180010027_RishitSaiya.tex", "max_line_length": 527, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_stars_repo_path": "Assignment-14/180010027_RishitSaiya.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-25T17:20:42.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-25T17:20:42.000Z", "num_tokens": 2049, "size": 8369 }
\documentclass[]{book} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage{hyperref} \PassOptionsToPackage{usenames,dvipsnames}{color} % color is loaded by hyperref \hypersetup{unicode=true, pdftitle={Introduction to R for Population dynamics}, pdfauthor={Anthony Davidson}, colorlinks=true, linkcolor=blue, citecolor=Blue, urlcolor=blue, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{natbib} \bibliographystyle{apalike} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs} \usepackage{graphicx,grffile} 
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em}  % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
  \posttitle{
    \begin{center}\large#1\end{center}
    }
}
\setlength{\droptitle}{-2em}
\title{Introduction to R for Population dynamics}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Anthony Davidson}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{Built from Ben Staton's R book, with contributions by Henry Hershey}
\usepackage{booktabs}
\usepackage{pdfpages}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
  \thm@preskip=8pt plus 2pt minus 4pt
  \thm@postskip=\thm@preskip
}
\makeatother
\let\oldmaketitle\maketitle
\AtBeginDocument{\let\maketitle\relax}

\begin{document}
\maketitle
% \thispagestyle{empty}
% \begin{center}
% {\Huge A BOOK}
% \includegraphics{cover_image.png}
% {\huge by Me}
% \end{center}
% \let\maketitle\oldmaketitle
% \maketitle
% \cleardoublepage\newpage\thispagestyle{empty}\null
% \cleardoublepage\newpage\thispagestyle{empty}\null
% \cleardoublepage\newpage
% \thispagestyle{empty}
% \cleardoublepage\begin{center}
% \newgeometry{left=0cm,right=0cm,top=0cm,bottom=0cm}
% \includegraphics{cover_image.png}
% \restoregeometry
% \end{center}
% \setlength{\abovedisplayskip}{-5pt}
% \setlength{\abovedisplayshortskip}{-5pt}
\includepdf[pages={1}]{img/cover_image.pdf}
\thispagestyle{empty}
\let\maketitle\oldmaketitle
\maketitle
\thispagestyle{empty}

{
\hypersetup{linkcolor=black}
\setcounter{tocdepth}{1}
\tableofcontents
}

\hypertarget{overview}{%
\chapter*{Overview}\label{overview}}
\addcontentsline{toc}{chapter}{Overview}

To begin with, this model defines the life cycle of a species, then extends this to estimate population size and growth rate over a projected time frame, and then analyses the elasticity and sensitivity of the matrix population model. I have done this with southern right whales as an example.

\hypertarget{what-is-covered}{%
\section*{What is Covered?}\label{what-is-covered}}
\addcontentsline{toc}{section}{What is Covered?}

The book is composed of six chapters intended to cover a suite of topics in introductory R programming.
In general, the material builds in complexity from chapter to chapter, and earlier chapters can be seen as prerequisites for later chapters.

\begin{itemize}
\tightlist
\item
  \textbf{Chapter \ref{ch1}} covers the basics of working in R through RStudio, including the basics of the R coding language and environment.
\item
  \textbf{Chapter \ref{ch2}} covers the basics of plotting using the base R graphics functionality.
\item
  \textbf{Chapter \ref{ch3}} covers the basics of fitting statistical models using built-in functionality for generalized linear models as well as non-linear models.
\item
  \textbf{Chapter \ref{ch4}} covers the basics of simulation modeling in R.
\item
  \textbf{Chapter \ref{ch5}} covers the basics of the \texttt{\{dplyr\}} and \texttt{\{reshape2\}} packages for manipulating and summarizing large data sets using highly readable code.
\item
  \textbf{Chapter \ref{ch6}} covers the basics of producing maps and performing spatial analysis in R. \emph{This chapter was contributed by Henry Hershey.}
\end{itemize}

\hypertarget{external-resources}{%
\section{External Resources}\label{external-resources}}

This book is an extension of several gitbooks that are intended to be a first course in R programming for a range of different professionals. It is by no means comprehensive (no book about R ever could be), but it instead attempts to introduce the main topics and develop the skills needed to get a beginner up and running with applying R to their own work in the context of population dynamics. Some of the courses were intended to be a companion to in-person workshop sessions. Although the examples shown have a natural resource/ecological theme, the skills presented apply to R users across all scientific disciplines.

\hypertarget{ch4}{%
\chapter{Monte Carlo Methods}\label{ch4}}

\hypertarget{chapter-overview}{%
\section*{Chapter Overview}\label{chapter-overview}}
\addcontentsline{toc}{section}{Chapter Overview}

Simulation modeling is one of the primary reasons to move away from \texttt{spreadsheet-type} programs (like Microsoft Excel) and into a program like \texttt{R}. R allows you to replicate the same (possibly complex and detailed) calculations over and over with different random values. You can then summarize and plot the results of these replicated calculations all within the same program. Analyses of this type are called \textbf{Monte Carlo methods}: they randomly sample from a set of quantities for the purpose of generating and summarizing a distribution of some statistic related to the sampled quantities. If this concept is confusing, hopefully this chapter will clarify it.

In this chapter, you will learn the basic skills needed for simulation (i.e., Monte Carlo) modeling in R, including:

\begin{itemize}
\tightlist
\item
  introducing randomness to a model
\item
  repeating calculations many times with \texttt{replicate()} and \texttt{for()} loops
\item
  summarizing many values from a distribution
\item
  more advanced function writing
\item
  applying these tools to population dynamics \emph{(added by Anthony)}
\end{itemize}

\textbf{IMPORTANT NOTE}: If you did not attend the sessions corresponding to Chapters \ref{ch1}, \ref{ch2}, or \ref{ch3}, you are recommended to walk through the material found in those chapters before proceeding to this material. Remember that if you are confused about a topic, you can use \textbf{CTRL + F} to find previous cases where that topic has been discussed in this book.
\hypertarget{before-you-begin}{%
\section*{Before You Begin}\label{before-you-begin}}
\addcontentsline{toc}{section}{Before You Begin}

You should create a new directory and R script for your work in this chapter. Create a new R script called \texttt{Ch4.R} and save it in the directory \texttt{C:/Users/YOU/Documents/R-Book/Chapter4}. Set your working directory to that location. Revisit the material in Sections \ref{scripts} and \ref{working-dir} for more details on these steps.

\hypertarget{layout-of-this-chapter}{%
\section*{Layout of This Chapter}\label{layout-of-this-chapter}}
\addcontentsline{toc}{section}{Layout of This Chapter}

This chapter is divided into three main sections:

\begin{itemize}
\item
  \textbf{Required Material} (Sections \ref{randomness} - \ref{mc-summaries}), which is necessary to understand the examples in this chapter and the subsequent chapters
\item
  \textbf{Example Cases} (Sections \ref{sim-examples} and \ref{resample-examples}), which apply the skills learned in the required material. In the workshop session, you will walk through 2-3 of these example cases, chosen by the group of participants. If you are interested in simulation modeling, you are encouraged to work through all of the example cases, as slightly different tricks are shown in the different examples.
\item
  \textbf{Population dynamics} (Sections \ldots) - this is an extension of the simulation and Monte Carlo methods to estimating the viability of populations (PVA).
\end{itemize}

\hypertarget{randomness}{%
\section{Introducing Randomness}\label{randomness}}

A critical part of simulation modeling is the use of random processes. A \textbf{random process} is one that generates a different outcome according to some rules each time it is executed. Random processes are tightly linked to the concept of \textbf{uncertainty}: you are unsure about the outcome the next time the process is executed. There are two basic ways to introduce randomness in R: \textbf{random deviates} and \textbf{resampling}.

\hypertarget{random-deviates}{%
\subsection{Random deviates}\label{random-deviates}}

In Section \ref{dists}, you learned about using probability distributions in R. One of the uses was the \texttt{r-} family of distribution functions. These functions create random numbers following a random process specified by a probability distribution. Consider animal survival as an example. At the end of each year, each individual alive at the start can either live or die. There are two outcomes here, and suppose each animal has an 80\% chance of surviving. The number of individuals that survive is the result of a \textbf{binomial random process} in which there were \(n\) individuals alive at the start of this year and \(p\) is the probability that any one individual survives to the next year. You can execute one binomial random process where \(p = 0.8\) and \(n = 100\) like this:

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{rbinom}\NormalTok{(}\DataTypeTok{n =} \DecValTok{1}\NormalTok{, }\DataTypeTok{size =} \DecValTok{100}\NormalTok{, }\DataTypeTok{prob =} \FloatTok{0.8}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 88
\end{verbatim}

The result you get will almost certainly be different from the one printed here. That is the random component. You can execute many such binomial processes by changing the \texttt{n} argument.
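For example, here is a minimal sketch of five such binomial draws at once (the values are random, so yours will differ):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# five replicated binomial processes: each value is the number of survivors}
\CommentTok{# out of 100 individuals, each with survival probability 0.8}
\KeywordTok{rbinom}\NormalTok{(}\DataTypeTok{n =} \DecValTok{5}\NormalTok{, }\DataTypeTok{size =} \DecValTok{100}\NormalTok{, }\DataTypeTok{prob =} \FloatTok{0.8}\NormalTok{)}
\end{Highlighting}
\end{Shaded}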
Plot the distribution of expected surviving individuals: \begin{Shaded} \begin{Highlighting}[] \NormalTok{survivors =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(}\DecValTok{1000}\NormalTok{, }\DecValTok{100}\NormalTok{, }\FloatTok{0.8}\NormalTok{)} \KeywordTok{hist}\NormalTok{(survivors, }\DataTypeTok{col =} \StringTok{"skyblue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-5-1} \end{center} Another random process is the \textbf{lognormal process}: it generates random numbers such that the log of the values are normally-distributed with mean equal to \texttt{logmean} and standard deviation equal to \texttt{logsd}: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{hist}\NormalTok{(}\KeywordTok{rlnorm}\NormalTok{(}\DecValTok{1000}\NormalTok{, }\DecValTok{0}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\DataTypeTok{col =} \StringTok{"skyblue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-7-1} \end{center} There are many random processes you can use in R. Checkout Table \ref{tab:dist-table-pdf} for more examples as well as the help files for each individual function for more details. \hypertarget{resampling}{% \subsection{Resampling}\label{resampling}} Using random deviates works great for creating new random numbers, but what if you already have a set of numbers that you wish to introduce randomness to? For this, you can use \textbf{resampling techniques}. In R, the \texttt{sample()} function is used to sample \texttt{size} elements from the vector \texttt{x}: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{sample}\NormalTok{(}\DataTypeTok{x =} \DecValTok{1}\OperatorTok{:}\DecValTok{10}\NormalTok{, }\DataTypeTok{size =} \DecValTok{5}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 5 4 3 1 7 \end{verbatim} You can sample with replacement (where it is possible to sample the same element two or more times): \begin{Shaded} \begin{Highlighting}[] \KeywordTok{sample}\NormalTok{(}\DataTypeTok{x =} \KeywordTok{c}\NormalTok{(}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"c"}\NormalTok{), }\DataTypeTok{size =} \DecValTok{10}\NormalTok{, }\DataTypeTok{replace =}\NormalTok{ T)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "a" "c" "a" "c" "a" "c" "c" "a" "a" "c" \end{verbatim} You can set probabilities on the sampling of different elements\footnote{If \texttt{prob} doesn't sum to 1, then it will be rescaled: \texttt{prob\ =\ prob/sum(prob)}}: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{sample}\NormalTok{(}\DataTypeTok{x =} \KeywordTok{c}\NormalTok{(}\StringTok{"live"}\NormalTok{, }\StringTok{"die"}\NormalTok{), }\DataTypeTok{size =} \DecValTok{10}\NormalTok{, }\DataTypeTok{replace =}\NormalTok{ T,} \DataTypeTok{prob =} \KeywordTok{c}\NormalTok{(}\FloatTok{0.8}\NormalTok{, }\FloatTok{0.2}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "live" "live" "live" "die" "live" "live" "live" "live" "live" "die" \end{verbatim} Notice that this is the same as the binomial random process above, but with only 10 trials and the printing of the outcomes rather than the number of successes. \hypertarget{reproducing-randomness}{% \section{Reproducing Randomness}\label{reproducing-randomness}} For reproducibility purposes, you may wish to get the same exact random numbers each time you run your script. 
To do this, you need to set the \textbf{random seed}, which is the starting point of the random number generator your computer uses. If you run these two lines of code, you should get the same result as printed here: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1234}\NormalTok{)} \KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -1.207066 \end{verbatim} \hypertarget{replication}{% \section{Replication}\label{replication}} To use Monte Carlo methods, you need to be able to replicate some random process many times. There are two main ways this is commonly done: either with\texttt{replicate()} or with \texttt{for()} loops. \hypertarget{replicate}{% \subsection{\texorpdfstring{\texttt{replicate()}}{replicate()}}\label{replicate}} The \texttt{replicate()} function executes some expression many times and returns the output from each execution. Say we have a vector \texttt{x}, which represents 30 observations of fish length (mm): \begin{Shaded} \begin{Highlighting}[] \NormalTok{x =}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{30}\NormalTok{, }\DecValTok{500}\NormalTok{, }\DecValTok{30}\NormalTok{)} \end{Highlighting} \end{Shaded} We wish to build the sampling distribution of the mean length ``by hand''. We can sample randomly from it, calculate the mean, then repeat this process many times: \begin{Shaded} \begin{Highlighting}[] \NormalTok{means =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{1000}\NormalTok{, }\DataTypeTok{expr =}\NormalTok{ \{} \NormalTok{ x_i =}\StringTok{ }\KeywordTok{sample}\NormalTok{(x, }\KeywordTok{length}\NormalTok{(x), }\DataTypeTok{replace =}\NormalTok{ T)} \KeywordTok{mean}\NormalTok{(x_i)} \NormalTok{\})} \end{Highlighting} \end{Shaded} If we take \texttt{mean(means)} and \texttt{sd(means)}, that should be very similar to \texttt{mean(x)} and \texttt{se(x)}. Create the \texttt{se()} function (also shown in Section \ref{error-bars}) and prove this to yourself: \begin{Shaded} \begin{Highlighting}[] \NormalTok{se =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{sd}\NormalTok{(x)}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(}\KeywordTok{length}\NormalTok{(x))} \KeywordTok{mean}\NormalTok{(means); }\KeywordTok{mean}\NormalTok{(x)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 493.4968 \end{verbatim} \begin{verbatim} ## [1] 493.4166 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{sd}\NormalTok{(means); }\KeywordTok{se}\NormalTok{(x)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 4.967572 \end{verbatim} \begin{verbatim} ## [1] 5.044153 \end{verbatim} \hypertarget{for-loops}{% \subsection{\texorpdfstring{The \texttt{for()} loop}{The for() loop}}\label{for-loops}} In programming, a \emph{loop} is a command that does something over and over until it reaches some point that you specify. R has a few types of loops: \texttt{repeat()}, \texttt{while()}, and \texttt{for()}, to name a few. \texttt{for()} loops are among the most common in simulation modeling. A \texttt{for()} loop repeats some action for however many times you tell it \textbf{for} each value in some vector. 
The syntax is: \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (var }\ControlFlowTok{in}\NormalTok{ seq) \{} \KeywordTok{expression}\NormalTok{(var)} \NormalTok{\}} \end{Highlighting} \end{Shaded} The loop calculates the expression for values of \texttt{var} for each element in the vector \texttt{seq}. For example: \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\DecValTok{5}\NormalTok{) \{} \KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{)} \NormalTok{\}} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1 ## [1] 4 ## [1] 9 ## [1] 16 ## [1] 25 \end{verbatim} The \texttt{print()} command will be executed 5 times: once for each value of \texttt{i}. It is the same as: \begin{Shaded} \begin{Highlighting}[] \NormalTok{i =}\StringTok{ }\DecValTok{1}\NormalTok{; }\KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{); i =}\StringTok{ }\DecValTok{2}\NormalTok{; }\KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{); i =}\StringTok{ }\DecValTok{3}\NormalTok{; }\KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{); i =}\StringTok{ }\DecValTok{4}\NormalTok{; }\KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{); i =}\StringTok{ }\DecValTok{5}\NormalTok{; }\KeywordTok{print}\NormalTok{(i}\OperatorTok{^}\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} If you remove the \texttt{print()} function, see what happens: \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\DecValTok{5}\NormalTok{) \{} \NormalTok{ i}\OperatorTok{^}\DecValTok{2} \NormalTok{\}} \end{Highlighting} \end{Shaded} Nothing is printed to the console. R did the calculation, but did not show you or store the result. Often, you'll need to store the results of the calculation in a \textbf{container object}: \begin{Shaded} \begin{Highlighting}[] \NormalTok{results =}\StringTok{ }\KeywordTok{numeric}\NormalTok{(}\DecValTok{5}\NormalTok{)} \end{Highlighting} \end{Shaded} This makes an empty numeric vector of length 5 that are all 0's. You can store the output of your loop calculations in \texttt{results}: \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\DecValTok{5}\NormalTok{) \{} \NormalTok{ results[i] =}\StringTok{ }\NormalTok{i}\OperatorTok{^}\DecValTok{2} \NormalTok{\}} \NormalTok{results} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1 4 9 16 25 \end{verbatim} When \texttt{i\^{}2} is calculated, it will be placed in the element \texttt{results{[}i{]}}. This was a trivial example, because you would never use a \texttt{for()} loop to do things as simple as vectorized calculation. The expression \texttt{(1:5)\^{}2} would give the same result with significantly less code (see Section \ref{vector-math}). However, there are times where it is advantageous to use a loop. Particularly in cases where: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item the calculations in one element are determined from the value in previous elements, such as in time series models \item the calculations have multiple steps \item you wish to store multiple results \item you wish to track the progress of your calculations \end{enumerate} As an illustration for item (1) above, build a (very) basic population model. 
At the start of the first year, the population abundance is 1000 individuals and grows by an average factor of 1.1 per year (reproduction and death processes result in a growth rate of 10\%) before harvest. The growth rate varies randomly, however. Each year, the 1.1 growth factor has variability introduced by small changes in survival and reproductive process. Model these variations as lognormal random variables. After production, 8\% of the population is harvested. Simulate the abundance at the end of the year for 100 years: \begin{Shaded} \begin{Highlighting}[] \NormalTok{nt =}\StringTok{ }\DecValTok{100} \CommentTok{# number of years} \NormalTok{N =}\StringTok{ }\OtherTok{NULL} \CommentTok{# container for abundance} \NormalTok{N[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\DecValTok{1000} \CommentTok{# first end-of-year abundance} \ControlFlowTok{for}\NormalTok{ (t }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{nt) \{} \CommentTok{# N this year is N last year * growth *} \CommentTok{# randomness * fraction that survive harvest} \NormalTok{ N[t] =}\StringTok{ }\NormalTok{(N[t}\DecValTok{-1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\FloatTok{1.1} \OperatorTok{*}\StringTok{ }\KeywordTok{rlnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, }\FloatTok{0.1}\NormalTok{)) }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\FloatTok{0.08}\NormalTok{)} \NormalTok{\}} \end{Highlighting} \end{Shaded} Plot the abundance time series: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(N, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{15}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Abundance"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-23-1} \end{center} Examples of the other three utilities of using \texttt{for()} loops over replicate are shown in the example cases and exercises. \hypertarget{adv-funcs}{% \section{Function Writing}\label{adv-funcs}} In Monte Carlo analyses, it is often useful to wrap code into functions. This allows for easy replication and setting adjustment (e.g., if you wanted to compare the growth trajectories of two populations with differing growth rates). 
As an example, turn the population model shown above into a function: \begin{Shaded} \begin{Highlighting}[] \NormalTok{pop_sim =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(nt, grow, sd_grow, U, }\DataTypeTok{plot =}\NormalTok{ F) \{} \NormalTok{ N =}\StringTok{ }\OtherTok{NULL} \CommentTok{# empty flexible vector container} \NormalTok{ N[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\DecValTok{1000} \ControlFlowTok{for}\NormalTok{ (t }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{nt) \{} \NormalTok{ N[t] =}\StringTok{ }\NormalTok{(N[t}\DecValTok{-1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{grow }\OperatorTok{*}\StringTok{ }\KeywordTok{rlnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, sd_grow)) }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{U)} \NormalTok{ \}} \ControlFlowTok{if}\NormalTok{ (plot) \{} \KeywordTok{plot}\NormalTok{(N, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{15}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Abundance"}\NormalTok{)} \NormalTok{ \}} \NormalTok{ N} \NormalTok{\}} \end{Highlighting} \end{Shaded} This function takes five inputs: \begin{itemize} \tightlist \item \texttt{nt}: the number of years, \item \texttt{grow}: the population growth rate, \item \texttt{sd\_grow}: the amount of annual variability in the growth rate \item \texttt{U}: the annual exploitation rate \item \texttt{plot}: whether you wish to have a plot created. It has a default setting of \texttt{FALSE}: if you don't specify \texttt{plot\ =\ T} when you call \texttt{pop\_sim()}, you won't see a plot made. \end{itemize} It returns one output: the vector of population abundance. Execute your simulation function once using the same settings as before: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{pop_sim}\NormalTok{(}\DecValTok{100}\NormalTok{, }\FloatTok{1.1}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\FloatTok{0.08}\NormalTok{, T)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-25-1} \end{center} Now, you wish to replicate this simulation 1000 times. Use the \texttt{replicate()} function to do this: \begin{Shaded} \begin{Highlighting}[] \NormalTok{out =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{1000}\NormalTok{, }\DataTypeTok{expr =} \KeywordTok{pop_sim}\NormalTok{(}\DecValTok{100}\NormalTok{, }\FloatTok{1.1}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\FloatTok{0.08}\NormalTok{, F))} \end{Highlighting} \end{Shaded} If you do \texttt{dim(out)}, you'll see that years are stored as rows (there are 100 of them) and replicates are stored as columns (there are 1000 of them). Notice how wrapping the code in the function made the \texttt{replicate()} call easy. \textbf{Here are some advantages of wrapping code like this into a function}: \begin{itemize} \tightlist \item If you do the same task at multiple places in your script, you don't need to type all of the code to perform the task, just the function call. \item If you need to change the way the function behaves (i.e., the function body), you only need to change it in one place: in the function definition. 
\item You can easily change the settings of the code (e.g., whether or not you want to see the plot) in one place \item Function writing can lead to shorter scripts \item Function writing can lead to more readable code (if it is easy for readers to interpret what your functions do - informative function and argument names, as well as documentation can help here) \end{itemize} \hypertarget{mc-summaries}{% \section{Summarization}\label{mc-summaries}} After replicating a calculation many times, you will need to summarize the results. Here are several examples using the \texttt{out} matrix from Section \ref{adv-funcs}. \hypertarget{central-tendency}{% \subsection{Central Tendency}\label{central-tendency}} You can calculate the mean abundance each year across your iterations using the \texttt{apply()} function (Section \ref{data-summaries}): \begin{Shaded} \begin{Highlighting}[] \NormalTok{N_mean =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, mean)} \NormalTok{N_mean[}\DecValTok{1}\OperatorTok{:}\DecValTok{10}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1000.000 1017.150 1034.970 1052.022 1068.358 1086.700 1102.685 ## [8] 1125.383 1141.774 1159.921 \end{verbatim} You could do the same thing using \texttt{median} rather than \texttt{mean}. The mode is more difficult to calculate in R, if you need to get the mode, try to Google it\footnote{Google is an R programmer's best friend. There is a massive online community for R, and if you have a question on something, it has almost certainly been asked somewhere on the web.}. \hypertarget{variability}{% \subsection{Variability}\label{variability}} One of the primary reasons to conduct a Monte Carlo analysis is to obtain estimates of variability. You can summarize the variability easily using the \texttt{quantile()} function: \begin{Shaded} \begin{Highlighting}[] \CommentTok{# obtain the 10% and 90% quantiles each year across iterations} \NormalTok{N_quants =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{quantile}\NormalTok{(x, }\KeywordTok{c}\NormalTok{(}\FloatTok{0.1}\NormalTok{, }\FloatTok{0.9}\NormalTok{)))} \end{Highlighting} \end{Shaded} Notice how a user-defined function was passed to \texttt{apply()}. Now plot the summary of the randomized abundances as a time series like before: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(N_mean, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{10000}\NormalTok{))} \KeywordTok{lines}\NormalTok{(N_quants[}\DecValTok{1}\NormalTok{,], }\DataTypeTok{lty =} \DecValTok{2}\NormalTok{)} \KeywordTok{lines}\NormalTok{(N_quants[}\DecValTok{2}\NormalTok{,], }\DataTypeTok{lty =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-30-1} \end{center} The range within the two dashed lines represents the range that encompassed the central 80\% of the random abundances each year. \hypertarget{frequencies}{% \subsection{Frequencies}\label{frequencies}} Often you will want to count how many times something happened. In some cases, the fraction of times something happened can be interpreted as a probability of that event occuring. The \texttt{table()} function is useful for counting occurrences of discrete events. 
Suppose you are interested in how many of your iterations resulted in fewer than 1000 individuals at year 10: \begin{Shaded} \begin{Highlighting}[] \NormalTok{out10 =}\StringTok{ }\KeywordTok{ifelse}\NormalTok{(out[}\DecValTok{10}\NormalTok{,] }\OperatorTok{<}\StringTok{ }\DecValTok{1000}\NormalTok{, }\StringTok{"less10"}\NormalTok{, }\StringTok{"greater10"}\NormalTok{)} \KeywordTok{table}\NormalTok{(out10)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## out10 ## greater10 less10 ## 649 351 \end{verbatim} Suppose you are also interested in how many of your iterations resulted in fewer than 1100 individuals at year 20: \begin{Shaded} \begin{Highlighting}[] \NormalTok{out20 =}\StringTok{ }\KeywordTok{ifelse}\NormalTok{(out[}\DecValTok{20}\NormalTok{,] }\OperatorTok{<}\StringTok{ }\DecValTok{1100}\NormalTok{, }\StringTok{"less20"}\NormalTok{, }\StringTok{"greater20"}\NormalTok{)} \KeywordTok{table}\NormalTok{(out20)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## out20 ## greater20 less20 ## 587 413 \end{verbatim} Now suppose you are interested in how these two metrics are related: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{table}\NormalTok{(out10, out20)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## out20 ## out10 greater20 less20 ## greater10 486 163 ## less10 101 250 \end{verbatim} One example of an interpretation of this output might be that populations that were greater than 1000 at year 10 were commonly greater than 1100 at year 20. Also, if a population was less than 1000 at year 10, it was more likely to be less than 1100 at year 20 than to be greater than it. You can turn these into probabilities (if you believe your model represents reality) by dividing each cell by the total number of iterations: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{round}\NormalTok{(}\KeywordTok{table}\NormalTok{(out10, out20)}\OperatorTok{/}\DecValTok{1000}\NormalTok{, }\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## out20 ## out10 greater20 less20 ## greater10 0.49 0.16 ## less10 0.10 0.25 \end{verbatim} \hypertarget{sim-examples}{% \section{Simulation-Based Examples}\label{sim-examples}} \hypertarget{rnorm-ex}{% \subsection{\texorpdfstring{Test \texttt{rnorm}}{Test rnorm}}\label{rnorm-ex}} In this example, you will verify that the function \texttt{rnorm()} works the same way that \texttt{qnorm()} and \texttt{pnorm()} indicate that it should work. That is, you will verify that random deviates generated using \texttt{rnorm()} have the same properties as the true normal distribution given by \texttt{qnorm()} and \texttt{pnorm()}. Hopefully it will also reinforce the way the random, quantile, and cumulative distribution functions work in R. 
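Before starting, it may help to recall how these functions are related. Roughly: \texttt{pnorm()} converts a value into a cumulative probability, \texttt{qnorm()} converts a cumulative probability back into a value, and \texttt{rnorm()} draws random values. A minimal sketch for the standard normal distribution (the specific numbers are for illustration only):

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pnorm(0)          }\CommentTok{# P(X <= 0) for a standard normal: 0.5}
\NormalTok{qnorm(0.975)      }\CommentTok{# value with 97.5% of the distribution below it: about 1.96}
\NormalTok{qnorm(pnorm(1.3)) }\CommentTok{# qnorm() reverses pnorm(), so this returns 1.3}
\NormalTok{rnorm(3)          }\CommentTok{# three random standard normal deviates}
\end{Highlighting}
\end{Shaded}

The example below checks that a sample generated by \texttt{rnorm()} agrees with what \texttt{qnorm()} and \texttt{pnorm()} say it should look like.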
First, specify the mean and standard deviation for this example:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mu =}\StringTok{ }\DecValTok{500}\NormalTok{; sig =}\StringTok{ }\DecValTok{30}
\end{Highlighting}
\end{Shaded}

Now make up \texttt{n} (any number of your choosing, something greater than 10) random deviates from this normal distribution:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{random =}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{100}\NormalTok{, mu, sig)}
\end{Highlighting}
\end{Shaded}

Test the quantiles (obtain the values that \texttt{p} * 100\% of the quantities fall below, both for the random numbers and from the \texttt{qnorm()} function):

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{p =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.01}\NormalTok{, }\FloatTok{0.99}\NormalTok{, }\FloatTok{0.01}\NormalTok{)}
\NormalTok{random_q =}\StringTok{ }\KeywordTok{quantile}\NormalTok{(random, p)}
\NormalTok{normal_q =}\StringTok{ }\KeywordTok{qnorm}\NormalTok{(p, mu, sig)}
\KeywordTok{plot}\NormalTok{(normal_q }\OperatorTok{~}\StringTok{ }\NormalTok{random_q); }\KeywordTok{abline}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-38-1} \end{center}

The fact that all the quantiles fall around the 1:1 line suggests the \texttt{n} random samples are indeed from a normal distribution. Any deviations you see are due to sampling error. If you increase \texttt{n} to \texttt{n\ =\ 1e6} (one million), you'll see essentially no deviations. This is called a \textbf{q-q plot}, and is frequently used to assess the fit of data to a distribution.

Now test the random values in their agreement with the \texttt{pnorm()} function. Plot the cumulative distribution functions for the truly normal curve and the one approximated by the random deviates:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{q =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{400}\NormalTok{, }\DecValTok{600}\NormalTok{, }\DecValTok{10}\NormalTok{)}
\NormalTok{random_cdf =}\StringTok{ }\KeywordTok{ecdf}\NormalTok{(random)}
\NormalTok{random_p =}\StringTok{ }\KeywordTok{random_cdf}\NormalTok{(q)}
\NormalTok{normal_p =}\StringTok{ }\KeywordTok{pnorm}\NormalTok{(q, mu, sig)}
\KeywordTok{plot}\NormalTok{(normal_p }\OperatorTok{~}\StringTok{ }\NormalTok{q, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{)}
\KeywordTok{points}\NormalTok{(random_p }\OperatorTok{~}\StringTok{ }\NormalTok{q, }\DataTypeTok{col =} \StringTok{"red"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-40-1} \end{center}

The \texttt{ecdf()} function obtains the empirical cumulative distribution function (which is essentially \texttt{pnorm()} computed from a sample). It returns a function: plug in any value and it gives the proportion of the sample that falls at or below that value.

\hypertarget{power-ex}{%
\subsection{Stochastic Power Analysis}\label{power-ex}}

A \textbf{power analysis} is one where the analyst wishes to determine how much power they will have to detect an effect. Power is inversely related to the probability of making a Type II Error: failing to reject a false null hypothesis\footnote{English: concluding there is no effect when there truly is one}.
In other words, having high power means that you have a high chance of detecting an effect if an effect truly exists. Power is a function of the effect size, the sample size \texttt{n}, and the variability in the data. Strong effects are easier to detect than weak ones, more samples increase the test's sensitivity (the ability to detect weak effects), and lower variability results in more power.

You can conduct a power analysis using stochastic simulation (i.e., a Monte Carlo analysis). Here, you will write a power analysis to determine how likely you are to be able to correctly identify what you deem to be a biologically-meaningful difference in survival between two tagging procedures. You know one tagging procedure has approximately a 10\% mortality rate (10\% of tagged fish die within the first 12 hours as a result of the tagging process). Another cheaper and less labor-intensive method has been proposed, but before implementing it, your agency wishes to determine if it will have a meaningful impact on the reliability of the study or on the ability of the crew to tag enough individuals that will survive long enough to be useful.

You and your colleagues determine that if the mortality rate of the new tagging method reaches 25\%, then gains in time and cost-efficiency would be offset by needing to tag more fish (because more will die). You have decided to perform a small-scale study to determine if using the new method could result in 25\% or more mortality. The study will tag \texttt{n} individuals using both methods (new and old) and track the fraction that survived after 12 hours. Before performing the study, however, you deem it important to determine how large \texttt{n} needs to be to answer this question. You decide to use a stochastic power analysis to help your research group. The small-scale study can tag a total of at most 100 fish with the currently available resources. Could you tag fewer than 100 total individuals and still have a high probability of detecting a statistically significant difference in mortality?

The stochastic power analysis approach works like this (this is called \textbf{pseudocode}):

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Simulate data under the reality that the difference is real with \texttt{n} observations per treatment, where \texttt{n\ \textless{}\ 100/2}
\item
  Fit the model that will be used when the real data are collected to the simulated data
\item
  Determine if the difference was detected with a significant p-value
\item
  Replicate steps 1 - 3 many times
\item
  Replicate step 4 while varying \texttt{n} over the interval from 10 to 50
\item
  Determine what fraction of the p-values were deemed significant at each \texttt{n}
\end{enumerate}

Step 2 will require fitting a generalized linear model; for a review, revisit Section \ref{glms} (specifically Section \ref{logis-regression} on logistic regression).
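One detail worth previewing: in the function below, the p-value for the treatment effect is extracted by position with \texttt{summary(fit)\$coef[2,4]}. This works because the coefficient summary of a \texttt{glm()} fit is a matrix with one row per coefficient and columns for the estimate, standard error, z value, and p-value. A minimal, self-contained sketch of that structure (the objects \texttt{x}, \texttt{y}, and \texttt{fit} here are made up just for illustration):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# made-up data purely to show the structure of the output}
\NormalTok{x = rep(c("old", "new"), each = 10)}
\NormalTok{y = rbinom(20, size = 1, prob = 0.3)}
\NormalTok{fit = glm(y ~ x, family = binomial)}
\CommentTok{# rows are coefficients; columns are Estimate, Std. Error, z value, Pr(>|z|)}
\NormalTok{summary(fit)$coefficients}
\CommentTok{# so row 2, column 4 is the p-value for the second coefficient}
\NormalTok{summary(fit)$coefficients[2,4] }\CommentTok{# $coef is shorthand for $coefficients}
\end{Highlighting}
\end{Shaded}

In the function below, row 2 corresponds to the \texttt{methodnew} coefficient because \texttt{"old"} is set as the reference level.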
First, create a function that will generate data, fit the model, and determine if the p-value is significant (steps 1-3 above): \begin{Shaded} \begin{Highlighting}[] \NormalTok{sim_fit =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(n, }\DataTypeTok{p_old =} \FloatTok{0.10}\NormalTok{, }\DataTypeTok{p_new =} \FloatTok{0.25}\NormalTok{) \{} \CommentTok{### step 1: create the data }\AlertTok{###} \CommentTok{# generate random response data} \NormalTok{ dead_old =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_old)} \NormalTok{ dead_new =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_new)} \CommentTok{# create the predictor variable} \NormalTok{ method =}\StringTok{ }\KeywordTok{rep}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"old"}\NormalTok{, }\StringTok{"new"}\NormalTok{), }\DataTypeTok{each =}\NormalTok{ n)} \CommentTok{# create a data.frame to pass to glm} \NormalTok{ df =}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{dead =} \KeywordTok{c}\NormalTok{(dead_old, dead_new), }\DataTypeTok{method =}\NormalTok{ method)} \CommentTok{# relevel so old is the reference} \NormalTok{ df}\OperatorTok{$}\NormalTok{method =}\StringTok{ }\KeywordTok{relevel}\NormalTok{(df}\OperatorTok{$}\NormalTok{method, }\DataTypeTok{ref =} \StringTok{"old"}\NormalTok{)} \CommentTok{### step 2: fit the model }\AlertTok{###} \NormalTok{ fit =}\StringTok{ }\KeywordTok{glm}\NormalTok{(dead }\OperatorTok{~}\StringTok{ }\NormalTok{method, }\DataTypeTok{data =}\NormalTok{ df, }\DataTypeTok{family =}\NormalTok{ binomial)} \CommentTok{### step 3: determine if a sig. p-value was found }\AlertTok{###} \CommentTok{# extract the p-value} \NormalTok{ pval =}\StringTok{ }\KeywordTok{summary}\NormalTok{(fit)}\OperatorTok{$}\NormalTok{coef[}\DecValTok{2}\NormalTok{,}\DecValTok{4}\NormalTok{]} \CommentTok{# determine if it was found to be significant} \NormalTok{ pval }\OperatorTok{<}\StringTok{ }\FloatTok{0.05} \NormalTok{\}} \end{Highlighting} \end{Shaded} Next, for steps 4 and 5, set up a \textbf{nested \texttt{for} loop}. This will have two loops: one that loops over sample sizes (step 5) and one that loops over replicates of each sample size (step 4). First, create the looping objects and containers: \begin{Shaded} \begin{Highlighting}[] \NormalTok{I =}\StringTok{ }\DecValTok{500} \CommentTok{# the number of replicates at each sample size} \NormalTok{n_try =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{50}\NormalTok{, }\DecValTok{10}\NormalTok{) }\CommentTok{# the test sample sizes} \NormalTok{N =}\StringTok{ }\KeywordTok{length}\NormalTok{(n_try) }\CommentTok{# count them} \CommentTok{# container: } \NormalTok{out =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \end{Highlighting} \end{Shaded} Now perform the nested loop. The inner-loop iterations will be completed for each element of \texttt{n} in the sequence \texttt{1:N}. The output (which is one element: \texttt{TRUE} or \texttt{FALSE} based on the significance of the p-value) is stored in the corresponding row and column for that iteration of that sample size. 
\begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ out[i,n] =}\StringTok{ }\KeywordTok{sim_fit}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_try[n])} \NormalTok{ \}} \NormalTok{\}} \end{Highlighting} \end{Shaded} You now have a matrix of \texttt{TRUE} and \texttt{FALSE} elements that indicates whether a significant difference was found at the \(\alpha = 0.05\) level if the effect was truly as large as you care about. You can obtain the proportion of all the replicates at each sample size that resulted in a significant difference using the \texttt{mean()} function with \texttt{apply()}: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of Finding Effect (Power)"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-46-1} \end{center} Even if you tagged 100 fish total, you would only have a 49\% chance of saying the effect (which truly is there!) is present under the null hypothesis testing framework. Suppose you and your colleagues aren't relying on p-values in this case, and are purely interested in how precisely the \textbf{effect size} would be estimated. Adapt your function to determine how frequently you would be able to estimate the true mortality of the new method within +/- 5\% based on the point estimate only (the estimate for the tagging mortality of the new method must be between 0.2 and 0.3 for a successful study). 
Change your function to calculate this additional metric and re-run the analysis: \begin{Shaded} \begin{Highlighting}[] \NormalTok{sim_fit =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(n, }\DataTypeTok{p_old =} \FloatTok{0.10}\NormalTok{, }\DataTypeTok{p_new =} \FloatTok{0.25}\NormalTok{) \{} \CommentTok{# create the data} \NormalTok{ dead_old =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_old)} \NormalTok{ dead_new =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_new)} \CommentTok{# create the predictor variable} \NormalTok{ method =}\StringTok{ }\KeywordTok{rep}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"old"}\NormalTok{, }\StringTok{"new"}\NormalTok{), }\DataTypeTok{each =}\NormalTok{ n)} \CommentTok{# create a data.frame to pass to glm} \NormalTok{ df =}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{dead =} \KeywordTok{c}\NormalTok{(dead_old, dead_new), }\DataTypeTok{method =}\NormalTok{ method)} \CommentTok{# relevel so old is the reference} \NormalTok{ df}\OperatorTok{$}\NormalTok{method =}\StringTok{ }\KeywordTok{relevel}\NormalTok{(df}\OperatorTok{$}\NormalTok{method, }\DataTypeTok{ref =} \StringTok{"old"}\NormalTok{)} \CommentTok{# fit the model} \NormalTok{ fit =}\StringTok{ }\KeywordTok{glm}\NormalTok{(dead }\OperatorTok{~}\StringTok{ }\NormalTok{method, }\DataTypeTok{data =}\NormalTok{ df, }\DataTypeTok{family =}\NormalTok{ binomial)} \CommentTok{# extract the p-value} \NormalTok{ pval =}\StringTok{ }\KeywordTok{summary}\NormalTok{(fit)}\OperatorTok{$}\NormalTok{coef[}\DecValTok{2}\NormalTok{,}\DecValTok{4}\NormalTok{]} \CommentTok{# determine if it was found to be significant} \NormalTok{ sig_pval =}\StringTok{ }\NormalTok{pval }\OperatorTok{<}\StringTok{ }\FloatTok{0.05} \CommentTok{# obtain the estimated mortality rate for the new method} \NormalTok{ p_new_est =}\StringTok{ }\KeywordTok{predict}\NormalTok{(fit, }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{method =} \KeywordTok{c}\NormalTok{(}\StringTok{"new"}\NormalTok{)),} \DataTypeTok{type =} \StringTok{"response"}\NormalTok{)} \CommentTok{# determine if it is +/- 5% from the true value} \NormalTok{ prc_est =}\StringTok{ }\NormalTok{p_new_est }\OperatorTok{>=}\StringTok{ }\NormalTok{(p_new }\OperatorTok{-}\StringTok{ }\FloatTok{0.05}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{p_new_est }\OperatorTok{<=}\StringTok{ }\NormalTok{(p_new }\OperatorTok{+}\StringTok{ }\FloatTok{0.05}\NormalTok{)} \CommentTok{# return a vector with these two elements} \KeywordTok{c}\NormalTok{(}\DataTypeTok{sig_pval =}\NormalTok{ sig_pval, }\DataTypeTok{prc_est =} \KeywordTok{unname}\NormalTok{(prc_est))} \NormalTok{\}} \CommentTok{# containers: } \NormalTok{out_sig =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \NormalTok{out_prc =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ tmp =}\StringTok{ }\KeywordTok{sim_fit}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_try[n]) }\CommentTok{# run sim} \NormalTok{ out_sig[i,n] =}\StringTok{ 
}\NormalTok{tmp[}\StringTok{"sig_pval"}\NormalTok{] }\CommentTok{# extract and store significance metric} \NormalTok{ out_prc[i,n] =}\StringTok{ }\NormalTok{tmp[}\StringTok{"prc_est"}\NormalTok{] }\CommentTok{# extract and store precision metric} \NormalTok{ \}} \NormalTok{\}} \KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{), }\DataTypeTok{mar =} \KeywordTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{,}\DecValTok{4}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{))} \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out_sig, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of Finding Effect (Power)"}\NormalTok{)} \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out_prc, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of a Precise Estimate"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-47-1} \end{center} It seems that even if you tagged 50 fish per treatment, you would have a 60\% chance of estimating that the mortality rate is between 0.2 and 0.3 if it was truly 0.25. You and your colleagues consider these results and determine that you will need to somehow acquire more funds to tag more fish in the small-scale study in order to have a high level of confidence in the results. \hypertarget{harv-ex}{% \subsection{Harvest Policy Analysis}\label{harv-ex}} In this example, you will simulate population dynamics under a more realistic model than in Sections \ref{for-loops} and \ref{adv-funcs} for the purpose of evaluating different harvest policies. Suppose you are a fisheries research biologist, and a commercial fishery for pink salmon (\emph{Oncorhynchus gorbuscha}) takes place in your district. For the past 10 years, it has been fished with an exploitation rate of 40\% (40\% of the fish that return each year have been harvested, exploitation rate is abbreviated by \(U\)), resulting in an average annual harvest of 8.5 million fish. The management plan is up for evaluation this year, and your supervisor has asked you to prepare an analysis that determines if more harvest could be sustained if a different exploitation rate were to be used in the future. Based on historical data, your best understanding implies that the stock is driven by Ricker spawner-recruit dynamics. That is, the total number of fish that return this year (recruits) is a function of the total number of fish that spawned (spawners) in the year of their birth. The Ricker model can be written this way: \begin{equation} R_t = \alpha S_{t-1} e^{-\beta S_{t-1} + \varepsilon_t} ,\varepsilon_t \sim N(0,\sigma) \label{eq:ricker-ch4} \end{equation} where \(\alpha\) is a parameter representing the maximum recruits per spawner (obtained at very low spawner abundances) and \(\beta\) is a measure of the strength of density-dependent mortality. Notice that the error term is in the exponent, which makes \(e^{\varepsilon_t}\) lognormal. 
You have estimates of the parameters\footnote{In reality, these estimates would have substantial uncertainty that you would need to propagate through your harvest policy analysis. In this example, you will ignore this complication}: \begin{itemize} \tightlist \item \(\alpha = 6\) \item \(\beta = 1 \times 10^{-7}\) \item \(\sigma = 0.4\) \end{itemize} You decide that you can build a policy analysis by simulating the stock forward through time under different exploitation rates. With enough iterations of the simulation, you will be able to see whether a different exploitation rate can provide more harvest than what is currently being extracted. First, write a function for your population model. Your function must: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item take the parameters, dimensions (number of years), and the policy variable (\(U\)) as input arguments \item simulate the population using Ricker dynamics \item calculate and return the average harvest and escapement over the number of future years you simulated. \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# Step #1: name the function and give it some arguments} \NormalTok{ricker_sim =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(ny, params, U) \{} \CommentTok{# extract the parameters out by name:} \NormalTok{ alpha =}\StringTok{ }\NormalTok{params[}\StringTok{"alpha"}\NormalTok{]} \NormalTok{ beta =}\StringTok{ }\NormalTok{params[}\StringTok{"beta"}\NormalTok{]} \NormalTok{ sigma =}\StringTok{ }\NormalTok{params[}\StringTok{"sigma"}\NormalTok{]} \CommentTok{# create containers} \CommentTok{# this is a neat trick to condense your code:} \NormalTok{ R =}\StringTok{ }\NormalTok{S =}\StringTok{ }\NormalTok{H =}\StringTok{ }\OtherTok{NULL} \CommentTok{# initialize the population in the first year} \CommentTok{# start the population at being fished at 40%} \CommentTok{# with lognormal error} \NormalTok{ R[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\KeywordTok{log}\NormalTok{(alpha }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\FloatTok{0.4}\NormalTok{))}\OperatorTok{/}\NormalTok{(beta }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\FloatTok{0.4}\NormalTok{)) }\OperatorTok{*}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, sigma))} \NormalTok{ S[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\NormalTok{R[}\DecValTok{1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{U)} \NormalTok{ H[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\NormalTok{R[}\DecValTok{1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{U} \CommentTok{# carry simulation forward through time} \ControlFlowTok{for}\NormalTok{ (y }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{ny) \{} \CommentTok{# use the ricker function with random lognormal noise} \NormalTok{ R[y] =}\StringTok{ }\NormalTok{S[y}\DecValTok{-1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{alpha }\OperatorTok{*}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{beta }\OperatorTok{*}\StringTok{ }\NormalTok{S[y}\DecValTok{-1}\NormalTok{] }\OperatorTok{+}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, sigma))} \CommentTok{#harvest and spawners are the same as before} \NormalTok{ S[y] =}\StringTok{ }\NormalTok{R[y] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} 
\OperatorTok{-}\StringTok{ }\NormalTok{U)} \NormalTok{ H[y] =}\StringTok{ }\NormalTok{R[y] }\OperatorTok{*}\StringTok{ }\NormalTok{U} \NormalTok{ \}} \CommentTok{# wrap output in a list object} \KeywordTok{list}\NormalTok{(} \DataTypeTok{mean_H =} \KeywordTok{mean}\NormalTok{(H),} \DataTypeTok{mean_S =} \KeywordTok{mean}\NormalTok{(S)} \NormalTok{ )} \NormalTok{\}} \end{Highlighting} \end{Shaded} Use the function once: \begin{Shaded} \begin{Highlighting}[] \NormalTok{params =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DataTypeTok{alpha =} \DecValTok{6}\NormalTok{, }\DataTypeTok{beta =} \FloatTok{1e-7}\NormalTok{, }\DataTypeTok{sigma =} \FloatTok{0.4}\NormalTok{)} \NormalTok{out =}\StringTok{ }\KeywordTok{ricker_sim}\NormalTok{(}\DataTypeTok{U =} \FloatTok{0.4}\NormalTok{, }\DataTypeTok{ny =} \DecValTok{20}\NormalTok{, }\DataTypeTok{params =}\NormalTok{ params)} \CommentTok{#average annual harvest (in millions)} \KeywordTok{round}\NormalTok{(out}\OperatorTok{$}\NormalTok{mean_H}\OperatorTok{/}\FloatTok{1e6}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 7.99 \end{verbatim} If you completed the stochastic power analysis example (Section \ref{power-ex}), you might see where this is going. You are going to replicate applying a fixed policy many times to a random system. This is the Monte Carlo part of the analysis. The policy part is that you will compare the output from several candidate exploitation rates to inform a decision about which is best. This time, set up your analysis using \texttt{sapply()} (to iterate over different values of \(U\)) and \texttt{replicate()} (to iterate over different random populations fished at each \(U\)) instead of performing a nested \texttt{for()} loop as in previous examples: \begin{Shaded} \begin{Highlighting}[] \NormalTok{U_try =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.4}\NormalTok{, }\FloatTok{0.6}\NormalTok{, }\FloatTok{0.01}\NormalTok{)} \NormalTok{n_rep =}\StringTok{ }\DecValTok{2000} \NormalTok{H_out =}\StringTok{ }\KeywordTok{sapply}\NormalTok{(U_try, }\ControlFlowTok{function}\NormalTok{(u) \{} \KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_rep, }\DataTypeTok{expr =}\NormalTok{ \{} \KeywordTok{ricker_sim}\NormalTok{(}\DataTypeTok{U =}\NormalTok{ u, }\DataTypeTok{ny =} \DecValTok{20}\NormalTok{, }\DataTypeTok{params =}\NormalTok{ params)}\OperatorTok{$}\NormalTok{mean_H}\OperatorTok{/}\FloatTok{1e6} \NormalTok{ \})} \NormalTok{\})} \end{Highlighting} \end{Shaded} The nested \texttt{replicate()} and \texttt{sapply()} method is a bit cleaner than a nested \texttt{for()} loop, but you have less control over the format of the output. Plot the output of your simulations using a boxplot. To make things easier, give \texttt{H\_out} column names representing the exploitation rate: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{colnames}\NormalTok{(H_out) =}\StringTok{ }\NormalTok{U_try} \KeywordTok{boxplot}\NormalTok{(H_out, }\DataTypeTok{outline =}\NormalTok{ F,} \DataTypeTok{xlab =} \StringTok{"U"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Harvest (Millions of Fish)"}\NormalTok{,} \DataTypeTok{col =} \StringTok{"tomato"}\NormalTok{, }\DataTypeTok{las =} \DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-53-1} \end{center} It appears the stock could produce more harvest than its current 8.5 million fish per year if it was fished harder. 
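Average escapement can be pulled out of the same kind of simulation. Here is a sketch that mirrors the harvest code exactly, but keeps \texttt{mean\_S} instead of \texttt{mean\_H} (it assumes \texttt{ricker\_sim()}, \texttt{params}, \texttt{U\_try}, and \texttt{n\_rep} are still defined from above); the resulting matrix, \texttt{S\_out}, is used below:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# same approach as for harvest, but store average escapement (in millions)}
\NormalTok{S_out = sapply(U_try, function(u) \{}
\NormalTok{  replicate(n = n_rep, expr = \{}
\NormalTok{    ricker_sim(U = u, ny = 20, params = params)$mean_S/1e6}
\NormalTok{  \})}
\NormalTok{\})}
\NormalTok{colnames(S_out) = U_try}
\end{Highlighting}
\end{Shaded}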
However, your supervisor also does not want to see the escapement drop below three-quarters of what it has been in recent history (75\% of approximately 13 million fish). They ask you to obtain the expected average annual escapement as well as harvest. You can simply re-run the simulation code, extracting \texttt{mean\_S} rather than \texttt{mean\_H} (as sketched above); call this output \texttt{S\_out} and plot it just like harvest (if you're curious, this blue color is \texttt{col\ =\ "skyblue"}):

\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-54-1} \end{center}

After seeing this information, your supervisor realizes they are faced with a trade-off: the stock could produce more with high exploitation rates, but they are concerned that pushing the stock too low would be unsustainable. They tell you to determine the probability that the average escapement would not be pushed below 75\% of 13 million at each exploitation rate, as well as the probability that the average annual harvests will be at least 20\% greater than they are currently (approximately 8.5 million fish). Given your output, this is easy:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# determine if each element meets escapement criterion}
\NormalTok{Smeet =}\StringTok{ }\NormalTok{S_out }\OperatorTok{>}\StringTok{ }\NormalTok{(}\FloatTok{0.75} \OperatorTok{*}\StringTok{ }\DecValTok{13}\NormalTok{)}
\CommentTok{# determine if each element meets harvest criterion}
\NormalTok{Hmeet =}\StringTok{ }\NormalTok{H_out }\OperatorTok{>}\StringTok{ }\NormalTok{(}\FloatTok{1.2} \OperatorTok{*}\StringTok{ }\FloatTok{8.5}\NormalTok{)}
\CommentTok{# calculate the probability of each occurring at a given exploitation rate}
\CommentTok{# remember, the mean of a logical vector calculates the proportion of TRUEs}
\NormalTok{p_Smeet =}\StringTok{ }\KeywordTok{apply}\NormalTok{(Smeet, }\DecValTok{2}\NormalTok{, mean)}
\NormalTok{p_Hmeet =}\StringTok{ }\KeywordTok{apply}\NormalTok{(Hmeet, }\DecValTok{2}\NormalTok{, mean)}
\end{Highlighting}
\end{Shaded}

You plot this for your supervisor as follows:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# the U levels to highlight on plot}
\NormalTok{plot_U =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.4}\NormalTok{, }\FloatTok{0.6}\NormalTok{, }\FloatTok{0.05}\NormalTok{)}
\CommentTok{# create an empty plot}
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mar =} \KeywordTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{,}\DecValTok{4}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))}
\KeywordTok{plot}\NormalTok{(p_Smeet }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet, }\DataTypeTok{type =} \StringTok{"n"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"Probability of Meeting Harvest Criterion"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"Probability of Meeting Escapement Criterion"}\NormalTok{)}
\CommentTok{# add gridlines}
\KeywordTok{abline}\NormalTok{(}\DataTypeTok{v =} \KeywordTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)}
\KeywordTok{abline}\NormalTok{(}\DataTypeTok{h =} \KeywordTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)}
\CommentTok{# draw on the tradeoff curve}
\KeywordTok{lines}\NormalTok{(p_Smeet }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{lwd =} \DecValTok{2}\NormalTok{)}
\CommentTok{# add points and text for particular U policies}
\KeywordTok{points}\NormalTok{(p_Smeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U] }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U],}
\DataTypeTok{pch =} \DecValTok{16}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)}
\KeywordTok{text}\NormalTok{(p_Smeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U] }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U],}
\DataTypeTok{labels =}\NormalTok{ U_try[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U], }\DataTypeTok{pos =} \KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-56-1} \end{center}

Equipped with this analysis, your supervisor plans to go to the policy-makers with the recommendation of adjusting the exploitation rate policy to use \(U = 0.5\), because they think it balances the trade-off. Notice that if the status quo were maintained, your model suggests you would have complete certainty of staying where you are now: escapement will remain above 75\% of its current level with a 100\% chance, but you would have no chance of increasing harvests by more than 20\% over their current level. Small increases in the exploitation rate (e.g., from 0.4 to 0.45) yield a reasonably large gain in harvest performance, but hardly any loss on the escapement criterion. Your supervisor is willing to live with a 90\% chance that the escapement will stay where they desire in order to gain a \textgreater80\% chance of achieving the desired increase in harvest.

The utility of using Monte Carlo methods in this example is the ability to calculate the probability of some event you are interested in. There are analytical (i.e., not simulation-based) solutions to predict the annual harvest and escapement from a fixed \(U\) for a population with parameters \(\alpha\) and \(\beta\), but by incorporating randomness, you were able to obtain the relative weights of outcomes other than the expectation under the deterministic Ricker model, thereby allowing the assignment of probabilities to meeting the two criteria.

\hypertarget{resample-examples}{%
\section{Resampling-Based Examples}\label{resample-examples}}

\hypertarget{boot-test-ex}{%
\subsection{The Bootstrap}\label{boot-test-ex}}

Say you have a fitted model from which you want to propagate the uncertainty in some derived quantity. Consider the case of the \textbf{von Bertalanffy growth model}. This is a non-linear model used to predict the size of an organism (weight or length) based on its age. The model can be written as a non-linear regression model (see Section \ref{nls}):

\begin{equation}
L_i = L_{\infty}\left(1 - e^{-k(age_i-t_0)}\right) + \varepsilon_i, \varepsilon_i \sim N(0, \sigma)
\label{eq:vonB}
\end{equation}

where \(L_i\) and \(age_i\) are the observed length and age of individual \(i\), respectively, and \(L_{\infty}\), \(k\), and \(t_0\) are parameters to be estimated. The interpretations of the parameters are as follows:

\begin{itemize}
\tightlist
\item
  \(L_{\infty}\): the maximum average length achieved
\item
  \(k\): a growth coefficient linked to metabolic rate.
It specifies the rate of increase in length as the fish ages early in life
\item
  \(t_0\): the theoretical age when length equals zero (the x-intercept).
\end{itemize}

Use the data set \texttt{growth.csv} for this example (see the \protect\hyperlink{data-sets}{instructions} on acquiring data files). Read in and plot the data:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{dat =}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"../Data/growth.csv"}\NormalTok{)}
\KeywordTok{plot}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{age, }\DataTypeTok{data =}\NormalTok{ dat, }\DataTypeTok{pch =} \DecValTok{16}\NormalTok{, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-58-1} \end{center}

Due to a large amount of variability in individual growth rates, the relationship looks pretty noisy. Notice how you have mostly young fish in your sample: this is characteristic of ``random'' sampling of fish populations.

Suppose you would like to obtain the probability that an average-sized fish of each age is sexually mature. You know that fish of this species mature at approximately 450 mm, and you simply need to determine the fraction of all fish at each age that are greater than 450 mm. However, you don't have any observations for some ages (e.g., age 8), so you cannot simply calculate this fraction based on your raw data. You need to fit the von Bertalanffy growth model, then carry the statistical uncertainty from the fitted model forward to the predicted length-at-age. This would be difficult to obtain using only the coefficient estimates and their standard errors, because of the non-linear relationship between the \(x\) and \(y\) variables.

Enter the \textbf{bootstrap}, which is a Monte Carlo analysis using an observed data set and a model. The \textbf{pseudocode} for a bootstrap analysis is:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Resample from the original data (with replacement)
\item
  Fit a model of interest
\item
  Derive some quantity of interest from the fitted model
\item
  Repeat steps 1 - 3 many times
\item
  Summarize the randomized quantities from step 4
\end{enumerate}

In this example, you will apply a bootstrap approach to obtain the distribution of expected fish lengths at each age, then use these distributions to quantify the probability that an average-sized fish of each age is mature (i.e., greater than 450 mm). You will write a function for each of steps 1 - 3 above. The first is to resample the data:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{randomize =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(dat) \{}
\CommentTok{# number of observed pairs}
\NormalTok{ n =}\StringTok{ }\KeywordTok{nrow}\NormalTok{(dat)}
\CommentTok{# sample the rows to determine which will be kept}
\NormalTok{ keep =}\StringTok{ }\KeywordTok{sample}\NormalTok{(}\DataTypeTok{x =} \DecValTok{1}\OperatorTok{:}\NormalTok{n, }\DataTypeTok{size =}\NormalTok{ n, }\DataTypeTok{replace =}\NormalTok{ T)}
\CommentTok{# retrieve these rows from the data}
\NormalTok{ dat[keep,]}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

Notice the use of \texttt{replace\ =\ T} here: without this, there would be no bootstrap. You would just sample the same observations over and over; only their order in the rows would be shuffled.
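You can see the difference on a tiny made-up vector (the numbers are arbitrary): sampling without replacement only shuffles the values, while sampling with replacement can repeat some values and omit others, which is what creates a new ``version'' of the data set each time:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{x = c(11, 12, 13, 14, 15)}
\NormalTok{sample(x, size = 5, replace = F) }\CommentTok{# a shuffle: every value appears exactly once}
\NormalTok{sample(x, size = 5, replace = T) }\CommentTok{# a bootstrap draw: some values repeat, others are left out}
\end{Highlighting}
\end{Shaded}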
Next, write a function to fit the model (revisit Section \ref{nls} for more details on \texttt{nls()}): \begin{Shaded} \begin{Highlighting}[] \NormalTok{fit_vonB =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(dat) \{} \KeywordTok{nls}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{linf }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{k }\OperatorTok{*}\StringTok{ }\NormalTok{(age }\OperatorTok{-}\StringTok{ }\NormalTok{t0))),} \DataTypeTok{data =}\NormalTok{ dat,} \DataTypeTok{start =} \KeywordTok{c}\NormalTok{(}\DataTypeTok{linf =} \DecValTok{600}\NormalTok{, }\DataTypeTok{k =} \FloatTok{0.3}\NormalTok{, }\DataTypeTok{t0 =} \FloatTok{-0.2}\NormalTok{)} \NormalTok{ )} \NormalTok{\}} \end{Highlighting} \end{Shaded} This function will return a fitted model object when executed. Next, write a function to predict mean length-at-age: \begin{Shaded} \begin{Highlighting}[] \CommentTok{# create a vector of ages} \NormalTok{ages =}\StringTok{ }\KeywordTok{min}\NormalTok{(dat}\OperatorTok{$}\NormalTok{age)}\OperatorTok{:}\KeywordTok{max}\NormalTok{(dat}\OperatorTok{$}\NormalTok{age)} \NormalTok{pred_vonB =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(fit) \{} \CommentTok{# extract the coefficients} \NormalTok{ ests =}\StringTok{ }\KeywordTok{coef}\NormalTok{(fit)} \CommentTok{# predict length-at-age} \NormalTok{ ests[}\StringTok{"linf"}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{ests[}\StringTok{"k"}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(ages }\OperatorTok{-}\StringTok{ }\NormalTok{ests[}\StringTok{"t0"}\NormalTok{])))} \NormalTok{\}} \end{Highlighting} \end{Shaded} Notice your function will use the object \texttt{ages} even though it was not defined in the function. This has to do with \textbf{lexical scoping} and \textbf{environments}, which are beyond the scope of this introductory material. If you'd like more details, see the section in \citet{adv-r-cite} on it\footnote{The section on \textbf{lexical scoping} is found here: \url{http://adv-r.had.co.nz/Functions.html\#lexical-scoping}}. Basically, if an object with the same name as one defined in the function exists outside of the function, the function will use the one that is defined within the function. If there is no object defined in the function with that name, it will look outside of the function for that object. 
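A tiny illustration of that rule (the objects \texttt{a}, \texttt{f}, and \texttt{g} are made up just for this demonstration):

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{a = 10}
\NormalTok{f = function() \{}
\NormalTok{  a * 2 }\CommentTok{# 'a' is not defined inside f(), so R looks outside and finds 10}
\NormalTok{\}}
\NormalTok{f() }\CommentTok{# returns 20}
\NormalTok{g = function() \{}
\NormalTok{  a = 1 }\CommentTok{# this 'a' is local to g()}
\NormalTok{  a * 2}
\NormalTok{\}}
\NormalTok{g() }\CommentTok{# returns 2; the 'a' outside the function is unchanged}
\end{Highlighting}
\end{Shaded}

This is why \texttt{pred\_vonB()} can use \texttt{ages} without it being passed as an argument: \texttt{ages} is not defined inside the function, so R finds the version defined in your workspace.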
Now, use these three functions to perform one iteration: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{pred_vonB}\NormalTok{(}\DataTypeTok{fit =} \KeywordTok{fit_vonB}\NormalTok{(}\DataTypeTok{dat =} \KeywordTok{randomize}\NormalTok{(}\DataTypeTok{dat =}\NormalTok{ dat)))} \end{Highlighting} \end{Shaded} You can wrap this inside of a \texttt{replicate()} call to perform step 4 above: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{2}\NormalTok{)} \NormalTok{out =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{100}\NormalTok{, }\DataTypeTok{expr =}\NormalTok{ \{} \KeywordTok{pred_vonB}\NormalTok{(}\DataTypeTok{fit =} \KeywordTok{fit_vonB}\NormalTok{(}\DataTypeTok{dat =} \KeywordTok{randomize}\NormalTok{(}\DataTypeTok{dat =}\NormalTok{ dat)))} \NormalTok{\})} \KeywordTok{dim}\NormalTok{(out)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10 100 \end{verbatim} It appears the rows are different ages and the columns are different bootstrapped iterations. Summarize the random lengths at each age: \begin{Shaded} \begin{Highlighting}[] \NormalTok{summ =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{c}\NormalTok{(}\DataTypeTok{mean =} \KeywordTok{mean}\NormalTok{(x), }\KeywordTok{quantile}\NormalTok{(x, }\KeywordTok{c}\NormalTok{(}\FloatTok{0.025}\NormalTok{, }\FloatTok{0.975}\NormalTok{))))} \end{Highlighting} \end{Shaded} Plot the data, the summarized ranges of mean lengths, and the length at which all fish are assumed to be mature (450 mm) \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{age, }\DataTypeTok{data =}\NormalTok{ dat, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{16}\NormalTok{,} \DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, }\KeywordTok{max}\NormalTok{(dat}\OperatorTok{$}\NormalTok{length, summ[}\StringTok{"97.5%"}\NormalTok{,])),} \DataTypeTok{ylab =} \StringTok{"Length (mm)"}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Age (years)"}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"mean"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{lwd =} \DecValTok{2}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"2.5%"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"97.5%"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{abline}\NormalTok{(}\DataTypeTok{h =} \DecValTok{450}\NormalTok{, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-66-1} \end{center} Obtain the fraction of iterations that resulted in the mean length-at-age being greater than 450 mm. 
This is interpreted as the probability that the average-sized fish of each age is mature: \begin{Shaded} \begin{Highlighting}[] \NormalTok{p_mat =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{mean}\NormalTok{(x }\OperatorTok{>}\StringTok{ }\DecValTok{450}\NormalTok{))} \KeywordTok{plot}\NormalTok{(p_mat }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{type =} \StringTok{"b"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{17}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Age (years)"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Probability of Average Fish Mature"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-68-1} \end{center} This \textbf{maturity schedule} can be used by fishery managers in attempting to decide which ages should be allowed to be harvested and which should be allowed to grow more\footnote{possibly in a \textbf{yield-per-recruit} analysis}. Because each age has an associated expected length, managers can use what they know about the size selectivity of various gear types to set policies that attempt to target some ages more than others. \hypertarget{perm-test-ex}{% \subsection{Permutation Test}\label{perm-test-ex}} In the previous example (Section \ref{boot-test-ex}), you learned about the bootstrap. A related Monte Carlo analysis is the \textbf{permutation test}. This is a non-parametric statistical test used to determine if there is a statistically-significant difference in the mean of some quantity between two populations. It is used in cases where the assumptions of a generalized linear model may not be met, but a p-value is still required. The \textbf{pseudocode} for the permutation test is: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Calculate the difference between means based on the original data set \item Shuffle the group assignments randomly among the observations \item Calculate the difference between the randomly-assigned groups \item Repeat steps 2 - 3 many times. This builds the \textbf{null distribution}: the distribution of the test statistic (the difference) assuming the null hypothesis (that there is no difference) in means is true \item Determine what fraction of the absolute differences were larger than the original difference. This constitutes a \textbf{two-tailed} p-value. One-tailed tests can also be derived using the same steps 1 - 4, which is left as an exercise. \end{enumerate} Use the data set \texttt{ponds.csv} for this example (see the \protect\hyperlink{data-sets}{instructions} on acquiring data files). This is the same data set used for \protect\hyperlink{ex1b}{Exercise 1B}, revisit that exercise for details on this hypothetical data set. Read in and plot the data: \begin{Shaded} \begin{Highlighting}[] \NormalTok{dat =}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"ponds.csv"}\NormalTok{)} \KeywordTok{plot}\NormalTok{(chl.a }\OperatorTok{~}\StringTok{ }\NormalTok{treatment, }\DataTypeTok{data =}\NormalTok{ dat)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-70-1} \end{center} It appears as though there is a relatively strong signal indicating a difference. Use the permutation test to determine if it is statistically significant. 
Step 1 from the pseudocode is to calculate the observed difference between groups: \begin{Shaded} \begin{Highlighting}[] \NormalTok{Dobs =}\StringTok{ }\KeywordTok{mean}\NormalTok{(dat}\OperatorTok{$}\NormalTok{chl.a[dat}\OperatorTok{$}\NormalTok{treatment }\OperatorTok{==}\StringTok{ "Add"}\NormalTok{]) }\OperatorTok{-}\StringTok{ }\KeywordTok{mean}\NormalTok{(dat}\OperatorTok{$}\NormalTok{chl.a[dat}\OperatorTok{$}\NormalTok{treatment }\OperatorTok{==}\StringTok{ "Control"}\NormalTok{])} \NormalTok{Dobs} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 26.166 \end{verbatim} Write a function to perform one iteration of steps 2 - 3 from the pseudocode: \begin{Shaded} \begin{Highlighting}[] \CommentTok{# x is the group: Add or Control} \CommentTok{# y is chl.a} \NormalTok{perm =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x, y) \{} \CommentTok{# turn x to a character, easier to deal with} \NormalTok{ x =}\StringTok{ }\KeywordTok{as.character}\NormalTok{(x)} \CommentTok{# shuffle the x values:} \NormalTok{ x_shuff =}\StringTok{ }\KeywordTok{sample}\NormalTok{(x)} \CommentTok{# calculate the mean of each group:} \NormalTok{ x_bar_add =}\StringTok{ }\KeywordTok{mean}\NormalTok{(y[x_shuff }\OperatorTok{==}\StringTok{ "Add"}\NormalTok{])} \NormalTok{ x_bar_ctl =}\StringTok{ }\KeywordTok{mean}\NormalTok{(y[x_shuff }\OperatorTok{==}\StringTok{ "Control"}\NormalTok{])} \CommentTok{# calculate the difference:} \NormalTok{ x_bar_add }\OperatorTok{-}\StringTok{ }\NormalTok{x_bar_ctl} \NormalTok{\}} \end{Highlighting} \end{Shaded} Use your function once: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{perm}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{treatment, }\DataTypeTok{y =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{chl.a)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10.648 \end{verbatim} Perform step 4 from the pseudocode by replicating your \texttt{perm()} function many times: \begin{Shaded} \begin{Highlighting}[] \NormalTok{Dnull =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{5000}\NormalTok{, }\DataTypeTok{expr =} \KeywordTok{perm}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{treatment, }\DataTypeTok{y =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{chl.a))} \end{Highlighting} \end{Shaded} Plot the distribution of the null test statistic and draw a line where the originally-observed difference falls: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{hist}\NormalTok{(Dnull, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{abline}\NormalTok{(}\DataTypeTok{v =}\NormalTok{ Dobs, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{, }\DataTypeTok{lwd =} \DecValTok{3}\NormalTok{, }\DataTypeTok{lty =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-76-1} \end{center} Notice the null distribution is centered on zero: this is because the null hypothesis is that there is no difference. The observation (blue line) falls way in the upper tail of the null distribution, indicating it is unlikely that an effect that large was observed by random chance. 
The two-tailed p-value can be calculated as: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{mean}\NormalTok{(}\KeywordTok{abs}\NormalTok{(Dnull) }\OperatorTok{>=}\StringTok{ }\NormalTok{Dobs)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0 \end{verbatim} Very few (or zero) of the random data sets resulted in a difference greater than what was observed, indicating there is statistical support to the hypothesis that there is a non-zero difference between the two nutrient treatments. \hypertarget{population-dynamics}{% \section{Population dynamics}\label{population-dynamics}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{source}\NormalTok{(}\StringTok{"./R/Rcode/Final_report_Davidson2017.R"}\NormalTok{, }\DataTypeTok{echo =} \OtherTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## > library(boot) ## ## > library(tidyverse) ## ## > library(dplyr) ## ## > library(ggplot2) ## ## > library(qpcR) ## ## > library(pwr) ## ## > library(ggthemes) ## ## > library(gridExtra) ## ## > Data <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > Year <- unique(Data$Calves.1) ## ## > year2010a <- c(3, 3, 2) ## ## > year2010 <- filter(Data, Calves.1 < 2011) ## ## > year2010 <- year2010$Interval.1[!is.na(year2010$Interval.1)] ## ## > year2011a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2) ## ## > year2011 <- filter(Data, Calves.1 < 2012) ## ## > year2011 <- year2011$Interval.1[!is.na(year2011$Interval.1)] ## ## > year2012a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3) ## ## > year2012 <- filter(Data, Calves.1 < 2013) ## ## > year2012 <- year2012$Interval.1[!is.na(year2012$Interval.1)] ## ## > year2013a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3, 6, 5, 4, 4, 4, 4, ## + 4, 3, 3, 3, 3, 3, 3, 3, 3, .... 
[TRUNCATED] ## ## > full <- c(Data$Interval.1, Data$Interval.2) ## ## > year2013 <- full[!is.na(unlist(full))] ## ## > mean2010 <- sum(year2010)/length(year2010) ## ## > s2010 <- sd(year2010) ## ## > SE2010 <- s2010/(sqrt(length(year2010))) ## ## > n2010 <- (length(year2010)) ## ## > low.qt2010 <- mean2010 - (qt(0.975, length(year2010)) * ## + SE2010) ## ## > high.qt2010 <- mean2010 + (qt(0.975, length(year2010)) * ## + SE2010) ## ## > mean2011 <- sum(year2011)/length(year2011) ## ## > s2011 <- sd(year2011) ## ## > SE2011 <- s2011/(sqrt(length(year2011))) ## ## > n2011 <- (length(year2011)) ## ## > low.qt2011 <- mean2011 - (qt(0.975, length(year2011)) * ## + SE2011) ## ## > high.qt2011 <- mean2011 + (qt(0.975, length(year2011)) * ## + SE2011) ## ## > mean2012 <- sum(year2012)/length(year2012) ## ## > s2012 <- sd(year2012) ## ## > SE2012 <- s2012/(sqrt(length(year2012))) ## ## > n2012 <- (length(year2012)) ## ## > low.qt2012 <- mean2012 - (qt(0.975, length(year2012)) * ## + SE2012) ## ## > high.qt2012 <- mean2012 + (qt(0.975, length(year2012)) * ## + SE2012) ## ## > mean2013 <- sum(year2013)/length(year2013) ## ## > s2013 <- sd(year2013) ## ## > SE2013 <- s2013/(sqrt(length(year2013))) ## ## > n2013 <- (length(year2013)) ## ## > low.qt2013 <- mean2013 - (qt(0.975, length(year2013)) * ## + SE2013) ## ## > high.qt2013 <- mean2013 + (qt(0.975, length(year2013)) * ## + SE2013) ## ## > n <- c(length(year2010), length(year2011), length(year2012), ## + length(year2013)) ## ## > mY <- c(mean(year2010), mean(year2011), mean(year2012), ## + mean(year2013)) ## ## > year <- Year ## ## > low.qt <- c(low.qt2010, low.qt2011, low.qt2012, low.qt2013) ## ## > high.qt <- c(high.qt2010, high.qt2011, high.qt2012, ## + high.qt2013) ## ## > sd <- c(s2010, s2011, s2012, s2013) ## ## > sum.dat <- cbind(year, n, mY, low.qt, high.qt, sd) ## ## > sum.dat <- as.data.frame(sum.dat) ## ## > library(knitr) ## ## > kable(sum.dat, format = "markdown") ## ## ## | year| n| mY| low.qt| high.qt| sd| ## |----:|--:|--------:|--------:|--------:|---------:| ## | 2010| 3| 2.666667| 1.605851| 3.727482| 0.5773503| ## | 2011| 15| 2.866667| 2.673022| 3.060312| 0.3518658| ## | 2012| 25| 3.240000| 2.919170| 3.560830| 0.7788881| ## | 2013| 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > ggplot(sum.dat, aes(y = mY, x = year)) + geom_point() + ## + geom_line() + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), ## + width = 0.1) + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-1} \end{center} \begin{verbatim} ## ## > par(mfrow = c(2, 2)) ## ## > plot(factor(year2010), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "a)", sub = "Sample size 3", ylab = "Frequency", ## + xlab = "Calving interval", cex.main = 1.5, font.main = 4, ## + col.main = "bl ..." ... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2011), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "b)", sub = "Sample size 15", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2012), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "c)", sub = "Sample size 25", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... 
[TRUNCATED] ## ## > box() ## ## > plot(factor(year2013), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-2} \end{center} \begin{verbatim} ## ## > title(main = "d)", sub = "Sample size 45", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > library(qpcR) ## ## > rawdata <- qpcR:::cbind.na(year2010, year2011, year2012, ## + year2013) ## ## > rawdata <- as.data.frame(rawdata) ## ## > year2010 <- data.frame(year2010, year = c("2010")) ## ## > year2010 <- rename(year2010, interval = year2010, ## + year = year) ## ## > year2011 <- data.frame(year2011, year = c("2011")) ## ## > year2011 <- rename(year2011, interval = year2011, ## + year = year) ## ## > year2012 <- data.frame(year2012, year = c("2012")) ## ## > year2012 <- rename(year2012, interval = year2012, ## + year = year) ## ## > year2013 <- data.frame(year2013, year = c("2013")) ## ## > year2013 <- rename(year2013, interval = year2013, ## + year = year) ## ## > ggplotraw <- rbind(year2010, year2011, year2012, year2013) ## ## > ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval)) ## ## > ggplot(year2013, aes(x = interval)) + geom_bar(alpha = 1, ## + width = 0.9, fill = "black") + xlab(expression("Calving" ~ ## + "interval" ~ (ita .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-3} \end{center} \begin{verbatim} ## ## > RealCI <- as.numeric(year2013$interval) ## ## > xlong <- RealCI ## ## > meanlong <- sum(xlong)/length(xlong) ## ## > slong <- sd(xlong) ## ## > SElong <- slong/(sqrt(length(xlong))) ## ## > nlong <- (length(xlong)) ## ## > lowqtlong <- meanlong - (qt(0.975, nlong) * SElong) ## ## > highqtlong <- meanlong + (qt(0.975, nlong) * SElong) ## ## > MedCI <- c(RealCI[RealCI < 5], 3, 3, 3, 3, 2, 3) ## ## > xmed <- MedCI ## ## > meanmed <- sum(xmed)/length(xmed) ## ## > smed <- sd(xmed) ## ## > SEmed <- smed/(sqrt(length(xmed))) ## ## > nmed <- (length(xmed)) ## ## > lowqtmed <- meanmed - (qt(0.975, length(xmed)) * SEmed) ## ## > highqtmed <- meanmed + (qt(0.975, length(xmed)) * ## + SEmed) ## ## > LowCI <- c(RealCI[RealCI < 4], 3, 3, 3, 3, 3, 2, 2, ## + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2) ## ## > xshort <- LowCI ## ## > meanshort <- mean(xshort) ## ## > sshort <- sd(xshort) ## ## > SEshort <- sshort/(sqrt(length(xshort))) ## ## > lowqtshort <- meanshort - (qt(0.975, length(xshort)) * ## + SEshort) ## ## > highqtshort <- meanshort + (qt(0.975, length(xshort)) * ## + SEshort) ## ## > bdata <- qpcR:::cbind.na(RealCI, MedCI, LowCI) ## ## > bdata <- as.data.frame(bdata) ## ## > par(mfrow = c(1, 3)) ## ## > plot(factor(bdata$LowCI), main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$MedCI), main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$RealCI), main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-4} \end{center} \begin{verbatim} ## ## > par(mfrow = c(3, 1)) ## ## > plot(density(as.numeric(as.character(LowCI)), bw = 0.5), ## + main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(density(as.numeric(as.character(MedCI)), bw = 0.5), ## + main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > 
plot(density(as.numeric(as.character(RealCI)), bw = 0.5), ## + main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-5} \end{center} \begin{verbatim} ## ## > Sumtable <- data.frame(variable = c("low.qt", "mean", ## + "high.qt", "sd", "SE"), short = c(lowqtshort, meanshort, ## + highqtshort, sshort, SE .... [TRUNCATED] ## ## > n <- c(length(LowCI), length(MedCI), length(year2013$interval)) ## ## > mY <- c(mean(LowCI), mean(MedCI), mean(year2013$interval)) ## ## > interval <- c("Low", "Medium", "Observed") ## ## > low.qt <- c(lowqtshort, lowqtmed, low.qt2013) ## ## > high.qt <- c(highqtshort, highqtmed, high.qt2013) ## ## > sd <- c(sshort, smed, s2013) ## ## > Sumtable <- cbind(interval, n, mY, low.qt, high.qt, ## + sd) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-6} \end{center} \begin{verbatim} ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:--------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > library(knitr) ## ## > srwdat <- read.csv(file = "./R/Data/srw_data.csv") ## ## > kable(srwdat, format = "markdown", col.names = c("Sample size", ## + "Mean", "Lower limit", "Higher limit", "SE", "Author", "Location")) ## ## ## | Sample size| Mean| Lower limit| Higher limit| SE|Author |Location | ## |-----------:|----:|-----------:|------------:|----:|:------------------|:---------------------------------| ## | NA| 3.12| 3.07| 3.17| NA|Best et al. 2001 |South Africa | ## | 1504| 3.15| 3.11| 3.18| NA|Best et al. 2005 |South Africa (1971-2003 Updated) | ## | NA| 3.16| 3.13| 3.19| NA|Brandao et al 2010 |South Africa ( 1971-2006 Updated) | ## | NA| 3.35| NA| NA| 0.05|Cooke et al. 2001 |Argentina | ## | 749| 3.42| NA| NA| 0.11|Cooke et al. 
2003 |Argentina | ## | NA| 3.63| NA| NA| 0.13|Burnell 2001 |Australia | ## ## > SAreps <- 1500 ## ## > ARreps <- 800 ## ## > Aussiereps <- 2000 ## ## > low <- 1000 ## ## > verylow <- 100 ## ## > lowest <- 10 ## ## > par(mfrow = c(2, 3)) ## ## > plot(factor(sample(year2013$interval, lowest, replace = T)), ## + main = "3 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, verylow, replace = T)), ## + main = "10 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, low, replace = T)), ## + main = "30 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, Aussiereps, ## + replace = T)), main = "500 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, ARreps, replace = T)), ## + main = "800 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, SAreps, replace = T)), ## + main = "1500 intervals") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-7} \end{center} \begin{verbatim} ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > var10 <- paste0("n_", 1:10) ## ## > sample10 <- matrix(data = NA, ncol = lowest, nrow = boots) ## ## > colnames(sample10) <- as.list(var10) ## ## > for (i in 1:boots) { ## + sample10[i, ] <- sample(year2013$interval, lowest, replace = T) ## + } ## ## > sample10 <- as.data.frame(sample10) ## ## > sample10 <- sample10 %>% mutate(mean10 = rowMeans(sample10)) ## ## > sample10t <- as.matrix(sample10) ## ## > sample10t <- t(sample10t) ## ## > var100 <- paste0("n_", 1:100) ## ## > sample100 <- matrix(data = NA, ncol = verylow, nrow = boots) ## ## > colnames(sample100) <- as.list(var100) ## ## > for (i in 1:boots) { ## + sample100[i, ] <- sample(year2013$interval, verylow, replace = T) ## + } ## ## > sample100 <- as.data.frame(sample100) ## ## > sample100 <- sample100 %>% mutate(mean100 = rowMeans(sample100)) ## ## > var500 <- paste0("n_", 1:500) ## ## > sample500 <- matrix(data = NA, ncol = 500, nrow = boots) ## ## > colnames(sample500) <- as.list(var500) ## ## > for (i in 1:boots) { ## + sample500[i, ] <- sample(year2013$interval, 500, replace = T) ## + } ## ## > sample500 <- as.data.frame(sample500) ## ## > sample500 <- sample500 %>% mutate(mean500 = rowMeans(sample500)) ## ## > var1000 <- paste0("n_", 1:1000) ## ## > sample1000 <- matrix(data = NA, ncol = low, nrow = boots) ## ## > colnames(sample1000) <- as.list(var1000) ## ## > for (i in 1:boots) { ## + sample1000[i, ] <- sample(year2013$interval, low, replace = T) ## + } ## ## > sample1000 <- as.data.frame(sample1000) ## ## > sample1000 <- sample1000 %>% mutate(mean1000 = rowMeans(sample1000)) ## ## > varA <- paste0("n_", 1:2000) ## ## > sampleA <- matrix(data = NA, ncol = Aussiereps, nrow = boots) ## ## > colnames(sampleA) <- as.list(varA) ## ## > for (i in 1:boots) { ## + sampleA[i, ] <- sample(year2013$interval, Aussiereps, replace = T) ## + } ## ## > sampleA <- as.data.frame(sampleA) ## ## > sampleA <- sampleA %>% mutate(meanA = rowMeans(sampleA)) ## ## > sampleAt <- t(sampleA) ## ## > for (i in c(1:ncol(sampleA))) { ## + sampleA[, i] <- as.numeric(as.character(sampleA[, i])) ## + } ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > par(mfrow = c(1, 1)) ## ## > 
plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 1, main = "", lwd = 5, ylim = c(0, 8), xlim = c(2, ## + 4.5), axes = FAL .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-8} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 2, lwd = 4) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 3, lwd = 3) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 4, lwd = 2) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > legend("topright", title = "Legend", c("n=10, cv=8.12 ", ## + "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"), ## + b .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 3, main = "", lwd = 1, ylim = c(0, 8), xlim = c(2.5, ## + 4.5), axes = F .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-9} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 4, lwd = 1) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 2, lwd = 1) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 1, lwd = 2) ## ## > legend(y = 8, x = 3.9, title = expression(bold("Sample size (n)")), ## + c(expression(italic("n") ~ "=" ~ "10"), expression(italic("n") ~ ## + .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > pwr.test.results <- power.t.test(n = 45, delta = seq(0, ## + 0.99, 0.001), sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.0 .... [TRUNCATED] ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$delta)) ## ## > colnames(pwr.analysis) <- c("Power", "Mean.difference") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power, Mean.estimate = 3.31 + Mean.difference) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Mean.difference Alpha Mean.estimate ## 1 0.9501505 0.593 0.04984946 3.903 ## ## > ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 3.903, ## + col = "blue" .... 
[TRUNCATED] \end{verbatim} \begin{verbatim} ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > diff <- 3.63 - 3.31 ## ## > pwr.test.results <- power.t.test(n = seq(1, 200, 1), ## + delta = diff, sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.05) \end{verbatim} \begin{verbatim} ## Warning in qt(sig.level/tside, nu, lower.tail = FALSE): NaNs produced \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-10} \end{center} \begin{verbatim} ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$n)) ## ## > colnames(pwr.analysis) <- c("Power", "Sample.size") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Sample.size Alpha ## 1 0.9503366 153 0.0496634 ## ## > ggplot(data = pwr.analysis.1, aes(x = Sample.size, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 45, ## + col = "red") + ge .... [TRUNCATED] \end{verbatim} \begin{verbatim} ## Warning: Removed 1 rows containing missing values (geom_path). \end{verbatim} \begin{verbatim} ## ## > dat <- read.csv("./R/Data/raw_observations_2012.csv") ## ## > glimpse(dat) ## Observations: 180 ## Variables: 10 ## $ ID <fct> AI06006, AI06007, AI06015, AI06022, AI06038, AI100... ## $ X2006 <int> 1, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2007 <int> 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2008 <int> 1, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,... ## $ X2009 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2010 <int> 0, 0, 2, 0, 0, 6, 6, 5, 5, 3, 4, 2, 5, 4, 5, 3, 2,... ## $ X2011 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2012 <int> 0, 1, 2, 4, 0, 0, 0, 0, 0, 0, 5, 0, 0, 12, 0, 0, 0... ## $ total <int> 2, 4, 8, 5, 3, 6, 7, 5, 5, 3, 9, 2, 5, 16, 6, 3, 2... ## $ X..yrs.seen <int> 2, 3, 5, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1,... ## ## > head(dat) ## ID X2006 X2007 X2008 X2009 X2010 X2011 X2012 total X..yrs.seen ## 1 AI06006 1 0 1 0 0 0 0 2 2 ## 2 AI06007 2 1 0 0 0 0 1 4 3 ## 3 AI06015 2 1 1 0 2 0 2 8 5 ## 4 AI06022 1 0 0 0 0 0 4 5 2 ## 5 AI06038 1 0 2 0 0 0 0 3 2 ## 6 AI10040 0 0 0 0 6 0 0 6 1 ## ## > dat1 <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > glimpse(dat1) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... 
## ## > dat3 <- dplyr::select(dat, ID, X2006:X2012) %>% gather(year, ## + count, X2006:X2012) ## ## > dat4 <- full_join(dat3, dat1, by = "ID") \end{verbatim} \begin{verbatim} ## Warning: Column `ID` joining factors with different levels, coercing to ## character vector \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-11} \end{center} \begin{verbatim} ## ## > dat5 <- dplyr::select(dat4, ID, year, count, Yr.first.seen, ## + Calves, Calves.1, Calves.2) ## ## > dat6 <- filter(dat5, count > 0) ## ## > glimpse(dat6) ## Observations: 237 ## Variables: 7 ## $ ID <chr> "AI06006", "AI06007", "AI06015", "AI06022", "AI0... ## $ year <chr> "X2006", "X2006", "X2006", "X2006", "X2006", "X2... ## $ count <int> 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ## $ Yr.first.seen <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves.1 <int> NA, NA, NA, 2012, NA, NA, NA, 2013, 2010, NA, NA... ## $ Calves.2 <int> NA, NA, NA, NA, NA, NA, NA, NA, 2013, NA, NA, 20... ## ## > dat7 <- mutate(dat6, year = ifelse(year == "X2006", ## + "2006", year), year = ifelse(year == "X2007", "2007", year), ## + year = ifelse(year == .... [TRUNCATED] ## ## > a <- group_by(dat7, ID, Yr.first.seen) %>% mutate(mother = ifelse(Yr.first.seen > ## + 0, 1, 0)) %>% filter(mother == 1) %>% ungroup() %>% dplyr:: .... [TRUNCATED] ## ## > a ## # A tibble: 1 x 4 ## ID year Calves Calves.1 ## <chr> <chr> <int> <int> ## 1 AI09216 2007 2009 2011 ## ## > greater.than.2 <- sample.true[sample.true > 2] ## ## > mean.2 <- sum(greater.than.2)/length(greater.than.2) ## ## > s.2 <- sd(greater.than.2) ## ## > SE.2 <- s2013/(sqrt(length(greater.than.2))) ## ## > n.2 <- length(greater.than.2) ## ## > low.qt.2 <- mean.2 - (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > high.qt.2 <- mean.2 + (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > Sumtable[4, ] <- c("miss2year", n.2, mean.2, low.qt.2, ## + high.qt.2, sd(greater.than.2)) ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > detect1 <- 44 ## ## > detect2 <- 42 ## ## > detect3 <- 40 ## ## > sample2 <- rep(NA, 1000) ## ## > sample5 <- rep(NA, 1000) ## ## > sample10 <- rep(NA, 1000) ## ## > for (i in 1:boots) { ## + sample2[i] <- mean(sample(year2013$interval, detect1, replace = T)) ## + sample5[i] <- mean(sample(year2013$interval, de .... [TRUNCATED] ## ## > sample2 <- sort(sample2) ## ## > sample2.2.5 <- sample2[25] ## ## > sample2.50 <- sample2[500] ## ## > sample2.975 <- sample2[975] ## ## > sample5 <- sort(sample5) ## ## > sample5.2.5 <- sample5[25] ## ## > sample5.50 <- sample5[500] ## ## > sample5.975 <- sample5[975] ## ## > sample10 <- sort(sample10) ## ## > sample10.2.5 <- sample10[25] ## ## > sample10.50 <- sample10[500] ## ## > sample10.975 <- sample10[975] ## ## > Sumtable[5, ] <- c("detect1", detect1, sample2.50, ## + sample2.2.5, sample2.975, NA) ## ## > Sumtable[6, ] <- c("detect2", detect2, sample5.50, ## + sample5.2.5, sample5.975, NA) ## ## > Sumtable[7, ] <- c("detect5", detect3, sample10.50, ## + sample10.2.5, sample10.975, NA) ## ## > length(Data$ID) ## [1] 41 ## ## > length(dat$ID) ## [1] 180 ## ## > glimpse(Data) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... 
## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... ## ## > dat.detect <- dplyr::select(Data, ID, Calves, Calves.1, ## + Calves.2) %>% mutate(Calves = factor(Calves), Calves.1 = factor(Calves.1), ## + Cal .... [TRUNCATED] ## ## > a <- as.data.frame.matrix(table(Data$ID, Data$Calves)) ## ## > head(a) ## 2006 2007 2008 2009 2010 2011 ## AI06022 1 0 0 0 0 0 ## AI08340 0 0 1 0 0 0 ## AI08341 0 0 1 0 0 0 ## AI08343 0 0 1 0 0 0 ## AI08355 0 0 1 0 0 0 ## AI08362 0 0 1 0 0 0 ## ## > a[, 7] <- row.names(a) ## ## > colnames(a)[1] <- "y2006" ## ## > colnames(a)[2] <- "y2007" ## ## > colnames(a)[3] <- "y2008" ## ## > colnames(a)[4] <- "y2009" ## ## > colnames(a)[5] <- "y2010" ## ## > colnames(a)[6] <- "y2011" ## ## > colnames(a)[7] <- "ID" ## ## > a[, 8] <- 0 ## ## > colnames(a)[8] <- "y2012" ## ## > a[, 9] <- 0 ## ## > colnames(a)[9] <- "y2013" ## ## > a <- dplyr::select(a, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > b <- as.data.frame.matrix(table(Data$ID, Data$Calves.1)) ## ## > head(b) ## 2010 2011 2012 2013 ## AI06022 0 0 1 0 ## AI08340 0 1 0 0 ## AI08341 0 1 0 0 ## AI08343 0 0 0 1 ## AI08355 0 1 0 0 ## AI08362 0 1 0 0 ## ## > b[, 5] <- row.names(b) ## ## > colnames(b)[5] <- "ID" ## ## > b[, 6] <- 0 ## ## > colnames(b)[6] <- "y2006" ## ## > b[, 7] <- 0 ## ## > colnames(b)[7] <- "y2007" ## ## > b[, 8] <- 0 ## ## > colnames(b)[8] <- "y2008" ## ## > b[, 9] <- 0 ## ## > colnames(b)[9] <- "y2009" ## ## > colnames(b)[1] <- "y2010" ## ## > colnames(b)[2] <- "y2011" ## ## > colnames(b)[3] <- "y2012" ## ## > colnames(b)[4] <- "y2013" ## ## > b <- dplyr::select(b, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > c <- as.data.frame.matrix(table(Data$ID, Data$Calves.2)) ## ## > head(c) ## 2013 ## AI06022 0 ## AI08340 0 ## AI08341 0 ## AI08343 0 ## AI08355 0 ## AI08362 0 ## ## > colnames(c)[1] <- "y2013" ## ## > c[, 2] <- row.names(c) ## ## > colnames(c)[2] <- "ID" ## ## > c[, 3] <- 0 ## ## > colnames(c)[3] <- "y2006" ## ## > c[, 4] <- 0 ## ## > colnames(c)[4] <- "y2007" ## ## > c[, 5] <- 0 ## ## > colnames(c)[5] <- "y2008" ## ## > c[, 6] <- 0 ## ## > colnames(c)[6] <- "y2009" ## ## > c[, 7] <- 0 ## ## > colnames(c)[7] <- "y2010" ## ## > c[, 8] <- 0 ## ## > colnames(c)[8] <- "y2011" ## ## > c[, 9] <- 0 ## ## > colnames(c)[9] <- "y2012" ## ## > c <- dplyr::select(c, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > countdat <- rbind(a, b, c) ## ## > glimpse(countdat) ## Observations: 123 ## Variables: 9 ## $ ID <chr> "AI06022", "AI08340", "AI08341", "AI08343", "AI08355", "... ## $ y2006 <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2007 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2008 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,... ## $ y2009 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,... ## $ y2010 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2011 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2012 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2013 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... 
## ## > full.dat <- group_by(countdat, ID) %>% summarise(y2006 = sum(y2006), ## + y2007 = sum(y2007), y2008 = sum(y2008), y2009 = sum(y2009), ## + y2010 .... [TRUNCATED] ## ## > 2012 - 2006 ## [1] 6 ## ## > sort(Data$ID) ## [1] AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 AI08364 AI08365 ## [9] AI08372 AI08378 AI08379 AI08383 AI08386 AI08387 AI08390 AI08395 ## [17] AI08403 AI09216 AI09217 AI09221 AI09224 AI09225 AI09247 AI09249 ## [25] AI09259 AI09265 AI09289 AI10043 AI10056 AI10070 AI10085 AI10086 ## [33] AI10102 AI10124 AI10144 AI10160 AI10167 AI10170 AI10177 AI11408 ## [41] AI11430 ## 41 Levels: AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 ... AI11430 ## ## > filter(Data, ID == "AI06022") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI06022 2006 2006 2012 NA 6 NA NA ## ## > filter(Data, ID == "AI08340") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08340 2008 2008 2011 NA 3 NA NA ## ## > filter(Data, ID == "AI08343") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08343 2008 2008 2013 NA 5 NA NA ## ## > head(Data) ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI10124 2007 2007 2010 2013 3 3 NA ## 2 AI10070 2008 2008 2010 2013 2 3 NA ## 3 AI10086 2007 2007 2010 2013 3 3 NA ## 4 AI08340 2008 2008 2011 NA 3 NA NA ## 5 AI08341 2008 2008 2011 NA 3 NA NA ## 6 AI08355 2008 2008 2011 NA 3 NA NA ## ## > longer5.6 <- c(sample.true, 5, 6, 6) ## ## > mean.56 <- sum(longer5.6)/length(longer5.6) ## ## > s.56 <- sd(longer5.6) ## ## > SE.56 <- s.56/(sqrt(length(longer5.6))) ## ## > n.56 <- (length(longer5.6)) ## ## > low.qt.56 <- mean.56 - (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > high.qt.56 <- mean.56 + (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > Sumtable[8, ] <- c("longer.56", n.56, mean.56, low.qt.56, ## + high.qt.56, sd(longer5.6)) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:---------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## |miss2year | 41| 3.439024| 3.171549| 3.706499| 0.7761695| ## |detect1 | 44| 3.295454| 3.090909| 3.545454| NA| ## |detect2 | 42| 3.309524| 3.071429| 3.571429| NA| ## |detect5 | 40| 3.300000| 3.075000| 3.575000| NA| ## |longer.56 | 48| 3.458333| 3.165307| 3.751360| 1.0097047| ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... 
[TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-78-12} \end{center} \hypertarget{plot-steps}{% \subsection{Plot steps}\label{plot-steps}} \begin{Shaded} \begin{Highlighting}[] \CommentTok{## ----raw graph, echo=FALSE, message=FALSE, warning=FALSE-----------------} \CommentTok{#plot data} \KeywordTok{ggplot}\NormalTok{(sum.dat, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y =}\NormalTok{ mY, }\DataTypeTok{x =}\NormalTok{ year)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_point}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_line}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_errorbar}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{ymin =}\NormalTok{ low.qt, }\DataTypeTok{ymax =}\NormalTok{ high.qt), }\DataTypeTok{width =} \FloatTok{0.1}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_bw}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-79-1} \end{center} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# ## ----raw graph 2, echo=FALSE, fig.height=6, fig.width=6, message=FALSE, warning=FALSE----} \CommentTok{# } \CommentTok{# #PLOTS} \CommentTok{# par(mfrow=c(2,2))} \CommentTok{# } \CommentTok{# plot(factor(year2010),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="a)",sub="Sample size 3", ylab="Frequency",xlab="Calving interval",} \CommentTok{# cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2011),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="b)",sub="Sample size 15", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2012),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="c)",sub="Sample size 25", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2013),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="d)",sub="Sample size 45", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----raw graph 3, echo=FALSE, fig.height=6, fig.width=6, message=TRUE, warning=TRUE----} \CommentTok{# library(qpcR)} \CommentTok{# #data in one way for plot} \CommentTok{# rawdata <- qpcR:::cbind.na(year2010,year2011,year2012,year2013)} \CommentTok{# rawdata <- as.data.frame(rawdata)} \CommentTok{# } \CommentTok{# #in correct format for ggplot2} \CommentTok{# year2010 <- data.frame(year2010,year = c("2010"))} \CommentTok{# year2010 <- rename(year2010, interval = year2010, year = year )} \CommentTok{# year2011 <- data.frame(year2011,year = c("2011"))} \CommentTok{# year2011 <- rename(year2011, interval = year2011, year = year )} \CommentTok{# year2012 <- data.frame(year2012,year = c("2012"))} \CommentTok{# year2012 <- rename(year2012, interval = year2012, year = year )} \CommentTok{# year2013 <- data.frame(year2013,year = c("2013"))} \CommentTok{# year2013 <- rename(year2013, interval = year2013, year = year 
)} \CommentTok{# ggplotraw <- rbind(year2010,year2011,year2012, year2013)} \CommentTok{# ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval))} \CommentTok{# } \CommentTok{# #sort(year2013$interval) - sort(sample.true)} \CommentTok{# } \CommentTok{# } \CommentTok{# ggplot(year2013,aes(x = interval)) +} \CommentTok{# geom_bar(alpha = 1, width = 0.9,fill = "black") +} \CommentTok{# xlab(expression("Calving"~"interval"~(italic("years")))) +} \CommentTok{# ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# scale_y_continuous(breaks = c(0,5,10,15,20,25,30), limits = c(0,30)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black"))} \CommentTok{# #PLOTS} \CommentTok{# #code to store figure} \CommentTok{# # png("Figure_2_NZSRW_calving_interval_2017_highres.png", width = 12, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals, echo=FALSE, fig.height=10, message=FALSE, warning=FALSE----} \CommentTok{# #################################Missing calving intervals################} \CommentTok{# #Intervals modified by accounting for missed intervals} \CommentTok{# #Bradford et al. 
2008} \CommentTok{# } \CommentTok{# #Raw Data} \CommentTok{# RealCI <- as.numeric(year2013$interval)} \CommentTok{# } \CommentTok{# #Confidence interval} \CommentTok{# xlong <- RealCI} \CommentTok{# meanlong<-sum(xlong)/length(xlong)} \CommentTok{# slong<-sd(xlong)} \CommentTok{# SElong<-slong/(sqrt(length(xlong)))} \CommentTok{# nlong<-(length(xlong))} \CommentTok{# #Standard error and confidence intervals} \CommentTok{# #2 sided t value at the 95% level = 2.093} \CommentTok{# lowqtlong <- meanlong-(qt(0.975,nlong)*SElong)} \CommentTok{# highqtlong <- meanlong+(qt(0.975,nlong)*SElong)} \CommentTok{# } \CommentTok{# ####################MED CI########################################} \CommentTok{# # 2x 6's and 1x 5 replaced with 3threes} \CommentTok{# MedCI <- c(RealCI[RealCI < 5],3,3,3,3,2,3)} \CommentTok{# #sort(MedCI)} \CommentTok{# xmed<-MedCI} \CommentTok{# meanmed<-sum(xmed)/length(xmed)} \CommentTok{# smed<-sd(xmed)} \CommentTok{# SEmed<-smed/(sqrt(length(xmed)))} \CommentTok{# nmed<-(length(xmed))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtmed <- meanmed-(qt(0.975,length(xmed))*SEmed)} \CommentTok{# highqtmed <- meanmed+(qt(0.975,length(xmed))*SEmed)} \CommentTok{# } \CommentTok{# } \CommentTok{# ############################SHORT CI##################################} \CommentTok{# #6,5 replaced with 2 year intervals} \CommentTok{# } \CommentTok{# LowCI <- c(RealCI[RealCI < 4],3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2)} \CommentTok{# xshort<-LowCI} \CommentTok{# meanshort<-mean(xshort)} \CommentTok{# sshort<-sd(xshort)} \CommentTok{# SEshort<-sshort/(sqrt(length(xshort)))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtshort <- meanshort-(qt(0.975,length(xshort))*SEshort)} \CommentTok{# highqtshort <- meanshort+(qt(0.975,length(xshort))*SEshort)} \CommentTok{# } \CommentTok{# bdata <-qpcR:::cbind.na(RealCI,MedCI,LowCI)} \CommentTok{# bdata <- as.data.frame(bdata)} \CommentTok{# } \CommentTok{# #Structure of data set} \CommentTok{# #str(bdata)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot, echo=FALSE, fig.height=3.5, fig.width=5.5, message=FALSE, warning=FALSE----} \CommentTok{# #Basic plots} \CommentTok{# par(mfrow=c(1,3))} \CommentTok{# plot(factor(bdata$LowCI),main="Lowest possible interval")} \CommentTok{# plot(factor(bdata$MedCI), main="Medium possible interval")} \CommentTok{# plot(factor(bdata$RealCI),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot2, fig.height=5.5, fig.width=4.5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #Density basic plots} \CommentTok{# par(mfrow=c(3,1))} \CommentTok{# plot(density(as.numeric(as.character(LowCI)),bw=.5), main="Lowest possible interval")} \CommentTok{# plot(density(as.numeric(as.character(MedCI)),bw= 0.5), main="Medium possible interval")} \CommentTok{# plot(density(as.numeric(as.character(RealCI)),bw = 0.5),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals table, fig.height=8, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# } \CommentTok{# ###################################SUMMARY############################} \CommentTok{# #Pull out important information} \CommentTok{# Sumtable<-data.frame(variable = c("low.qt","mean","high.qt","sd", "SE"), short=c(lowqtshort,meanshort,highqtshort,sshort,SEshort),} \CommentTok{# medium=c(lowqtmed,meanmed,highqtmed,smed,SEmed),} 
\CommentTok{# real=c(lowqtlong,meanlong,highqtlong,slong,SElong))} \CommentTok{# } \CommentTok{# #Make dataframe to plot} \CommentTok{# n <- c(length(LowCI),length(MedCI),length(year2013$interval))} \CommentTok{# mY <- c(mean(LowCI),mean(MedCI),mean(year2013$interval))} \CommentTok{# interval <-c("Low", "Medium","Observed")} \CommentTok{# low.qt <- c(lowqtshort,lowqtmed,low.qt2013)} \CommentTok{# high.qt <- c(highqtshort,highqtmed,high.qt2013)} \CommentTok{# sd <- c(sshort,smed,s2013)} \CommentTok{# Sumtable <- cbind(interval,n,mY,low.qt,high.qt,sd)} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot3, echo=FALSE, fig.height=4, message=FALSE, warning=FALSE----} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 3.6, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table, echo=FALSE--------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----srw_data_table, echo=FALSE------------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# setwd("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data")} \CommentTok{# srwdat <- read.csv(file = "srw_data.csv")} \CommentTok{# } \CommentTok{# #str(srwdat)} \CommentTok{# kable(srwdat, format = "markdown",col.names = c("Sample size","Mean", "Lower limit", "Higher limit", "SE","Author", "Location"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap single, echo=FALSE, fig.height=5--------------------------} \CommentTok{# ############################NZ Simple sample##############################} \CommentTok{# #WITH replacement} \CommentTok{# } \CommentTok{# # to try and match number of intervals observed in other populations} \CommentTok{# # find references} \CommentTok{# SAreps <- 1500} \CommentTok{# ARreps <- 800} \CommentTok{# Aussiereps <- 2000} \CommentTok{# low <- 1000} \CommentTok{# verylow <- 100} \CommentTok{# lowest <- 10} \CommentTok{# } \CommentTok{# #Very raw plots} \CommentTok{# par(mfrow=c(2,3))} \CommentTok{# plot(factor(sample(year2013$interval,lowest,replace=T)),main = "3 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,verylow,replace=T)),main = "10 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,low,replace=T)),main = "30 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,Aussiereps,replace=T)),main = "500 intervals")} \CommentTok{# 
plot(factor(sample(year2013$interval,ARreps,replace=T)),main = "800 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,SAreps,replace=T)),main = "1500 intervals")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap_multiple, echo=FALSE--------------------------------------} \CommentTok{# #do each one 1000 times} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# } \CommentTok{# ###########################n10} \CommentTok{# var10 <- paste0("n_", 1:10)} \CommentTok{# sample10 <-matrix(data = NA, ncol = lowest, nrow = boots)} \CommentTok{# colnames(sample10) <- as.list(var10)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample10 [i, ] <- sample(year2013$interval,lowest,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample10 <- as.data.frame(sample10)} \CommentTok{# sample10 <- sample10 %>%} \CommentTok{# mutate(mean10 = rowMeans(sample10))} \CommentTok{# } \CommentTok{# sample10t <- as.matrix(sample10)} \CommentTok{# sample10t <-t(sample10t)} \CommentTok{# } \CommentTok{# #########################verylow sample size} \CommentTok{# #set up variable names} \CommentTok{# var100 <- paste0("n_", 1:100)} \CommentTok{# } \CommentTok{# sample100 <-matrix(data = NA, ncol = verylow, nrow = boots)} \CommentTok{# colnames(sample100) <- as.list(var100)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample100 [i, ] <- sample(year2013$interval,verylow,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample100 <- as.data.frame(sample100)} \CommentTok{# sample100 <- sample100 %>%} \CommentTok{# mutate(mean100 = rowMeans(sample100))} \CommentTok{# } \CommentTok{# #########################middle one} \CommentTok{# #set up variable names} \CommentTok{# var500 <- paste0("n_", 1:500)} \CommentTok{# } \CommentTok{# sample500 <-matrix(data = NA, ncol = 500, nrow = boots)} \CommentTok{# colnames(sample500) <- as.list(var500)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample500 [i, ] <- sample(year2013$interval,500,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample500 <- as.data.frame(sample500)} \CommentTok{# sample500 <- sample500 %>%} \CommentTok{# mutate(mean500 = rowMeans(sample500))} \CommentTok{# } \CommentTok{# } \CommentTok{# #########################low sample size} \CommentTok{# #set up variable names} \CommentTok{# var1000 <- paste0("n_", 1:1000)} \CommentTok{# } \CommentTok{# sample1000 <-matrix(data = NA, ncol = low, nrow = boots)} \CommentTok{# colnames(sample1000) <- as.list(var1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample1000 [i, ] <- sample(year2013$interval,low,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample1000 <- as.data.frame(sample1000)} \CommentTok{# sample1000 <- sample1000 %>%} \CommentTok{# mutate(mean1000 = rowMeans(sample1000))} \CommentTok{# } \CommentTok{# #########################AUS sample size} \CommentTok{# #set up variable names} \CommentTok{# varA <- paste0("n_", 1:2000)} \CommentTok{# } \CommentTok{# sampleA <-matrix(data = NA, ncol = Aussiereps, nrow = boots)} \CommentTok{# colnames(sampleA) <- as.list(varA)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sampleA [i, ] <- sample(year2013$interval,Aussiereps,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sampleA <- as.data.frame(sampleA)} \CommentTok{# sampleA <- sampleA %>%} \CommentTok{# mutate(meanA = rowMeans(sampleA))} \CommentTok{# } \CommentTok{# sampleAt 
<- t(sampleA)} \CommentTok{# } \CommentTok{# for(i in c(1:ncol(sampleA))) \{} \CommentTok{# sampleA[,i] <- as.numeric(as.character(sampleA[,i]))} \CommentTok{# \}} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# #COnfidence intervals} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap plot2, fig.height=5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #plot the data over each other to look at change in density} \CommentTok{# par(mfrow=c(1,1))} \CommentTok{# #plot(density(sample3$mean3,bw = .15),lwd = 3,lyt = 5, main = "", xlab = "Calving interval", box = FALSE,axis = FALSE)} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 1, main = "", lwd = 5,ylim = c(0,8),xlim = c(2,4.5), axes=FALSE,xlab = "Calving interval")} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 2, lwd = 4)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 3, lwd = 3)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 4, lwd = 2)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# legend('topright',title = "Legend", c("n=10, cv=8.12 ", "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"),bty = "n",} \CommentTok{# lty = c(1,2,3,4,5), lwd = c(5,4,3,2,1), cex=.75)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----final plot for publication1, echo=FALSE-----------------------------} \CommentTok{# #final [plot]} \CommentTok{# #size defined by NZJFMR} \CommentTok{# # 195 mm (h) ? 
148 mm (w).} \CommentTok{# #ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE, xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# lty = c(3,4,5,2,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# # PLOT CODE FOR PUBLICATION} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_lowres.png", width = 14.8, height = 14.8, units = 'cm', res = 400)} \CommentTok{# # dev.off()} \CommentTok{# #} \CommentTok{# #} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_highres.png", width = 14.8, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# #} \CommentTok{# # plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE,xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# # lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# # lines(density(sample500$mean500,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# # lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# # lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# # legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# # lty = c(3,4,2,5,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# # axis(1,lwd=2)} \CommentTok{# # axis(2,lwd=2)} \CommentTok{# #} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1, echo=TRUE----------------------------------------} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = 45,# sample size} \CommentTok{# delta = seq(0,0.99,0.001), #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #sort data 
into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$delta))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Mean.difference")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power,} \CommentTok{# Mean.estimate = 3.31 + Mean.difference)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 3.903, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Raw data result plot (n = 45)")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_2_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #difference} \CommentTok{# diff <- 3.63-3.31 #observed mean of australian population} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = seq(1,200,1),# sample size} \CommentTok{# delta = diff, #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$n))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Sample.size")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Sample.size, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 45, col = "red") +} \CommentTok{# geom_vline(xintercept = 153, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# 
scale_y_continuous(limits = c(0,1)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Observed difference between Australian and NZ mean")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 1, echo=FALSE------------------------------------} \CommentTok{# dat <- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/raw_observations_2012.csv")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat)} \CommentTok{# head(dat)} \CommentTok{# #And the second dataset} \CommentTok{# dat1<- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/RawCI.csv", header=T, quote="\textbackslash{}"")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat1)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 2, echo=FALSE, message=FALSE, warning=FALSE------} \CommentTok{# ##I can then modify this data to} \CommentTok{# #restructure dataset of capture to long dataset} \CommentTok{# dat3 <- dplyr::select(dat, ID, X2006:X2012)%>%} \CommentTok{# gather(year, count,X2006:X2012)} \CommentTok{# } \CommentTok{# #add data on calves} \CommentTok{# dat4 <- full_join(dat3,dat1, by = "ID")} \CommentTok{# dat5 <- dplyr::select(dat4,ID,year,count,Yr.first.seen,Calves,Calves.1,Calves.2)} \CommentTok{# } \CommentTok{# dat6 <- filter(dat5,count >0)} \CommentTok{# glimpse(dat6)} \CommentTok{# } \CommentTok{# dat7 <- mutate(dat6, year = ifelse(year == "X2006","2006", year),} \CommentTok{# year = ifelse(year == "X2007","2007", year),} \CommentTok{# year = ifelse(year == "X2008","2008", year),} \CommentTok{# year = ifelse(year == "X2009","2009", year),} \CommentTok{# year = ifelse(year == "X2010","2010", year),} \CommentTok{# year = ifelse(year == "X2011","2011", year),} \CommentTok{# year = ifelse(year == "X2012","2012", year))} \CommentTok{# } \CommentTok{# a <- group_by(dat7, ID, Yr.first.seen) %>%} \CommentTok{# mutate(mother = ifelse(Yr.first.seen > 0, 1, 0)) %>%} \CommentTok{# filter(mother == 1) %>%} \CommentTok{# ungroup() %>%} \CommentTok{# dplyr::select(ID,year,Calves,Calves.1) %>%} \CommentTok{# filter(Calves.1<2013) %>%} \CommentTok{# filter(!year == Calves) %>%} \CommentTok{# filter(!year ==Calves.1)} \CommentTok{# } \CommentTok{# a} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3, echo=TRUE, message=FALSE, warning=FALSE-----------} \CommentTok{# greater.than.2 <- sample.true[sample.true>2]} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.2<-sum(greater.than.2)/length(greater.than.2)} \CommentTok{# s.2<-sd(greater.than.2)} \CommentTok{# SE.2<-s2013/(sqrt(length(greater.than.2)))} \CommentTok{# n.2<-length(greater.than.2)} \CommentTok{# low.qt.2<- mean.2-(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# high.qt.2 <- mean.2+(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[4,] <- c("miss2year",n.2,mean.2,low.qt.2,} 
\CommentTok{# high.qt.2,sd(greater.than.2))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 1, echo=TRUE----------------------------} \CommentTok{# ########################### 2.2%} \CommentTok{# #parameters} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# ###round all percentages upwards} \CommentTok{# detect1 <- 44 # (45*1.02) - 45 = 0.9} \CommentTok{# detect2 <- 42 # (45*1.05) - 45 = 2.25} \CommentTok{# detect3 <- 40 # (45*1.10) - 45 = 4.5} \CommentTok{# } \CommentTok{# sample2 <-rep(NA, 1000)} \CommentTok{# sample5 <-rep(NA, 1000)} \CommentTok{# sample10 <-rep(NA, 1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample2[i]<-mean(sample(year2013$interval,detect1,replace=T))} \CommentTok{# sample5[i]<-mean(sample(year2013$interval,detect2,replace=T))} \CommentTok{# sample10[i]<-mean(sample(year2013$interval,detect3,replace=T))} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# ######################estimates##############} \CommentTok{# sample2 <- sort(sample2)} \CommentTok{# #low = 25/1000} \CommentTok{# sample2.2.5 <- sample2[25]} \CommentTok{# #median} \CommentTok{# sample2.50 <- sample2[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample2.975 <- sample2[975]} \CommentTok{# } \CommentTok{# sample5 <- sort(sample5)} \CommentTok{# #low = 25/1000} \CommentTok{# sample5.2.5 <- sample5[25]} \CommentTok{# #median} \CommentTok{# sample5.50 <- sample5[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample5.975 <- sample5[975]} \CommentTok{# } \CommentTok{# sample10 <- sort(sample10)} \CommentTok{# #low = 25/1000} \CommentTok{# sample10.2.5 <- sample10[25]} \CommentTok{# #median} \CommentTok{# sample10.50 <- sample10[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample10.975 <- sample10[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[5,] <- c("detect1",detect1,sample2.50,sample2.2.5,sample2.975,NA)} \CommentTok{# Sumtable[6,] <- c("detect2",detect2,sample5.50,sample5.2.5,sample5.975,NA)} \CommentTok{# Sumtable[7,] <- c("detect5",detect3,sample10.50,sample10.2.5,sample10.975,NA)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----detection sim.2-----------------------------------------------------} \CommentTok{# } \CommentTok{# #be very careful as Dat is just IDS and no id of females with calves} \CommentTok{# #BUT Data is identified females...} \CommentTok{# length(Data$ID)} \CommentTok{# length(dat$ID)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# glimpse(Data)} \CommentTok{# dat.detect <- dplyr::select(Data,ID,Calves,Calves.1, Calves.2) %>%} \CommentTok{# mutate(Calves = factor(Calves),} \CommentTok{# Calves.1 = factor(Calves.1),} \CommentTok{# Calves.2 = factor(Calves.2))} \CommentTok{# } \CommentTok{# a <- as.data.frame.matrix(table(Data$ID,Data$Calves))} \CommentTok{# head(a)} \CommentTok{# a[,7] <-row.names(a)} \CommentTok{# colnames(a)[1] <- "y2006"} \CommentTok{# colnames(a)[2] <- "y2007"} \CommentTok{# colnames(a)[3] <- "y2008"} \CommentTok{# colnames(a)[4] <- "y2009"} \CommentTok{# colnames(a)[5] <- "y2010"} \CommentTok{# colnames(a)[6] <- "y2011"} \CommentTok{# colnames(a)[7] <- "ID"} \CommentTok{# a[,8] <- 0} \CommentTok{# colnames(a)[8] <- "y2012"} \CommentTok{# a[,9] <- 0} \CommentTok{# colnames(a)[9] <- "y2013"} \CommentTok{# a <- dplyr::select(a,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# b <- 
as.data.frame.matrix(table(Data$ID,Data$Calves.1))} \CommentTok{# head(b)} \CommentTok{# b[,5] <-row.names(b)} \CommentTok{# colnames(b)[5] <- "ID"} \CommentTok{# b[,6] <- 0} \CommentTok{# colnames(b)[6] <- "y2006"} \CommentTok{# b[,7] <- 0} \CommentTok{# colnames(b)[7] <- "y2007"} \CommentTok{# b[,8] <- 0} \CommentTok{# colnames(b)[8] <- "y2008"} \CommentTok{# b[,9] <- 0} \CommentTok{# colnames(b)[9] <- "y2009"} \CommentTok{# colnames(b)[1] <- "y2010"} \CommentTok{# colnames(b)[2] <- "y2011"} \CommentTok{# colnames(b)[3] <- "y2012"} \CommentTok{# colnames(b)[4] <- "y2013"} \CommentTok{# b <- dplyr::select(b,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# c <- as.data.frame.matrix(table(Data$ID,Data$Calves.2))} \CommentTok{# head(c)} \CommentTok{# colnames(c)[1] <- "y2013"} \CommentTok{# c[,2] <-row.names(c)} \CommentTok{# colnames(c)[2] <- "ID"} \CommentTok{# c[,3] <- 0} \CommentTok{# colnames(c)[3] <- "y2006"} \CommentTok{# c[,4] <- 0} \CommentTok{# colnames(c)[4] <- "y2007"} \CommentTok{# c[,5] <- 0} \CommentTok{# colnames(c)[5] <- "y2008"} \CommentTok{# c[,6] <- 0} \CommentTok{# colnames(c)[6] <- "y2009"} \CommentTok{# c[,7] <- 0} \CommentTok{# colnames(c)[7] <- "y2010"} \CommentTok{# c[,8] <- 0} \CommentTok{# colnames(c)[8] <- "y2011"} \CommentTok{# c[,9] <- 0} \CommentTok{# colnames(c)[9] <- "y2012"} \CommentTok{# } \CommentTok{# c <- dplyr::select(c,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012,y2013)} \CommentTok{# } \CommentTok{# countdat <- rbind(a,b,c)} \CommentTok{# glimpse(countdat)} \CommentTok{# head(full.dat)} \CommentTok{# } \CommentTok{# full.dat <- group_by(countdat, ID) %>%} \CommentTok{# summarise(y2006 = sum(y2006),} \CommentTok{# y2007 = sum(y2007),} \CommentTok{# y2008 = sum(y2008),} \CommentTok{# y2009 = sum(y2009),} \CommentTok{# y2010 = sum(y2010),} \CommentTok{# y2011 = sum(y2011),} \CommentTok{# y2012 = sum(y2012),} \CommentTok{# y2013 = sum(y2013))} \CommentTok{# } \CommentTok{# 2012-2006} \CommentTok{# } \CommentTok{# ##checking....} \CommentTok{# } \CommentTok{# sort(Data$ID)} \CommentTok{# filter(Data, ID == "AI06022")} \CommentTok{# filter(Data, ID == "AI08340")} \CommentTok{# filter(Data, ID == "AI08343")} \CommentTok{# } \CommentTok{# head(Data)} \CommentTok{# } \CommentTok{# } \CommentTok{# # glimpse(c)} \CommentTok{# # Data$Calves.1,} \CommentTok{# # # Spread and gather are complements} \CommentTok{# # df <- data.frame(x = c("a", "b"), y = c(3, 4), z = c(5, 6))} \CommentTok{# # df %>% spread(x, y) %>% gather(x, y, a:b, na.rm = TRUE)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 2---------------------------------------} \CommentTok{# longer5.6 <- c(sample.true,5,6,6)} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.56<-sum(longer5.6)/length(longer5.6)} \CommentTok{# s.56<-sd(longer5.6)} \CommentTok{# SE.56<-s.56/(sqrt(length(longer5.6)))} \CommentTok{# n.56<-(length(longer5.6))} \CommentTok{# low.qt.56<- mean.56-(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# high.qt.56 <- mean.56+(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[8,] <- c("longer.56",n.56,mean.56,low.qt.56,high.qt.56,sd(longer5.6))} \CommentTok{# } \CommentTok{# ###sort out numbering in dataframe} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- 
as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table 2, echo=FALSE------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3_plot, echo=FALSE-----------------------------------} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 5, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \end{Highlighting} \end{Shaded} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \hypertarget{exercise-4}{% \section{Exercise 4}\label{exercise-4}} In these exercises, you will be adapting the code written in this chapter to investigate slightly different questions. You should create a new R script \texttt{Ex4.R} in your working directory for these exercises so your chapter code is left unchanged. Exercise 4A is based solely on the required material and Exercises 4B - 4F are based on the example cases. You should work through each example before attempting each of the later exercises. \emph{The solutions to this exercise are found at the end of this book (\protect\hyperlink{ex4a-answers}{here}). You are \textbf{strongly recommended} to make a good attempt at completing this exercise on your own and only look at the solutions when you are truly stumped.} \hypertarget{exercise-4a-required-material-only}{% \subsection*{Exercise 4A: Required Material Only}\label{exercise-4a-required-material-only}} \addcontentsline{toc}{subsection}{Exercise 4A: Required Material Only} These questions are based on the material in Sections \ref{randomness} - \ref{mc-summaries} only. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Simulate flipping an unfair coin (probability of heads = 0.6) 100 times using \texttt{rbinom()}. Count the number of heads and tails. \item Simulate flipping the same unfair coin 100 times, but using \texttt{sample()} instead. Determine what fraction of the flips resulted in heads. \item Simulate rolling a fair 6-sided die 100 times using \texttt{sample()}. Determine what fraction of the rolls resulted in an even number. \item Simulate rolling the same die 100 times, but use the function \texttt{rmultinom()} instead. Look at the help file for details on how to use this function. Determine what fraction of the rolls resulted in an odd number. 
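\end{enumerate}

If you have not used these functions before, the short sketch below shows the general form of each call. It is only an illustration of the arguments (the probabilities and sample sizes simply mirror the wording above), not the worked solutions, so still make a genuine attempt at the counting and summarizing steps yourself.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# a minimal sketch of the three random-number functions used above (hint only)}
\CommentTok{# rbinom(): number of successes in independent 0/1 trials}
\NormalTok{flips <- rbinom(n = 100, size = 1, prob = 0.6)  # 100 draws of 0 (tails) or 1 (heads)}
\CommentTok{# sample(): draw outcomes from a set of values with given probabilities}
\NormalTok{flips2 <- sample(x = c("H", "T"), size = 100, replace = TRUE, prob = c(0.6, 0.4))}
\CommentTok{# rmultinom(): counts in each category from a single multinomial draw}
\NormalTok{rolls <- rmultinom(n = 1, size = 100, prob = rep(1/6, 6))  # a 6 x 1 matrix of counts}
\end{Highlighting}
\end{Shaded}

\protect\hyperlink{ex4a-answers}{Solutions}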
\hypertarget{exercise-4b-test-rnorm}{%
\subsection*{\texorpdfstring{Exercise 4B: Test \texttt{rnorm}}{Exercise 4B: Test rnorm}}\label{exercise-4b-test-rnorm}}
\addcontentsline{toc}{subsection}{Exercise 4B: Test \texttt{rnorm}}

These questions will require you to adapt the code written in Section \ref{rnorm-ex}.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Adapt this example to investigate another univariate probability distribution, like \texttt{-lnorm()}, \texttt{-pois()}, or \texttt{-beta()}. See the help files (e.g., \texttt{?rpois}) for details on how to use each function.
\end{enumerate}

\protect\hyperlink{ex4b-answers}{Solutions}

\hypertarget{exercise-4c-stochastic-power-analysis}{%
\subsection*{Exercise 4C: Stochastic Power Analysis}\label{exercise-4c-stochastic-power-analysis}}
\addcontentsline{toc}{subsection}{Exercise 4C: Stochastic Power Analysis}

These questions will require you to adapt the code written in Section \ref{power-ex}.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  What sample size \texttt{n} do you need to achieve a power of 0.8 for detecting a significant difference between the two tagging methods?
\item
  How do the inferences from the power analysis change if you are interested in \texttt{p\_new\ =\ 0.4} instead of \texttt{p\_new\ =\ 0.25}? Do you need to tag more or fewer fish in this case?
\item
  Your analysis takes a bit of time to run, so you are interested in tracking its progress. Add a progress message to your nested \texttt{for()} loop that will print the sample size currently being analyzed:
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{}
  \KeywordTok{cat}\NormalTok{(}\StringTok{"}\CharTok{\textbackslash{}r}\StringTok{"}\NormalTok{, }\StringTok{"Sample Size = "}\NormalTok{, n_try[n])}
  \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{}
\NormalTok{    ...}
\NormalTok{  \}}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

\protect\hyperlink{ex4c-answers}{Solutions}

\hypertarget{exercise-4d-harvest-policy-analysis}{%
\subsection*{Exercise 4D: Harvest Policy Analysis}\label{exercise-4d-harvest-policy-analysis}}
\addcontentsline{toc}{subsection}{Exercise 4D: Harvest Policy Analysis}

These questions will require you to adapt the code written in Section \ref{harv-ex}.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Add an argument to \texttt{ricker\_sim()} that will give the user an option to create a plot that shows the time series of recruitment, harvest, and escapement all on the same plot. Set the default to not plot the result, in case you forget to turn it off before performing the Monte Carlo analysis.
\item
  Add an \emph{error handler} to \texttt{ricker\_sim()} that will cause the function to return an error \texttt{if()} the names of the vector passed to the \texttt{param} argument aren't what the function is expecting. You can use \texttt{stop("Error\ Message\ Goes\ Here")} to have your function stop and return an error.
\item
  How would the results of the trade-off analysis differ if the process error were larger (a larger value of \(\sigma\))?
\item
  Add implementation error to the harvest policy. That is, if the target exploitation rate is \(U\), make the realized exploitation rate in year \(y\) be \(U_y \sim \mathrm{Beta}(a,b)\), where \(a = 100U\) and \(b = 100(1-U)\).
You can make there be more implementation error by inserting a smaller number other than 100 here. How does this affect the trade-off analysis? \end{enumerate} \protect\hyperlink{ex4d-answers}{Solutions} \hypertarget{exercise-4e-the-bootstrap}{% \subsection*{Exercise 4E: The Bootstrap}\label{exercise-4e-the-bootstrap}} \addcontentsline{toc}{subsection}{Exercise 4E: The Bootstrap} These questions will require you to adapt the code written in Section \ref{boot-test-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Replicate the bootstrap analysis, but adapt it for the linear regression example in Section \ref{regression}. Stop at the step where you summarize the 95\% interval range. \item Compare the 95\% bootstrap confidence intervals to the intervals you get by running the \texttt{predict()} function on the original data set with the argument \texttt{interval\ =\ "confidence"}. \end{enumerate} \protect\hyperlink{ex4e-answers}{Solutions} \hypertarget{exercise-4f-permutation-tests}{% \subsection*{Exercise 4F: Permutation Tests}\label{exercise-4f-permutation-tests}} \addcontentsline{toc}{subsection}{Exercise 4F: Permutation Tests} These questions will require you to adapt the code written in Section \ref{perm-test-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Adapt the code to perform a permutation test for the difference in each of the zooplankton densities between treatments. Don't forget to fix the missing value in the \texttt{chao} variable. See \protect\hyperlink{ex1b}{Exercise 2} for more details on this. \item Adapt the code to perform a permutation test for another data set used in this book where there are observations of both a categorical variable and a continuous variable. The data sets \texttt{sockeye.csv}, \texttt{growth.csv}, or \texttt{creel.csv} should be good starting points. \item Add a calculation of the p-value for a one-tailed test (i.e., that the difference in means is greater or less than zero). Steps 1 - 4 are the same: all you need is \texttt{Dnull} and \texttt{Dobs}. Don't be afraid to Google this if you are confused. \end{enumerate} \protect\hyperlink{ex4f-answers}{Solutions} \hypertarget{calves}{% \chapter{Calving interval example}\label{calves}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{source}\NormalTok{(}\StringTok{"./R/Rcode/Final_report_Davidson2017.R"}\NormalTok{, }\DataTypeTok{echo =} \OtherTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## > library(boot) ## ## > library(tidyverse) ## ## > library(dplyr) ## ## > library(ggplot2) ## ## > library(qpcR) ## ## > library(pwr) ## ## > library(ggthemes) ## ## > library(gridExtra) ## ## > Data <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > Year <- unique(Data$Calves.1) ## ## > year2010a <- c(3, 3, 2) ## ## > year2010 <- filter(Data, Calves.1 < 2011) ## ## > year2010 <- year2010$Interval.1[!is.na(year2010$Interval.1)] ## ## > year2011a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2) ## ## > year2011 <- filter(Data, Calves.1 < 2012) ## ## > year2011 <- year2011$Interval.1[!is.na(year2011$Interval.1)] ## ## > year2012a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3) ## ## > year2012 <- filter(Data, Calves.1 < 2013) ## ## > year2012 <- year2012$Interval.1[!is.na(year2012$Interval.1)] ## ## > year2013a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3, 6, 5, 4, 4, 4, 4, ## + 4, 3, 3, 3, 3, 3, 3, 3, 3, .... 
[TRUNCATED] ## ## > full <- c(Data$Interval.1, Data$Interval.2) ## ## > year2013 <- full[!is.na(unlist(full))] ## ## > mean2010 <- sum(year2010)/length(year2010) ## ## > s2010 <- sd(year2010) ## ## > SE2010 <- s2010/(sqrt(length(year2010))) ## ## > n2010 <- (length(year2010)) ## ## > low.qt2010 <- mean2010 - (qt(0.975, length(year2010)) * ## + SE2010) ## ## > high.qt2010 <- mean2010 + (qt(0.975, length(year2010)) * ## + SE2010) ## ## > mean2011 <- sum(year2011)/length(year2011) ## ## > s2011 <- sd(year2011) ## ## > SE2011 <- s2011/(sqrt(length(year2011))) ## ## > n2011 <- (length(year2011)) ## ## > low.qt2011 <- mean2011 - (qt(0.975, length(year2011)) * ## + SE2011) ## ## > high.qt2011 <- mean2011 + (qt(0.975, length(year2011)) * ## + SE2011) ## ## > mean2012 <- sum(year2012)/length(year2012) ## ## > s2012 <- sd(year2012) ## ## > SE2012 <- s2012/(sqrt(length(year2012))) ## ## > n2012 <- (length(year2012)) ## ## > low.qt2012 <- mean2012 - (qt(0.975, length(year2012)) * ## + SE2012) ## ## > high.qt2012 <- mean2012 + (qt(0.975, length(year2012)) * ## + SE2012) ## ## > mean2013 <- sum(year2013)/length(year2013) ## ## > s2013 <- sd(year2013) ## ## > SE2013 <- s2013/(sqrt(length(year2013))) ## ## > n2013 <- (length(year2013)) ## ## > low.qt2013 <- mean2013 - (qt(0.975, length(year2013)) * ## + SE2013) ## ## > high.qt2013 <- mean2013 + (qt(0.975, length(year2013)) * ## + SE2013) ## ## > n <- c(length(year2010), length(year2011), length(year2012), ## + length(year2013)) ## ## > mY <- c(mean(year2010), mean(year2011), mean(year2012), ## + mean(year2013)) ## ## > year <- Year ## ## > low.qt <- c(low.qt2010, low.qt2011, low.qt2012, low.qt2013) ## ## > high.qt <- c(high.qt2010, high.qt2011, high.qt2012, ## + high.qt2013) ## ## > sd <- c(s2010, s2011, s2012, s2013) ## ## > sum.dat <- cbind(year, n, mY, low.qt, high.qt, sd) ## ## > sum.dat <- as.data.frame(sum.dat) ## ## > library(knitr) ## ## > kable(sum.dat, format = "markdown") ## ## ## | year| n| mY| low.qt| high.qt| sd| ## |----:|--:|--------:|--------:|--------:|---------:| ## | 2010| 3| 2.666667| 1.605851| 3.727482| 0.5773503| ## | 2011| 15| 2.866667| 2.673022| 3.060312| 0.3518658| ## | 2012| 25| 3.240000| 2.919170| 3.560830| 0.7788881| ## | 2013| 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > ggplot(sum.dat, aes(y = mY, x = year)) + geom_point() + ## + geom_line() + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), ## + width = 0.1) + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-1} \end{center} \begin{verbatim} ## ## > par(mfrow = c(2, 2)) ## ## > plot(factor(year2010), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "a)", sub = "Sample size 3", ylab = "Frequency", ## + xlab = "Calving interval", cex.main = 1.5, font.main = 4, ## + col.main = "bl ..." ... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2011), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "b)", sub = "Sample size 15", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2012), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "c)", sub = "Sample size 25", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... 
[TRUNCATED] ## ## > box() ## ## > plot(factor(year2013), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-2} \end{center} \begin{verbatim} ## ## > title(main = "d)", sub = "Sample size 45", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > library(qpcR) ## ## > rawdata <- qpcR:::cbind.na(year2010, year2011, year2012, ## + year2013) ## ## > rawdata <- as.data.frame(rawdata) ## ## > year2010 <- data.frame(year2010, year = c("2010")) ## ## > year2010 <- rename(year2010, interval = year2010, ## + year = year) ## ## > year2011 <- data.frame(year2011, year = c("2011")) ## ## > year2011 <- rename(year2011, interval = year2011, ## + year = year) ## ## > year2012 <- data.frame(year2012, year = c("2012")) ## ## > year2012 <- rename(year2012, interval = year2012, ## + year = year) ## ## > year2013 <- data.frame(year2013, year = c("2013")) ## ## > year2013 <- rename(year2013, interval = year2013, ## + year = year) ## ## > ggplotraw <- rbind(year2010, year2011, year2012, year2013) ## ## > ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval)) ## ## > ggplot(year2013, aes(x = interval)) + geom_bar(alpha = 1, ## + width = 0.9, fill = "black") + xlab(expression("Calving" ~ ## + "interval" ~ (ita .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-3} \end{center} \begin{verbatim} ## ## > RealCI <- as.numeric(year2013$interval) ## ## > xlong <- RealCI ## ## > meanlong <- sum(xlong)/length(xlong) ## ## > slong <- sd(xlong) ## ## > SElong <- slong/(sqrt(length(xlong))) ## ## > nlong <- (length(xlong)) ## ## > lowqtlong <- meanlong - (qt(0.975, nlong) * SElong) ## ## > highqtlong <- meanlong + (qt(0.975, nlong) * SElong) ## ## > MedCI <- c(RealCI[RealCI < 5], 3, 3, 3, 3, 2, 3) ## ## > xmed <- MedCI ## ## > meanmed <- sum(xmed)/length(xmed) ## ## > smed <- sd(xmed) ## ## > SEmed <- smed/(sqrt(length(xmed))) ## ## > nmed <- (length(xmed)) ## ## > lowqtmed <- meanmed - (qt(0.975, length(xmed)) * SEmed) ## ## > highqtmed <- meanmed + (qt(0.975, length(xmed)) * ## + SEmed) ## ## > LowCI <- c(RealCI[RealCI < 4], 3, 3, 3, 3, 3, 2, 2, ## + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2) ## ## > xshort <- LowCI ## ## > meanshort <- mean(xshort) ## ## > sshort <- sd(xshort) ## ## > SEshort <- sshort/(sqrt(length(xshort))) ## ## > lowqtshort <- meanshort - (qt(0.975, length(xshort)) * ## + SEshort) ## ## > highqtshort <- meanshort + (qt(0.975, length(xshort)) * ## + SEshort) ## ## > bdata <- qpcR:::cbind.na(RealCI, MedCI, LowCI) ## ## > bdata <- as.data.frame(bdata) ## ## > par(mfrow = c(1, 3)) ## ## > plot(factor(bdata$LowCI), main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$MedCI), main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$RealCI), main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-4} \end{center} \begin{verbatim} ## ## > par(mfrow = c(3, 1)) ## ## > plot(density(as.numeric(as.character(LowCI)), bw = 0.5), ## + main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(density(as.numeric(as.character(MedCI)), bw = 0.5), ## + main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > 
plot(density(as.numeric(as.character(RealCI)), bw = 0.5), ## + main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-5} \end{center} \begin{verbatim} ## ## > Sumtable <- data.frame(variable = c("low.qt", "mean", ## + "high.qt", "sd", "SE"), short = c(lowqtshort, meanshort, ## + highqtshort, sshort, SE .... [TRUNCATED] ## ## > n <- c(length(LowCI), length(MedCI), length(year2013$interval)) ## ## > mY <- c(mean(LowCI), mean(MedCI), mean(year2013$interval)) ## ## > interval <- c("Low", "Medium", "Observed") ## ## > low.qt <- c(lowqtshort, lowqtmed, low.qt2013) ## ## > high.qt <- c(highqtshort, highqtmed, high.qt2013) ## ## > sd <- c(sshort, smed, s2013) ## ## > Sumtable <- cbind(interval, n, mY, low.qt, high.qt, ## + sd) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-6} \end{center} \begin{verbatim} ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:--------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > library(knitr) ## ## > srwdat <- read.csv(file = "./R/Data/srw_data.csv") ## ## > kable(srwdat, format = "markdown", col.names = c("Sample size", ## + "Mean", "Lower limit", "Higher limit", "SE", "Author", "Location")) ## ## ## | Sample size| Mean| Lower limit| Higher limit| SE|Author |Location | ## |-----------:|----:|-----------:|------------:|----:|:------------------|:---------------------------------| ## | NA| 3.12| 3.07| 3.17| NA|Best et al. 2001 |South Africa | ## | 1504| 3.15| 3.11| 3.18| NA|Best et al. 2005 |South Africa (1971-2003 Updated) | ## | NA| 3.16| 3.13| 3.19| NA|Brandao et al 2010 |South Africa ( 1971-2006 Updated) | ## | NA| 3.35| NA| NA| 0.05|Cooke et al. 2001 |Argentina | ## | 749| 3.42| NA| NA| 0.11|Cooke et al. 
2003 |Argentina | ## | NA| 3.63| NA| NA| 0.13|Burnell 2001 |Australia | ## ## > SAreps <- 1500 ## ## > ARreps <- 800 ## ## > Aussiereps <- 2000 ## ## > low <- 1000 ## ## > verylow <- 100 ## ## > lowest <- 10 ## ## > par(mfrow = c(2, 3)) ## ## > plot(factor(sample(year2013$interval, lowest, replace = T)), ## + main = "3 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, verylow, replace = T)), ## + main = "10 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, low, replace = T)), ## + main = "30 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, Aussiereps, ## + replace = T)), main = "500 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, ARreps, replace = T)), ## + main = "800 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, SAreps, replace = T)), ## + main = "1500 intervals") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-7} \end{center} \begin{verbatim} ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > var10 <- paste0("n_", 1:10) ## ## > sample10 <- matrix(data = NA, ncol = lowest, nrow = boots) ## ## > colnames(sample10) <- as.list(var10) ## ## > for (i in 1:boots) { ## + sample10[i, ] <- sample(year2013$interval, lowest, replace = T) ## + } ## ## > sample10 <- as.data.frame(sample10) ## ## > sample10 <- sample10 %>% mutate(mean10 = rowMeans(sample10)) ## ## > sample10t <- as.matrix(sample10) ## ## > sample10t <- t(sample10t) ## ## > var100 <- paste0("n_", 1:100) ## ## > sample100 <- matrix(data = NA, ncol = verylow, nrow = boots) ## ## > colnames(sample100) <- as.list(var100) ## ## > for (i in 1:boots) { ## + sample100[i, ] <- sample(year2013$interval, verylow, replace = T) ## + } ## ## > sample100 <- as.data.frame(sample100) ## ## > sample100 <- sample100 %>% mutate(mean100 = rowMeans(sample100)) ## ## > var500 <- paste0("n_", 1:500) ## ## > sample500 <- matrix(data = NA, ncol = 500, nrow = boots) ## ## > colnames(sample500) <- as.list(var500) ## ## > for (i in 1:boots) { ## + sample500[i, ] <- sample(year2013$interval, 500, replace = T) ## + } ## ## > sample500 <- as.data.frame(sample500) ## ## > sample500 <- sample500 %>% mutate(mean500 = rowMeans(sample500)) ## ## > var1000 <- paste0("n_", 1:1000) ## ## > sample1000 <- matrix(data = NA, ncol = low, nrow = boots) ## ## > colnames(sample1000) <- as.list(var1000) ## ## > for (i in 1:boots) { ## + sample1000[i, ] <- sample(year2013$interval, low, replace = T) ## + } ## ## > sample1000 <- as.data.frame(sample1000) ## ## > sample1000 <- sample1000 %>% mutate(mean1000 = rowMeans(sample1000)) ## ## > varA <- paste0("n_", 1:2000) ## ## > sampleA <- matrix(data = NA, ncol = Aussiereps, nrow = boots) ## ## > colnames(sampleA) <- as.list(varA) ## ## > for (i in 1:boots) { ## + sampleA[i, ] <- sample(year2013$interval, Aussiereps, replace = T) ## + } ## ## > sampleA <- as.data.frame(sampleA) ## ## > sampleA <- sampleA %>% mutate(meanA = rowMeans(sampleA)) ## ## > sampleAt <- t(sampleA) ## ## > for (i in c(1:ncol(sampleA))) { ## + sampleA[, i] <- as.numeric(as.character(sampleA[, i])) ## + } ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > par(mfrow = c(1, 1)) ## ## > 
plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 1, main = "", lwd = 5, ylim = c(0, 8), xlim = c(2, ## + 4.5), axes = FAL .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-8} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 2, lwd = 4) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 3, lwd = 3) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 4, lwd = 2) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > legend("topright", title = "Legend", c("n=10, cv=8.12 ", ## + "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"), ## + b .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 3, main = "", lwd = 1, ylim = c(0, 8), xlim = c(2.5, ## + 4.5), axes = F .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-9} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 4, lwd = 1) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 2, lwd = 1) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 1, lwd = 2) ## ## > legend(y = 8, x = 3.9, title = expression(bold("Sample size (n)")), ## + c(expression(italic("n") ~ "=" ~ "10"), expression(italic("n") ~ ## + .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > pwr.test.results <- power.t.test(n = 45, delta = seq(0, ## + 0.99, 0.001), sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.0 .... [TRUNCATED] ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$delta)) ## ## > colnames(pwr.analysis) <- c("Power", "Mean.difference") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power, Mean.estimate = 3.31 + Mean.difference) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Mean.difference Alpha Mean.estimate ## 1 0.9501505 0.593 0.04984946 3.903 ## ## > ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 3.903, ## + col = "blue" .... 
[TRUNCATED] \end{verbatim} \begin{verbatim} ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > diff <- 3.63 - 3.31 ## ## > pwr.test.results <- power.t.test(n = seq(1, 200, 1), ## + delta = diff, sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.05) \end{verbatim} \begin{verbatim} ## Warning in qt(sig.level/tside, nu, lower.tail = FALSE): NaNs produced \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-10} \end{center} \begin{verbatim} ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$n)) ## ## > colnames(pwr.analysis) <- c("Power", "Sample.size") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Sample.size Alpha ## 1 0.9503366 153 0.0496634 ## ## > ggplot(data = pwr.analysis.1, aes(x = Sample.size, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 45, ## + col = "red") + ge .... [TRUNCATED] \end{verbatim} \begin{verbatim} ## Warning: Removed 1 rows containing missing values (geom_path). \end{verbatim} \begin{verbatim} ## ## > dat <- read.csv("./R/Data/raw_observations_2012.csv") ## ## > glimpse(dat) ## Observations: 180 ## Variables: 10 ## $ ID <fct> AI06006, AI06007, AI06015, AI06022, AI06038, AI100... ## $ X2006 <int> 1, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2007 <int> 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2008 <int> 1, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,... ## $ X2009 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2010 <int> 0, 0, 2, 0, 0, 6, 6, 5, 5, 3, 4, 2, 5, 4, 5, 3, 2,... ## $ X2011 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2012 <int> 0, 1, 2, 4, 0, 0, 0, 0, 0, 0, 5, 0, 0, 12, 0, 0, 0... ## $ total <int> 2, 4, 8, 5, 3, 6, 7, 5, 5, 3, 9, 2, 5, 16, 6, 3, 2... ## $ X..yrs.seen <int> 2, 3, 5, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1,... ## ## > head(dat) ## ID X2006 X2007 X2008 X2009 X2010 X2011 X2012 total X..yrs.seen ## 1 AI06006 1 0 1 0 0 0 0 2 2 ## 2 AI06007 2 1 0 0 0 0 1 4 3 ## 3 AI06015 2 1 1 0 2 0 2 8 5 ## 4 AI06022 1 0 0 0 0 0 4 5 2 ## 5 AI06038 1 0 2 0 0 0 0 3 2 ## 6 AI10040 0 0 0 0 6 0 0 6 1 ## ## > dat1 <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > glimpse(dat1) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... 
## ## > dat3 <- dplyr::select(dat, ID, X2006:X2012) %>% gather(year, ## + count, X2006:X2012) ## ## > dat4 <- full_join(dat3, dat1, by = "ID") \end{verbatim} \begin{verbatim} ## Warning: Column `ID` joining factors with different levels, coercing to ## character vector \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-11} \end{center} \begin{verbatim} ## ## > dat5 <- dplyr::select(dat4, ID, year, count, Yr.first.seen, ## + Calves, Calves.1, Calves.2) ## ## > dat6 <- filter(dat5, count > 0) ## ## > glimpse(dat6) ## Observations: 237 ## Variables: 7 ## $ ID <chr> "AI06006", "AI06007", "AI06015", "AI06022", "AI0... ## $ year <chr> "X2006", "X2006", "X2006", "X2006", "X2006", "X2... ## $ count <int> 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ## $ Yr.first.seen <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves.1 <int> NA, NA, NA, 2012, NA, NA, NA, 2013, 2010, NA, NA... ## $ Calves.2 <int> NA, NA, NA, NA, NA, NA, NA, NA, 2013, NA, NA, 20... ## ## > dat7 <- mutate(dat6, year = ifelse(year == "X2006", ## + "2006", year), year = ifelse(year == "X2007", "2007", year), ## + year = ifelse(year == .... [TRUNCATED] ## ## > a <- group_by(dat7, ID, Yr.first.seen) %>% mutate(mother = ifelse(Yr.first.seen > ## + 0, 1, 0)) %>% filter(mother == 1) %>% ungroup() %>% dplyr:: .... [TRUNCATED] ## ## > a ## # A tibble: 1 x 4 ## ID year Calves Calves.1 ## <chr> <chr> <int> <int> ## 1 AI09216 2007 2009 2011 ## ## > greater.than.2 <- sample.true[sample.true > 2] ## ## > mean.2 <- sum(greater.than.2)/length(greater.than.2) ## ## > s.2 <- sd(greater.than.2) ## ## > SE.2 <- s2013/(sqrt(length(greater.than.2))) ## ## > n.2 <- length(greater.than.2) ## ## > low.qt.2 <- mean.2 - (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > high.qt.2 <- mean.2 + (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > Sumtable[4, ] <- c("miss2year", n.2, mean.2, low.qt.2, ## + high.qt.2, sd(greater.than.2)) ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > detect1 <- 44 ## ## > detect2 <- 42 ## ## > detect3 <- 40 ## ## > sample2 <- rep(NA, 1000) ## ## > sample5 <- rep(NA, 1000) ## ## > sample10 <- rep(NA, 1000) ## ## > for (i in 1:boots) { ## + sample2[i] <- mean(sample(year2013$interval, detect1, replace = T)) ## + sample5[i] <- mean(sample(year2013$interval, de .... [TRUNCATED] ## ## > sample2 <- sort(sample2) ## ## > sample2.2.5 <- sample2[25] ## ## > sample2.50 <- sample2[500] ## ## > sample2.975 <- sample2[975] ## ## > sample5 <- sort(sample5) ## ## > sample5.2.5 <- sample5[25] ## ## > sample5.50 <- sample5[500] ## ## > sample5.975 <- sample5[975] ## ## > sample10 <- sort(sample10) ## ## > sample10.2.5 <- sample10[25] ## ## > sample10.50 <- sample10[500] ## ## > sample10.975 <- sample10[975] ## ## > Sumtable[5, ] <- c("detect1", detect1, sample2.50, ## + sample2.2.5, sample2.975, NA) ## ## > Sumtable[6, ] <- c("detect2", detect2, sample5.50, ## + sample5.2.5, sample5.975, NA) ## ## > Sumtable[7, ] <- c("detect5", detect3, sample10.50, ## + sample10.2.5, sample10.975, NA) ## ## > length(Data$ID) ## [1] 41 ## ## > length(dat$ID) ## [1] 180 ## ## > glimpse(Data) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... 
## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... ## ## > dat.detect <- dplyr::select(Data, ID, Calves, Calves.1, ## + Calves.2) %>% mutate(Calves = factor(Calves), Calves.1 = factor(Calves.1), ## + Cal .... [TRUNCATED] ## ## > a <- as.data.frame.matrix(table(Data$ID, Data$Calves)) ## ## > head(a) ## 2006 2007 2008 2009 2010 2011 ## AI06022 1 0 0 0 0 0 ## AI08340 0 0 1 0 0 0 ## AI08341 0 0 1 0 0 0 ## AI08343 0 0 1 0 0 0 ## AI08355 0 0 1 0 0 0 ## AI08362 0 0 1 0 0 0 ## ## > a[, 7] <- row.names(a) ## ## > colnames(a)[1] <- "y2006" ## ## > colnames(a)[2] <- "y2007" ## ## > colnames(a)[3] <- "y2008" ## ## > colnames(a)[4] <- "y2009" ## ## > colnames(a)[5] <- "y2010" ## ## > colnames(a)[6] <- "y2011" ## ## > colnames(a)[7] <- "ID" ## ## > a[, 8] <- 0 ## ## > colnames(a)[8] <- "y2012" ## ## > a[, 9] <- 0 ## ## > colnames(a)[9] <- "y2013" ## ## > a <- dplyr::select(a, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > b <- as.data.frame.matrix(table(Data$ID, Data$Calves.1)) ## ## > head(b) ## 2010 2011 2012 2013 ## AI06022 0 0 1 0 ## AI08340 0 1 0 0 ## AI08341 0 1 0 0 ## AI08343 0 0 0 1 ## AI08355 0 1 0 0 ## AI08362 0 1 0 0 ## ## > b[, 5] <- row.names(b) ## ## > colnames(b)[5] <- "ID" ## ## > b[, 6] <- 0 ## ## > colnames(b)[6] <- "y2006" ## ## > b[, 7] <- 0 ## ## > colnames(b)[7] <- "y2007" ## ## > b[, 8] <- 0 ## ## > colnames(b)[8] <- "y2008" ## ## > b[, 9] <- 0 ## ## > colnames(b)[9] <- "y2009" ## ## > colnames(b)[1] <- "y2010" ## ## > colnames(b)[2] <- "y2011" ## ## > colnames(b)[3] <- "y2012" ## ## > colnames(b)[4] <- "y2013" ## ## > b <- dplyr::select(b, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > c <- as.data.frame.matrix(table(Data$ID, Data$Calves.2)) ## ## > head(c) ## 2013 ## AI06022 0 ## AI08340 0 ## AI08341 0 ## AI08343 0 ## AI08355 0 ## AI08362 0 ## ## > colnames(c)[1] <- "y2013" ## ## > c[, 2] <- row.names(c) ## ## > colnames(c)[2] <- "ID" ## ## > c[, 3] <- 0 ## ## > colnames(c)[3] <- "y2006" ## ## > c[, 4] <- 0 ## ## > colnames(c)[4] <- "y2007" ## ## > c[, 5] <- 0 ## ## > colnames(c)[5] <- "y2008" ## ## > c[, 6] <- 0 ## ## > colnames(c)[6] <- "y2009" ## ## > c[, 7] <- 0 ## ## > colnames(c)[7] <- "y2010" ## ## > c[, 8] <- 0 ## ## > colnames(c)[8] <- "y2011" ## ## > c[, 9] <- 0 ## ## > colnames(c)[9] <- "y2012" ## ## > c <- dplyr::select(c, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > countdat <- rbind(a, b, c) ## ## > glimpse(countdat) ## Observations: 123 ## Variables: 9 ## $ ID <chr> "AI06022", "AI08340", "AI08341", "AI08343", "AI08355", "... ## $ y2006 <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2007 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2008 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,... ## $ y2009 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,... ## $ y2010 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2011 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2012 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2013 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... 
## ## > full.dat <- group_by(countdat, ID) %>% summarise(y2006 = sum(y2006), ## + y2007 = sum(y2007), y2008 = sum(y2008), y2009 = sum(y2009), ## + y2010 .... [TRUNCATED] ## ## > 2012 - 2006 ## [1] 6 ## ## > sort(Data$ID) ## [1] AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 AI08364 AI08365 ## [9] AI08372 AI08378 AI08379 AI08383 AI08386 AI08387 AI08390 AI08395 ## [17] AI08403 AI09216 AI09217 AI09221 AI09224 AI09225 AI09247 AI09249 ## [25] AI09259 AI09265 AI09289 AI10043 AI10056 AI10070 AI10085 AI10086 ## [33] AI10102 AI10124 AI10144 AI10160 AI10167 AI10170 AI10177 AI11408 ## [41] AI11430 ## 41 Levels: AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 ... AI11430 ## ## > filter(Data, ID == "AI06022") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI06022 2006 2006 2012 NA 6 NA NA ## ## > filter(Data, ID == "AI08340") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08340 2008 2008 2011 NA 3 NA NA ## ## > filter(Data, ID == "AI08343") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08343 2008 2008 2013 NA 5 NA NA ## ## > head(Data) ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI10124 2007 2007 2010 2013 3 3 NA ## 2 AI10070 2008 2008 2010 2013 2 3 NA ## 3 AI10086 2007 2007 2010 2013 3 3 NA ## 4 AI08340 2008 2008 2011 NA 3 NA NA ## 5 AI08341 2008 2008 2011 NA 3 NA NA ## 6 AI08355 2008 2008 2011 NA 3 NA NA ## ## > longer5.6 <- c(sample.true, 5, 6, 6) ## ## > mean.56 <- sum(longer5.6)/length(longer5.6) ## ## > s.56 <- sd(longer5.6) ## ## > SE.56 <- s.56/(sqrt(length(longer5.6))) ## ## > n.56 <- (length(longer5.6)) ## ## > low.qt.56 <- mean.56 - (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > high.qt.56 <- mean.56 + (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > Sumtable[8, ] <- c("longer.56", n.56, mean.56, low.qt.56, ## + high.qt.56, sd(longer5.6)) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:---------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## |miss2year | 41| 3.439024| 3.171549| 3.706499| 0.7761695| ## |detect1 | 44| 3.318182| 3.068182| 3.568182| NA| ## |detect2 | 42| 3.285714| 3.071429| 3.571429| NA| ## |detect5 | 40| 3.325000| 3.050000| 3.600000| NA| ## |longer.56 | 48| 3.458333| 3.165307| 3.751360| 1.0097047| ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... 
[TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-81-12} \end{center} \hypertarget{plot-steps-1}{% \subsection{Plot steps}\label{plot-steps-1}} \begin{Shaded} \begin{Highlighting}[] \CommentTok{## ----raw graph, echo=FALSE, message=FALSE, warning=FALSE-----------------} \CommentTok{#plot data} \KeywordTok{ggplot}\NormalTok{(sum.dat, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y =}\NormalTok{ mY, }\DataTypeTok{x =}\NormalTok{ year)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_point}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_line}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_errorbar}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{ymin =}\NormalTok{ low.qt, }\DataTypeTok{ymax =}\NormalTok{ high.qt), }\DataTypeTok{width =} \FloatTok{0.1}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_bw}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-82-1} \end{center} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# ## ----raw graph 2, echo=FALSE, fig.height=6, fig.width=6, message=FALSE, warning=FALSE----} \CommentTok{# } \CommentTok{# #PLOTS} \CommentTok{# par(mfrow=c(2,2))} \CommentTok{# } \CommentTok{# plot(factor(year2010),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="a)",sub="Sample size 3", ylab="Frequency",xlab="Calving interval",} \CommentTok{# cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2011),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="b)",sub="Sample size 15", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2012),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="c)",sub="Sample size 25", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2013),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="d)",sub="Sample size 45", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----raw graph 3, echo=FALSE, fig.height=6, fig.width=6, message=TRUE, warning=TRUE----} \CommentTok{# library(qpcR)} \CommentTok{# #data in one way for plot} \CommentTok{# rawdata <- qpcR:::cbind.na(year2010,year2011,year2012,year2013)} \CommentTok{# rawdata <- as.data.frame(rawdata)} \CommentTok{# } \CommentTok{# #in correct format for ggplot2} \CommentTok{# year2010 <- data.frame(year2010,year = c("2010"))} \CommentTok{# year2010 <- rename(year2010, interval = year2010, year = year )} \CommentTok{# year2011 <- data.frame(year2011,year = c("2011"))} \CommentTok{# year2011 <- rename(year2011, interval = year2011, year = year )} \CommentTok{# year2012 <- data.frame(year2012,year = c("2012"))} \CommentTok{# year2012 <- rename(year2012, interval = year2012, year = year )} \CommentTok{# year2013 <- data.frame(year2013,year = c("2013"))} \CommentTok{# year2013 <- rename(year2013, interval = year2013, year = 
year )} \CommentTok{# ggplotraw <- rbind(year2010,year2011,year2012, year2013)} \CommentTok{# ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval))} \CommentTok{# } \CommentTok{# #sort(year2013$interval) - sort(sample.true)} \CommentTok{# } \CommentTok{# } \CommentTok{# ggplot(year2013,aes(x = interval)) +} \CommentTok{# geom_bar(alpha = 1, width = 0.9,fill = "black") +} \CommentTok{# xlab(expression("Calving"~"interval"~(italic("years")))) +} \CommentTok{# ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# scale_y_continuous(breaks = c(0,5,10,15,20,25,30), limits = c(0,30)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black"))} \CommentTok{# #PLOTS} \CommentTok{# #code to store figure} \CommentTok{# # png("Figure_2_NZSRW_calving_interval_2017_highres.png", width = 12, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals, echo=FALSE, fig.height=10, message=FALSE, warning=FALSE----} \CommentTok{# #################################Missing calving intervals################} \CommentTok{# #Intervals modified by accounting for missed intervals} \CommentTok{# #Bradford et al. 
2008} \CommentTok{# } \CommentTok{# #Raw Data} \CommentTok{# RealCI <- as.numeric(year2013$interval)} \CommentTok{# } \CommentTok{# #Confidence interval} \CommentTok{# xlong <- RealCI} \CommentTok{# meanlong<-sum(xlong)/length(xlong)} \CommentTok{# slong<-sd(xlong)} \CommentTok{# SElong<-slong/(sqrt(length(xlong)))} \CommentTok{# nlong<-(length(xlong))} \CommentTok{# #Standard error and confidence intervals} \CommentTok{# #2 sided t value at the 95% level = 2.093} \CommentTok{# lowqtlong <- meanlong-(qt(0.975,nlong)*SElong)} \CommentTok{# highqtlong <- meanlong+(qt(0.975,nlong)*SElong)} \CommentTok{# } \CommentTok{# ####################MED CI########################################} \CommentTok{# # 2x 6's and 1x 5 replaced with 3threes} \CommentTok{# MedCI <- c(RealCI[RealCI < 5],3,3,3,3,2,3)} \CommentTok{# #sort(MedCI)} \CommentTok{# xmed<-MedCI} \CommentTok{# meanmed<-sum(xmed)/length(xmed)} \CommentTok{# smed<-sd(xmed)} \CommentTok{# SEmed<-smed/(sqrt(length(xmed)))} \CommentTok{# nmed<-(length(xmed))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtmed <- meanmed-(qt(0.975,length(xmed))*SEmed)} \CommentTok{# highqtmed <- meanmed+(qt(0.975,length(xmed))*SEmed)} \CommentTok{# } \CommentTok{# } \CommentTok{# ############################SHORT CI##################################} \CommentTok{# #6,5 replaced with 2 year intervals} \CommentTok{# } \CommentTok{# LowCI <- c(RealCI[RealCI < 4],3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2)} \CommentTok{# xshort<-LowCI} \CommentTok{# meanshort<-mean(xshort)} \CommentTok{# sshort<-sd(xshort)} \CommentTok{# SEshort<-sshort/(sqrt(length(xshort)))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtshort <- meanshort-(qt(0.975,length(xshort))*SEshort)} \CommentTok{# highqtshort <- meanshort+(qt(0.975,length(xshort))*SEshort)} \CommentTok{# } \CommentTok{# bdata <-qpcR:::cbind.na(RealCI,MedCI,LowCI)} \CommentTok{# bdata <- as.data.frame(bdata)} \CommentTok{# } \CommentTok{# #Structure of data set} \CommentTok{# #str(bdata)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot, echo=FALSE, fig.height=3.5, fig.width=5.5, message=FALSE, warning=FALSE----} \CommentTok{# #Basic plots} \CommentTok{# par(mfrow=c(1,3))} \CommentTok{# plot(factor(bdata$LowCI),main="Lowest possible interval")} \CommentTok{# plot(factor(bdata$MedCI), main="Medium possible interval")} \CommentTok{# plot(factor(bdata$RealCI),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot2, fig.height=5.5, fig.width=4.5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #Density basic plots} \CommentTok{# par(mfrow=c(3,1))} \CommentTok{# plot(density(as.numeric(as.character(LowCI)),bw=.5), main="Lowest possible interval")} \CommentTok{# plot(density(as.numeric(as.character(MedCI)),bw= 0.5), main="Medium possible interval")} \CommentTok{# plot(density(as.numeric(as.character(RealCI)),bw = 0.5),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals table, fig.height=8, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# } \CommentTok{# ###################################SUMMARY############################} \CommentTok{# #Pull out important information} \CommentTok{# Sumtable<-data.frame(variable = c("low.qt","mean","high.qt","sd", "SE"), short=c(lowqtshort,meanshort,highqtshort,sshort,SEshort),} \CommentTok{# medium=c(lowqtmed,meanmed,highqtmed,smed,SEmed),} 
\CommentTok{# real=c(lowqtlong,meanlong,highqtlong,slong,SElong))} \CommentTok{# } \CommentTok{# #Make dataframe to plot} \CommentTok{# n <- c(length(LowCI),length(MedCI),length(year2013$interval))} \CommentTok{# mY <- c(mean(LowCI),mean(MedCI),mean(year2013$interval))} \CommentTok{# interval <-c("Low", "Medium","Observed")} \CommentTok{# low.qt <- c(lowqtshort,lowqtmed,low.qt2013)} \CommentTok{# high.qt <- c(highqtshort,highqtmed,high.qt2013)} \CommentTok{# sd <- c(sshort,smed,s2013)} \CommentTok{# Sumtable <- cbind(interval,n,mY,low.qt,high.qt,sd)} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot3, echo=FALSE, fig.height=4, message=FALSE, warning=FALSE----} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 3.6, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table, echo=FALSE--------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----srw_data_table, echo=FALSE------------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# setwd("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data")} \CommentTok{# srwdat <- read.csv(file = "srw_data.csv")} \CommentTok{# } \CommentTok{# #str(srwdat)} \CommentTok{# kable(srwdat, format = "markdown",col.names = c("Sample size","Mean", "Lower limit", "Higher limit", "SE","Author", "Location"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap single, echo=FALSE, fig.height=5--------------------------} \CommentTok{# ############################NZ Simple sample##############################} \CommentTok{# #WITH replacement} \CommentTok{# } \CommentTok{# # to try and match number of intervals observed in other populations} \CommentTok{# # find references} \CommentTok{# SAreps <- 1500} \CommentTok{# ARreps <- 800} \CommentTok{# Aussiereps <- 2000} \CommentTok{# low <- 1000} \CommentTok{# verylow <- 100} \CommentTok{# lowest <- 10} \CommentTok{# } \CommentTok{# #Very raw plots} \CommentTok{# par(mfrow=c(2,3))} \CommentTok{# plot(factor(sample(year2013$interval,lowest,replace=T)),main = "3 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,verylow,replace=T)),main = "10 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,low,replace=T)),main = "30 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,Aussiereps,replace=T)),main = "500 intervals")} \CommentTok{# 
plot(factor(sample(year2013$interval,ARreps,replace=T)),main = "800 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,SAreps,replace=T)),main = "1500 intervals")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap_multiple, echo=FALSE--------------------------------------} \CommentTok{# #do each one 1000 times} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# } \CommentTok{# ###########################n10} \CommentTok{# var10 <- paste0("n_", 1:10)} \CommentTok{# sample10 <-matrix(data = NA, ncol = lowest, nrow = boots)} \CommentTok{# colnames(sample10) <- as.list(var10)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample10 [i, ] <- sample(year2013$interval,lowest,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample10 <- as.data.frame(sample10)} \CommentTok{# sample10 <- sample10 %>%} \CommentTok{# mutate(mean10 = rowMeans(sample10))} \CommentTok{# } \CommentTok{# sample10t <- as.matrix(sample10)} \CommentTok{# sample10t <-t(sample10t)} \CommentTok{# } \CommentTok{# #########################verylow sample size} \CommentTok{# #set up variable names} \CommentTok{# var100 <- paste0("n_", 1:100)} \CommentTok{# } \CommentTok{# sample100 <-matrix(data = NA, ncol = verylow, nrow = boots)} \CommentTok{# colnames(sample100) <- as.list(var100)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample100 [i, ] <- sample(year2013$interval,verylow,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample100 <- as.data.frame(sample100)} \CommentTok{# sample100 <- sample100 %>%} \CommentTok{# mutate(mean100 = rowMeans(sample100))} \CommentTok{# } \CommentTok{# #########################middle one} \CommentTok{# #set up variable names} \CommentTok{# var500 <- paste0("n_", 1:500)} \CommentTok{# } \CommentTok{# sample500 <-matrix(data = NA, ncol = 500, nrow = boots)} \CommentTok{# colnames(sample500) <- as.list(var500)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample500 [i, ] <- sample(year2013$interval,500,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample500 <- as.data.frame(sample500)} \CommentTok{# sample500 <- sample500 %>%} \CommentTok{# mutate(mean500 = rowMeans(sample500))} \CommentTok{# } \CommentTok{# } \CommentTok{# #########################low sample size} \CommentTok{# #set up variable names} \CommentTok{# var1000 <- paste0("n_", 1:1000)} \CommentTok{# } \CommentTok{# sample1000 <-matrix(data = NA, ncol = low, nrow = boots)} \CommentTok{# colnames(sample1000) <- as.list(var1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample1000 [i, ] <- sample(year2013$interval,low,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample1000 <- as.data.frame(sample1000)} \CommentTok{# sample1000 <- sample1000 %>%} \CommentTok{# mutate(mean1000 = rowMeans(sample1000))} \CommentTok{# } \CommentTok{# #########################AUS sample size} \CommentTok{# #set up variable names} \CommentTok{# varA <- paste0("n_", 1:2000)} \CommentTok{# } \CommentTok{# sampleA <-matrix(data = NA, ncol = Aussiereps, nrow = boots)} \CommentTok{# colnames(sampleA) <- as.list(varA)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sampleA [i, ] <- sample(year2013$interval,Aussiereps,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sampleA <- as.data.frame(sampleA)} \CommentTok{# sampleA <- sampleA %>%} \CommentTok{# mutate(meanA = rowMeans(sampleA))} \CommentTok{# } \CommentTok{# sampleAt 
<- t(sampleA)} \CommentTok{# } \CommentTok{# for(i in c(1:ncol(sampleA))) \{} \CommentTok{# sampleA[,i] <- as.numeric(as.character(sampleA[,i]))} \CommentTok{# \}} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# #COnfidence intervals} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap plot2, fig.height=5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #plot the data over each other to look at change in density} \CommentTok{# par(mfrow=c(1,1))} \CommentTok{# #plot(density(sample3$mean3,bw = .15),lwd = 3,lyt = 5, main = "", xlab = "Calving interval", box = FALSE,axis = FALSE)} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 1, main = "", lwd = 5,ylim = c(0,8),xlim = c(2,4.5), axes=FALSE,xlab = "Calving interval")} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 2, lwd = 4)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 3, lwd = 3)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 4, lwd = 2)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# legend('topright',title = "Legend", c("n=10, cv=8.12 ", "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"),bty = "n",} \CommentTok{# lty = c(1,2,3,4,5), lwd = c(5,4,3,2,1), cex=.75)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----final plot for publication1, echo=FALSE-----------------------------} \CommentTok{# #final [plot]} \CommentTok{# #size defined by NZJFMR} \CommentTok{# # 195 mm (h) ? 
148 mm (w).} \CommentTok{# #ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE, xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# lty = c(3,4,5,2,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# # PLOT CODE FOR PUBLICATION} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_lowres.png", width = 14.8, height = 14.8, units = 'cm', res = 400)} \CommentTok{# # dev.off()} \CommentTok{# #} \CommentTok{# #} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_highres.png", width = 14.8, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# #} \CommentTok{# # plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE,xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# # lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# # lines(density(sample500$mean500,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# # lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# # lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# # legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# # lty = c(3,4,2,5,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# # axis(1,lwd=2)} \CommentTok{# # axis(2,lwd=2)} \CommentTok{# #} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1, echo=TRUE----------------------------------------} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = 45,# sample size} \CommentTok{# delta = seq(0,0.99,0.001), #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #sort data 
into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$delta))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Mean.difference")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power,} \CommentTok{# Mean.estimate = 3.31 + Mean.difference)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 3.903, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Raw data result plot (n = 45)")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_2_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #difference} \CommentTok{# diff <- 3.63-3.31 #observed mean of australian population} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = seq(1,200,1),# sample size} \CommentTok{# delta = diff, #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$n))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Sample.size")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Sample.size, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 45, col = "red") +} \CommentTok{# geom_vline(xintercept = 153, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# 
scale_y_continuous(limits = c(0,1)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Observed difference between Australian and NZ mean")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 1, echo=FALSE------------------------------------} \CommentTok{# dat <- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/raw_observations_2012.csv")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat)} \CommentTok{# head(dat)} \CommentTok{# #And the second dataset} \CommentTok{# dat1<- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/RawCI.csv", header=T, quote="\textbackslash{}"")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat1)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 2, echo=FALSE, message=FALSE, warning=FALSE------} \CommentTok{# ##I can then modify this data to} \CommentTok{# #restructure dataset of capture to long dataset} \CommentTok{# dat3 <- dplyr::select(dat, ID, X2006:X2012)%>%} \CommentTok{# gather(year, count,X2006:X2012)} \CommentTok{# } \CommentTok{# #add data on calves} \CommentTok{# dat4 <- full_join(dat3,dat1, by = "ID")} \CommentTok{# dat5 <- dplyr::select(dat4,ID,year,count,Yr.first.seen,Calves,Calves.1,Calves.2)} \CommentTok{# } \CommentTok{# dat6 <- filter(dat5,count >0)} \CommentTok{# glimpse(dat6)} \CommentTok{# } \CommentTok{# dat7 <- mutate(dat6, year = ifelse(year == "X2006","2006", year),} \CommentTok{# year = ifelse(year == "X2007","2007", year),} \CommentTok{# year = ifelse(year == "X2008","2008", year),} \CommentTok{# year = ifelse(year == "X2009","2009", year),} \CommentTok{# year = ifelse(year == "X2010","2010", year),} \CommentTok{# year = ifelse(year == "X2011","2011", year),} \CommentTok{# year = ifelse(year == "X2012","2012", year))} \CommentTok{# } \CommentTok{# a <- group_by(dat7, ID, Yr.first.seen) %>%} \CommentTok{# mutate(mother = ifelse(Yr.first.seen > 0, 1, 0)) %>%} \CommentTok{# filter(mother == 1) %>%} \CommentTok{# ungroup() %>%} \CommentTok{# dplyr::select(ID,year,Calves,Calves.1) %>%} \CommentTok{# filter(Calves.1<2013) %>%} \CommentTok{# filter(!year == Calves) %>%} \CommentTok{# filter(!year ==Calves.1)} \CommentTok{# } \CommentTok{# a} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3, echo=TRUE, message=FALSE, warning=FALSE-----------} \CommentTok{# greater.than.2 <- sample.true[sample.true>2]} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.2<-sum(greater.than.2)/length(greater.than.2)} \CommentTok{# s.2<-sd(greater.than.2)} \CommentTok{# SE.2<-s2013/(sqrt(length(greater.than.2)))} \CommentTok{# n.2<-length(greater.than.2)} \CommentTok{# low.qt.2<- mean.2-(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# high.qt.2 <- mean.2+(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[4,] <- c("miss2year",n.2,mean.2,low.qt.2,} 
\CommentTok{# high.qt.2,sd(greater.than.2))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 1, echo=TRUE----------------------------} \CommentTok{# ########################### 2.2%} \CommentTok{# #parameters} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# ###round all percentages upwards} \CommentTok{# detect1 <- 44 # (45*1.02) - 45 = 0.9} \CommentTok{# detect2 <- 42 # (45*1.05) - 45 = 2.25} \CommentTok{# detect3 <- 40 # (45*1.10) - 45 = 4.5} \CommentTok{# } \CommentTok{# sample2 <-rep(NA, 1000)} \CommentTok{# sample5 <-rep(NA, 1000)} \CommentTok{# sample10 <-rep(NA, 1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample2[i]<-mean(sample(year2013$interval,detect1,replace=T))} \CommentTok{# sample5[i]<-mean(sample(year2013$interval,detect2,replace=T))} \CommentTok{# sample10[i]<-mean(sample(year2013$interval,detect3,replace=T))} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# ######################estimates##############} \CommentTok{# sample2 <- sort(sample2)} \CommentTok{# #low = 25/1000} \CommentTok{# sample2.2.5 <- sample2[25]} \CommentTok{# #median} \CommentTok{# sample2.50 <- sample2[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample2.975 <- sample2[975]} \CommentTok{# } \CommentTok{# sample5 <- sort(sample5)} \CommentTok{# #low = 25/1000} \CommentTok{# sample5.2.5 <- sample5[25]} \CommentTok{# #median} \CommentTok{# sample5.50 <- sample5[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample5.975 <- sample5[975]} \CommentTok{# } \CommentTok{# sample10 <- sort(sample10)} \CommentTok{# #low = 25/1000} \CommentTok{# sample10.2.5 <- sample10[25]} \CommentTok{# #median} \CommentTok{# sample10.50 <- sample10[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample10.975 <- sample10[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[5,] <- c("detect1",detect1,sample2.50,sample2.2.5,sample2.975,NA)} \CommentTok{# Sumtable[6,] <- c("detect2",detect2,sample5.50,sample5.2.5,sample5.975,NA)} \CommentTok{# Sumtable[7,] <- c("detect5",detect3,sample10.50,sample10.2.5,sample10.975,NA)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----detection sim.2-----------------------------------------------------} \CommentTok{# } \CommentTok{# #be very careful as Dat is just IDS and no id of females with calves} \CommentTok{# #BUT Data is identified females...} \CommentTok{# length(Data$ID)} \CommentTok{# length(dat$ID)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# glimpse(Data)} \CommentTok{# dat.detect <- dplyr::select(Data,ID,Calves,Calves.1, Calves.2) %>%} \CommentTok{# mutate(Calves = factor(Calves),} \CommentTok{# Calves.1 = factor(Calves.1),} \CommentTok{# Calves.2 = factor(Calves.2))} \CommentTok{# } \CommentTok{# a <- as.data.frame.matrix(table(Data$ID,Data$Calves))} \CommentTok{# head(a)} \CommentTok{# a[,7] <-row.names(a)} \CommentTok{# colnames(a)[1] <- "y2006"} \CommentTok{# colnames(a)[2] <- "y2007"} \CommentTok{# colnames(a)[3] <- "y2008"} \CommentTok{# colnames(a)[4] <- "y2009"} \CommentTok{# colnames(a)[5] <- "y2010"} \CommentTok{# colnames(a)[6] <- "y2011"} \CommentTok{# colnames(a)[7] <- "ID"} \CommentTok{# a[,8] <- 0} \CommentTok{# colnames(a)[8] <- "y2012"} \CommentTok{# a[,9] <- 0} \CommentTok{# colnames(a)[9] <- "y2013"} \CommentTok{# a <- dplyr::select(a,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# b <- 
as.data.frame.matrix(table(Data$ID,Data$Calves.1))} \CommentTok{# head(b)} \CommentTok{# b[,5] <-row.names(b)} \CommentTok{# colnames(b)[5] <- "ID"} \CommentTok{# b[,6] <- 0} \CommentTok{# colnames(b)[6] <- "y2006"} \CommentTok{# b[,7] <- 0} \CommentTok{# colnames(b)[7] <- "y2007"} \CommentTok{# b[,8] <- 0} \CommentTok{# colnames(b)[8] <- "y2008"} \CommentTok{# b[,9] <- 0} \CommentTok{# colnames(b)[9] <- "y2009"} \CommentTok{# colnames(b)[1] <- "y2010"} \CommentTok{# colnames(b)[2] <- "y2011"} \CommentTok{# colnames(b)[3] <- "y2012"} \CommentTok{# colnames(b)[4] <- "y2013"} \CommentTok{# b <- dplyr::select(b,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# c <- as.data.frame.matrix(table(Data$ID,Data$Calves.2))} \CommentTok{# head(c)} \CommentTok{# colnames(c)[1] <- "y2013"} \CommentTok{# c[,2] <-row.names(c)} \CommentTok{# colnames(c)[2] <- "ID"} \CommentTok{# c[,3] <- 0} \CommentTok{# colnames(c)[3] <- "y2006"} \CommentTok{# c[,4] <- 0} \CommentTok{# colnames(c)[4] <- "y2007"} \CommentTok{# c[,5] <- 0} \CommentTok{# colnames(c)[5] <- "y2008"} \CommentTok{# c[,6] <- 0} \CommentTok{# colnames(c)[6] <- "y2009"} \CommentTok{# c[,7] <- 0} \CommentTok{# colnames(c)[7] <- "y2010"} \CommentTok{# c[,8] <- 0} \CommentTok{# colnames(c)[8] <- "y2011"} \CommentTok{# c[,9] <- 0} \CommentTok{# colnames(c)[9] <- "y2012"} \CommentTok{# } \CommentTok{# c <- dplyr::select(c,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012,y2013)} \CommentTok{# } \CommentTok{# countdat <- rbind(a,b,c)} \CommentTok{# glimpse(countdat)} \CommentTok{# head(full.dat)} \CommentTok{# } \CommentTok{# full.dat <- group_by(countdat, ID) %>%} \CommentTok{# summarise(y2006 = sum(y2006),} \CommentTok{# y2007 = sum(y2007),} \CommentTok{# y2008 = sum(y2008),} \CommentTok{# y2009 = sum(y2009),} \CommentTok{# y2010 = sum(y2010),} \CommentTok{# y2011 = sum(y2011),} \CommentTok{# y2012 = sum(y2012),} \CommentTok{# y2013 = sum(y2013))} \CommentTok{# } \CommentTok{# 2012-2006} \CommentTok{# } \CommentTok{# ##checking....} \CommentTok{# } \CommentTok{# sort(Data$ID)} \CommentTok{# filter(Data, ID == "AI06022")} \CommentTok{# filter(Data, ID == "AI08340")} \CommentTok{# filter(Data, ID == "AI08343")} \CommentTok{# } \CommentTok{# head(Data)} \CommentTok{# } \CommentTok{# } \CommentTok{# # glimpse(c)} \CommentTok{# # Data$Calves.1,} \CommentTok{# # # Spread and gather are complements} \CommentTok{# # df <- data.frame(x = c("a", "b"), y = c(3, 4), z = c(5, 6))} \CommentTok{# # df %>% spread(x, y) %>% gather(x, y, a:b, na.rm = TRUE)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 2---------------------------------------} \CommentTok{# longer5.6 <- c(sample.true,5,6,6)} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.56<-sum(longer5.6)/length(longer5.6)} \CommentTok{# s.56<-sd(longer5.6)} \CommentTok{# SE.56<-s.56/(sqrt(length(longer5.6)))} \CommentTok{# n.56<-(length(longer5.6))} \CommentTok{# low.qt.56<- mean.56-(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# high.qt.56 <- mean.56+(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[8,] <- c("longer.56",n.56,mean.56,low.qt.56,high.qt.56,sd(longer5.6))} \CommentTok{# } \CommentTok{# ###sort out numbering in dataframe} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- 
as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table 2, echo=FALSE------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3_plot, echo=FALSE-----------------------------------} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 5, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \end{Highlighting} \end{Shaded} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \hypertarget{exercise-4-1}{% \section{Exercise 4}\label{exercise-4-1}} In these exercises, you will be adapting the code written in this chapter to investigate slightly different questions. You should create a new R script \texttt{Ex4.R} in your working directory for these exercises so your chapter code is left unchanged. Exercise 4A is based solely on the required material and Exercises 4B - 4F are based on the example cases. You should work through each example before attempting each of the later exercises. \emph{The solutions to this exercise are found at the end of this book (\protect\hyperlink{ex4a-answers}{here}). You are \textbf{strongly recommended} to make a good attempt at completing this exercise on your own and only look at the solutions when you are truly stumped.} \hypertarget{exercise-4a-required-material-only-1}{% \subsection*{Exercise 4A: Required Material Only}\label{exercise-4a-required-material-only-1}} \addcontentsline{toc}{subsection}{Exercise 4A: Required Material Only} These questions are based on the material in Sections \ref{randomness} - \ref{mc-summaries} only. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Simulate flipping an unfair coin (probability of heads = 0.6) 100 times using \texttt{rbinom()}. Count the number of heads and tails. \item Simulate flipping the same unfair coin 100 times, but using \texttt{sample()} instead. Determine what fraction of the flips resulted in heads. \item Simulate rolling a fair 6-sided die 100 times using \texttt{sample()}. Determine what fraction of the rolls resulted in an even number. \item Simulate rolling the same die 100 times, but use the function \texttt{rmultinom()} instead. Look at the help file for details on how to use this function. Determine what fraction of the rolls resulted in an odd number. 
\end{enumerate} \protect\hyperlink{ex4a-answers}{Solutions} \hypertarget{exercise-4b-test-rnorm-1}{% \subsection*{\texorpdfstring{Exercise 4B: Test \texttt{rnorm}}{Exercise 4B: Test rnorm}}\label{exercise-4b-test-rnorm-1}} \addcontentsline{toc}{subsection}{Exercise 4B: Test \texttt{rnorm}} These questions will require you to adapt the code written in Section \ref{rnorm-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Adapt this example to investigate another univariate probability distribution, like \texttt{-lnorm()}, \texttt{-pois()}, or \texttt{-beta()}. See the help files (e.g., \texttt{?rpois}) for details on how to use each function. \end{enumerate} \protect\hyperlink{ex4b-answers}{Solutions} \hypertarget{exercise-4c-stochastic-power-analysis-1}{% \subsection*{Exercise 4C: Stochastic Power Analysis}\label{exercise-4c-stochastic-power-analysis-1}} \addcontentsline{toc}{subsection}{Exercise 4C: Stochastic Power Analysis} These questions will require you to adapt the code written in Section \ref{power-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item What sample size \texttt{n} do you need to have a power of 0.8 of detecting a significant difference between the two tagging methods? \item How do the inferences from the power analysis change if you are interested in \texttt{p\_new\ =\ 0.4} instead of \texttt{p\_new\ =\ 0.25}? Do you need to tag more or fewer fish in this case? \item Your analysis takes a bit of time to run so you are interested in tracking its progress. Add a progress message to your nested \texttt{for()} loop that will print the sample size currently being analyzed: \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \KeywordTok{cat}\NormalTok{(}\StringTok{"}\CharTok{\textbackslash{}r}\StringTok{"}\NormalTok{, }\StringTok{"Sample Size = "}\NormalTok{, n_try[n])} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ ...} \NormalTok{ \}} \NormalTok{\}} \end{Highlighting} \end{Shaded} \protect\hyperlink{ex4c-answers}{Solutions} \hypertarget{exercise-4d-harvest-policy-analysis-1}{% \subsection*{Exercise 4D: Harvest Policy Analysis}\label{exercise-4d-harvest-policy-analysis-1}} \addcontentsline{toc}{subsection}{Exercise 4D: Harvest Policy Analysis} These questions will require you to adapt the code written in Section \ref{harv-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Add an argument to \texttt{ricker\_sim()} that will give the user an option to create a plot that shows the time series of recruitment, harvest, and escapement all on the same plot. Set the default to be to not plot the result, in case you forget to turn it off before performing the Monte Carlo analysis. \item Add an \emph{error handler} to \texttt{ricker\_sim()} that will cause the function to return an error \texttt{if()} the names of the vector passed to the \texttt{param} argument aren't what the function is expecting. You can use \texttt{stop("Error\ Message\ Goes\ Here")} to have your function stop and return an error. \item How do the results of the trade-off analysis differ if the process error was larger (a larger value of \(\sigma\))? \item Add implementation error to the harvest policy. That is, if the target exploitation rate is \(U\), make the real exploitation rate in year \(y\) be: \(U_y \sim Beta(a,b)\), where \(a = 100U\) and \(b = 100(1-U)\). 
You can make there be more implementation error by inserting a smaller number other than 100 here. How does this affect the trade-off analysis? \end{enumerate} \protect\hyperlink{ex4d-answers}{Solutions} \hypertarget{exercise-4e-the-bootstrap-1}{% \subsection*{Exercise 4E: The Bootstrap}\label{exercise-4e-the-bootstrap-1}} \addcontentsline{toc}{subsection}{Exercise 4E: The Bootstrap} These questions will require you to adapt the code written in Section \ref{boot-test-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Replicate the bootstrap analysis, but adapt it for the linear regression example in Section \ref{regression}. Stop at the step where you summarize the 95\% interval range. \item Compare the 95\% bootstrap confidence intervals to the intervals you get by running the \texttt{predict()} function on the original data set with the argument \texttt{interval\ =\ "confidence"}. \end{enumerate} \protect\hyperlink{ex4e-answers}{Solutions} \hypertarget{exercise-4f-permutation-tests-1}{% \subsection*{Exercise 4F: Permutation Tests}\label{exercise-4f-permutation-tests-1}} \addcontentsline{toc}{subsection}{Exercise 4F: Permutation Tests} These questions will require you to adapt the code written in Section \ref{perm-test-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Adapt the code to perform a permutation test for the difference in each of the zooplankton densities between treatments. Don't forget to fix the missing value in the \texttt{chao} variable. See \protect\hyperlink{ex1b}{Exercise 2} for more details on this. \item Adapt the code to perform a permutation test for another data set used in this book where there are observations of both a categorical variable and a continuous variable. The data sets \texttt{sockeye.csv}, \texttt{growth.csv}, or \texttt{creel.csv} should be good starting points. \item Add a calculation of the p-value for a one-tailed test (i.e., that the difference in means is greater or less than zero). Steps 1 - 4 are the same: all you need is \texttt{Dnull} and \texttt{Dobs}. Don't be afraid to Google this if you are confused. \end{enumerate} \protect\hyperlink{ex4f-answers}{Solutions} \hypertarget{ch6}{% \chapter{Matrix population models}\label{ch6}} \hypertarget{sim-examples}{% \chapter{Simulation-Based Examples}\label{sim-examples}} \hypertarget{rnorm-ex}{% \subsection{\texorpdfstring{Test \texttt{rnorm}}{Test rnorm}}\label{rnorm-ex}} In this example, you will verify that the function \texttt{rnorm()} works the same way that \texttt{qnorm()} and \texttt{pnorm()} indicate that it should work. That is, you will verify that random deviates generated using \texttt{rnorm()} have the same properties as the true normal distribution given by \texttt{qnorm()} and \texttt{pnorm()}. Hopefully it will also reinforce the way the random, quantile, and cumulative distribution functions work in R. 
First, specify the mean and standard deviation for this example: \begin{Shaded} \begin{Highlighting}[] \NormalTok{mu =}\StringTok{ }\DecValTok{500}\NormalTok{; sig =}\StringTok{ }\DecValTok{30} \end{Highlighting} \end{Shaded} Now make up \texttt{n} (any number of your choosing, something greater than 10) random deviates from this normal distribution: \begin{Shaded} \begin{Highlighting}[] \NormalTok{random =}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{100}\NormalTok{, mu, sig)} \end{Highlighting} \end{Shaded} Test the quantiles (obtain the values that \texttt{p} * 100\% of the quantities fall below, both for random numbers and from the \texttt{qnorm()} function): \begin{Shaded} \begin{Highlighting}[] \NormalTok{p =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.01}\NormalTok{, }\FloatTok{0.99}\NormalTok{, }\FloatTok{0.01}\NormalTok{)} \NormalTok{random_q =}\StringTok{ }\KeywordTok{quantile}\NormalTok{(random, p)} \NormalTok{normal_q =}\StringTok{ }\KeywordTok{qnorm}\NormalTok{(p, mu, sig)} \KeywordTok{plot}\NormalTok{(normal_q }\OperatorTok{~}\StringTok{ }\NormalTok{random_q); }\KeywordTok{abline}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-87-1} \end{center} The fact that all the quantiles fall around the 1:1 line suggests the \texttt{n} random samples are indeed from a normal distribution. Any deviations you see are due to sampling errors. If you increase \texttt{n} to \texttt{n\ =\ 1e6} (one million), you'll see no deviations. This is called a \textbf{q-q plot}, and is frequently used to assess the fit of data to a distribution. Now test the random values in their agreement with the \texttt{pnorm()} function. Plot the cumulative density functions for the truly normal curve and the one approximated by the random deviates: \begin{Shaded} \begin{Highlighting}[] \NormalTok{q =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{400}\NormalTok{, }\DecValTok{600}\NormalTok{, }\DecValTok{10}\NormalTok{)} \NormalTok{random_cdf =}\StringTok{ }\KeywordTok{ecdf}\NormalTok{(random)} \NormalTok{random_p =}\StringTok{ }\KeywordTok{random_cdf}\NormalTok{(q)} \NormalTok{normal_p =}\StringTok{ }\KeywordTok{pnorm}\NormalTok{(q, mu, sig)} \KeywordTok{plot}\NormalTok{(normal_p }\OperatorTok{~}\StringTok{ }\NormalTok{q, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{)} \KeywordTok{points}\NormalTok{(random_p }\OperatorTok{~}\StringTok{ }\NormalTok{q, }\DataTypeTok{col =} \StringTok{"red"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-89-1} \end{center} The \texttt{ecdf()} function obtains the empirical cumulative density function (which is just \texttt{pnorm()} for a sample). It allows you to plug in any random variable and obtain the probability of having one less than it. \hypertarget{power-ex}{% \subsection{Stochastic Power Analysis}\label{power-ex}} A \textbf{power analysis} is one where the analyst wishes to determine how much power they will have to detect an effect. Power is inversely related to the probability of making a Type II Error: failing to reject a false null hypothesis\footnote{English: concluding there is no effect when there truly is one}. 
In other words, having high power means that you have a high chance of detecting an effect if an effect truly exists. Power is a function of the effect size, the sample size \texttt{n}, and the variability in the data. Strong effects are easier to detect than weak ones, more samples increase the test's sensitivity (the ability to detect weak effects), and lower variability results in more power. You can conduct a power analysis using stochastic simulation (i.e., a Monte Carlo analysis). Here, you will write a power analysis to determine how likely you are to be able to correctly identify what you deem to be a biologically-meaningful difference in survival between two tagging procedures. You know one tagging procedure has approximately a 10\% mortality rate (10\% of tagged fish die within the first 12 hours as a result of the tagging process). A cheaper and less labor-intensive method has been proposed, but before implementing it, your agency wishes to determine if it will have a meaningful impact on the reliability of the study or on the ability of the crew to tag enough individuals that will survive long enough to be useful. You and your colleagues determine that if the mortality rate of the new tagging method reaches 25\%, then gains in time and cost-efficiency would be offset by needing to tag more fish (because more will die). You have decided to perform a small-scale study to determine if using the new method could result in 25\% or more mortality. The study will tag \texttt{n} individuals using both methods (new and old) and track the fraction that survived the first 12 hours. Before performing the study, however, you deem it important to determine how large \texttt{n} needs to be to answer this question. You decide to use a stochastic power analysis to help your research group. The small-scale study can tag a total of at most 100 fish with the currently available resources. Could you tag fewer than 100 total individuals and still have a high probability of detecting a statistically significant difference in mortality? The stochastic power analysis approach works like this (this is called \textbf{pseudocode}): \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Simulate data under the assumption that the difference is real, with \texttt{n} observations per treatment, where \texttt{n\ \textless{}\ 100/2} \item Fit the model to the simulated data (the same model that will be used when the real data are collected) \item Determine if the difference was detected with a significant p-value \item Replicate steps 1 - 3 many times \item Replicate step 4 while varying \texttt{n} over the interval from 10 to 50 \item Determine what fraction of the p-values were deemed significant at each \texttt{n} \end{enumerate} Step 2 will require fitting a generalized linear model; for a review, revisit Section \ref{glms} (specifically Section \ref{logis-regression} on logistic regression).
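As a brief reminder of that pattern (a sketch only, assuming a data frame \texttt{df} with a binary column \texttt{dead} and a two-level factor \texttt{method} like the one constructed inside the function below), the model fit and the p-value check look like this:

\begin{verbatim}
# fit the logistic regression for step 2 (df is assumed to hold one simulated data set)
fit = glm(dead ~ method, data = df, family = binomial)

# step 3: row 2 of the coefficient table is the method effect, column 4 is its p-value
summary(fit)$coef[2,4] < 0.05   # TRUE if a significant difference was detected
\end{verbatim}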
First, create a function that will generate data, fit the model, and determine if the p-value is significant (steps 1-3 above): \begin{Shaded} \begin{Highlighting}[] \NormalTok{sim_fit =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(n, }\DataTypeTok{p_old =} \FloatTok{0.10}\NormalTok{, }\DataTypeTok{p_new =} \FloatTok{0.25}\NormalTok{) \{} \CommentTok{### step 1: create the data }\AlertTok{###} \CommentTok{# generate random response data} \NormalTok{ dead_old =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_old)} \NormalTok{ dead_new =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_new)} \CommentTok{# create the predictor variable} \NormalTok{ method =}\StringTok{ }\KeywordTok{rep}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"old"}\NormalTok{, }\StringTok{"new"}\NormalTok{), }\DataTypeTok{each =}\NormalTok{ n)} \CommentTok{# create a data.frame to pass to glm} \NormalTok{ df =}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{dead =} \KeywordTok{c}\NormalTok{(dead_old, dead_new), }\DataTypeTok{method =}\NormalTok{ method)} \CommentTok{# relevel so old is the reference} \NormalTok{ df}\OperatorTok{$}\NormalTok{method =}\StringTok{ }\KeywordTok{relevel}\NormalTok{(df}\OperatorTok{$}\NormalTok{method, }\DataTypeTok{ref =} \StringTok{"old"}\NormalTok{)} \CommentTok{### step 2: fit the model }\AlertTok{###} \NormalTok{ fit =}\StringTok{ }\KeywordTok{glm}\NormalTok{(dead }\OperatorTok{~}\StringTok{ }\NormalTok{method, }\DataTypeTok{data =}\NormalTok{ df, }\DataTypeTok{family =}\NormalTok{ binomial)} \CommentTok{### step 3: determine if a sig. p-value was found }\AlertTok{###} \CommentTok{# extract the p-value} \NormalTok{ pval =}\StringTok{ }\KeywordTok{summary}\NormalTok{(fit)}\OperatorTok{$}\NormalTok{coef[}\DecValTok{2}\NormalTok{,}\DecValTok{4}\NormalTok{]} \CommentTok{# determine if it was found to be significant} \NormalTok{ pval }\OperatorTok{<}\StringTok{ }\FloatTok{0.05} \NormalTok{\}} \end{Highlighting} \end{Shaded} Next, for steps 4 and 5, set up a \textbf{nested \texttt{for} loop}. This will have two loops: one that loops over sample sizes (step 5) and one that loops over replicates of each sample size (step 4). First, create the looping objects and containers: \begin{Shaded} \begin{Highlighting}[] \NormalTok{I =}\StringTok{ }\DecValTok{500} \CommentTok{# the number of replicates at each sample size} \NormalTok{n_try =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{50}\NormalTok{, }\DecValTok{10}\NormalTok{) }\CommentTok{# the test sample sizes} \NormalTok{N =}\StringTok{ }\KeywordTok{length}\NormalTok{(n_try) }\CommentTok{# count them} \CommentTok{# container: } \NormalTok{out =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \end{Highlighting} \end{Shaded} Now perform the nested loop. The inner-loop iterations will be completed for each element of \texttt{n} in the sequence \texttt{1:N}. The output (which is one element: \texttt{TRUE} or \texttt{FALSE} based on the significance of the p-value) is stored in the corresponding row and column for that iteration of that sample size. 
\begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ out[i,n] =}\StringTok{ }\KeywordTok{sim_fit}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_try[n])} \NormalTok{ \}} \NormalTok{\}} \end{Highlighting} \end{Shaded} You now have a matrix of \texttt{TRUE} and \texttt{FALSE} elements that indicates whether a significant difference was found at the \(\alpha = 0.05\) level if the effect was truly as large as you care about. You can obtain the proportion of all the replicates at each sample size that resulted in a significant difference using the \texttt{mean()} function with \texttt{apply()}: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of Finding Effect (Power)"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-95-1} \end{center} Even if you tagged 100 fish total, you would only have a 49\% chance of saying the effect (which truly is there!) is present under the null hypothesis testing framework. Suppose you and your colleagues aren't relying on p-values in this case, and are purely interested in how precisely the \textbf{effect size} would be estimated. Adapt your function to determine how frequently you would be able to estimate the true mortality of the new method within +/- 5\% based on the point estimate only (the estimate for the tagging mortality of the new method must be between 0.2 and 0.3 for a successful study). 
Change your function to calculate this additional metric and re-run the analysis: \begin{Shaded} \begin{Highlighting}[] \NormalTok{sim_fit =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(n, }\DataTypeTok{p_old =} \FloatTok{0.10}\NormalTok{, }\DataTypeTok{p_new =} \FloatTok{0.25}\NormalTok{) \{} \CommentTok{# create the data} \NormalTok{ dead_old =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_old)} \NormalTok{ dead_new =}\StringTok{ }\KeywordTok{rbinom}\NormalTok{(n, }\DataTypeTok{size =} \DecValTok{1}\NormalTok{, }\DataTypeTok{prob =}\NormalTok{ p_new)} \CommentTok{# create the predictor variable} \NormalTok{ method =}\StringTok{ }\KeywordTok{rep}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"old"}\NormalTok{, }\StringTok{"new"}\NormalTok{), }\DataTypeTok{each =}\NormalTok{ n)} \CommentTok{# create a data.frame to pass to glm} \NormalTok{ df =}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{dead =} \KeywordTok{c}\NormalTok{(dead_old, dead_new), }\DataTypeTok{method =}\NormalTok{ method)} \CommentTok{# relevel so old is the reference} \NormalTok{ df}\OperatorTok{$}\NormalTok{method =}\StringTok{ }\KeywordTok{relevel}\NormalTok{(df}\OperatorTok{$}\NormalTok{method, }\DataTypeTok{ref =} \StringTok{"old"}\NormalTok{)} \CommentTok{# fit the model} \NormalTok{ fit =}\StringTok{ }\KeywordTok{glm}\NormalTok{(dead }\OperatorTok{~}\StringTok{ }\NormalTok{method, }\DataTypeTok{data =}\NormalTok{ df, }\DataTypeTok{family =}\NormalTok{ binomial)} \CommentTok{# extract the p-value} \NormalTok{ pval =}\StringTok{ }\KeywordTok{summary}\NormalTok{(fit)}\OperatorTok{$}\NormalTok{coef[}\DecValTok{2}\NormalTok{,}\DecValTok{4}\NormalTok{]} \CommentTok{# determine if it was found to be significant} \NormalTok{ sig_pval =}\StringTok{ }\NormalTok{pval }\OperatorTok{<}\StringTok{ }\FloatTok{0.05} \CommentTok{# obtain the estimated mortality rate for the new method} \NormalTok{ p_new_est =}\StringTok{ }\KeywordTok{predict}\NormalTok{(fit, }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{method =} \KeywordTok{c}\NormalTok{(}\StringTok{"new"}\NormalTok{)),} \DataTypeTok{type =} \StringTok{"response"}\NormalTok{)} \CommentTok{# determine if it is +/- 5% from the true value} \NormalTok{ prc_est =}\StringTok{ }\NormalTok{p_new_est }\OperatorTok{>=}\StringTok{ }\NormalTok{(p_new }\OperatorTok{-}\StringTok{ }\FloatTok{0.05}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{p_new_est }\OperatorTok{<=}\StringTok{ }\NormalTok{(p_new }\OperatorTok{+}\StringTok{ }\FloatTok{0.05}\NormalTok{)} \CommentTok{# return a vector with these two elements} \KeywordTok{c}\NormalTok{(}\DataTypeTok{sig_pval =}\NormalTok{ sig_pval, }\DataTypeTok{prc_est =} \KeywordTok{unname}\NormalTok{(prc_est))} \NormalTok{\}} \CommentTok{# containers: } \NormalTok{out_sig =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \NormalTok{out_prc =}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\OtherTok{NA}\NormalTok{, I, N) }\CommentTok{# matrix with I rows and N columns} \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ tmp =}\StringTok{ }\KeywordTok{sim_fit}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_try[n]) }\CommentTok{# run sim} \NormalTok{ out_sig[i,n] =}\StringTok{ 
}\NormalTok{tmp[}\StringTok{"sig_pval"}\NormalTok{] }\CommentTok{# extract and store significance metric} \NormalTok{ out_prc[i,n] =}\StringTok{ }\NormalTok{tmp[}\StringTok{"prc_est"}\NormalTok{] }\CommentTok{# extract and store precision metric} \NormalTok{ \}} \NormalTok{\}} \KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{), }\DataTypeTok{mar =} \KeywordTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{,}\DecValTok{4}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{))} \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out_sig, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of Finding Effect (Power)"}\NormalTok{)} \KeywordTok{plot}\NormalTok{(}\KeywordTok{apply}\NormalTok{(out_prc, }\DecValTok{2}\NormalTok{, mean) }\OperatorTok{~}\StringTok{ }\NormalTok{n_try, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Tagged Fish per Treatment"}\NormalTok{,} \DataTypeTok{ylab =} \StringTok{"Probability of a Precise Estimate"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-96-1} \end{center} It seems that even if you tagged 50 fish per treatment, you would have a 63\% chance of estimating that the mortality rate is between 0.2 and 0.3 if it was truly 0.25. You and your colleagues consider these results and determine that you will need to somehow acquire more funds to tag more fish in the small-scale study in order to have a high level of confidence in the results. \hypertarget{harv-ex}{% \subsection{Harvest Policy Analysis}\label{harv-ex}} In this example, you will simulate population dynamics under a more realistic model than in Sections \ref{for-loops} and \ref{adv-funcs} for the purpose of evaluating different harvest policies. Suppose you are a fisheries research biologist, and a commercial fishery for pink salmon (\emph{Oncorhynchus gorbuscha}) takes place in your district. For the past 10 years, it has been fished with an exploitation rate of 40\% (40\% of the fish that return each year have been harvested, exploitation rate is abbreviated by \(U\)), resulting in an average annual harvest of 8.5 million fish. The management plan is up for evaluation this year, and your supervisor has asked you to prepare an analysis that determines if more harvest could be sustained if a different exploitation rate were to be used in the future. Based on historical data, your best understanding implies that the stock is driven by Ricker spawner-recruit dynamics. That is, the total number of fish that return this year (recruits) is a function of the total number of fish that spawned (spawners) in the year of their birth. The Ricker model can be written this way: \begin{equation} R_t = \alpha S_{t-1} e^{-\beta S_{t-1} + \varepsilon_t} ,\varepsilon_t \sim N(0,\sigma) \label{eq:ricker-ch4} \end{equation} where \(\alpha\) is a parameter representing the maximum recruits per spawner (obtained at very low spawner abundances) and \(\beta\) is a measure of the strength of density-dependent mortality. Notice that the error term is in the exponent, which makes \(e^{\varepsilon_t}\) lognormal. 
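Before plugging in numbers, here is a small illustrative check (not from the original text) of what the multiplicative error term \(e^{\varepsilon_t}\) does, using the \(\sigma = 0.4\) adopted below:

\begin{verbatim}
# illustrative only: behavior of the lognormal error multiplier exp(epsilon)
sigma = 0.4
errors = exp(rnorm(10000, 0, sigma))
median(errors)   # close to 1: about half of all years fall below the deterministic curve
mean(errors)     # close to exp(sigma^2/2), roughly 1.08: the errors are right-skewed
\end{verbatim}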
You have estimates of the parameters\footnote{In reality, these estimates would have substantial uncertainty that you would need to propagate through your harvest policy analysis. In this example, you will ignore this complication}: \begin{itemize} \tightlist \item \(\alpha = 6\) \item \(\beta = 1 \times 10^{-7}\) \item \(\sigma = 0.4\) \end{itemize} You decide that you can build a policy analysis by simulating the stock forward through time under different exploitation rates. With enough iterations of the simulation, you will be able to see whether a different exploitation rate can provide more harvest than what is currently being extracted. First, write a function for your population model. Your function must: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item take the parameters, dimensions (number of years), and the policy variable (\(U\)) as input arguments \item simulate the population using Ricker dynamics \item calculate and return the average harvest and escapement over the number of future years you simulated. \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# Step #1: name the function and give it some arguments} \NormalTok{ricker_sim =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(ny, params, U) \{} \CommentTok{# extract the parameters out by name:} \NormalTok{ alpha =}\StringTok{ }\NormalTok{params[}\StringTok{"alpha"}\NormalTok{]} \NormalTok{ beta =}\StringTok{ }\NormalTok{params[}\StringTok{"beta"}\NormalTok{]} \NormalTok{ sigma =}\StringTok{ }\NormalTok{params[}\StringTok{"sigma"}\NormalTok{]} \CommentTok{# create containers} \CommentTok{# this is a neat trick to condense your code:} \NormalTok{ R =}\StringTok{ }\NormalTok{S =}\StringTok{ }\NormalTok{H =}\StringTok{ }\OtherTok{NULL} \CommentTok{# initialize the population in the first year} \CommentTok{# start the population at being fished at 40%} \CommentTok{# with lognormal error} \NormalTok{ R[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\KeywordTok{log}\NormalTok{(alpha }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\FloatTok{0.4}\NormalTok{))}\OperatorTok{/}\NormalTok{(beta }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\FloatTok{0.4}\NormalTok{)) }\OperatorTok{*}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, sigma))} \NormalTok{ S[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\NormalTok{R[}\DecValTok{1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{U)} \NormalTok{ H[}\DecValTok{1}\NormalTok{] =}\StringTok{ }\NormalTok{R[}\DecValTok{1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{U} \CommentTok{# carry simulation forward through time} \ControlFlowTok{for}\NormalTok{ (y }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{ny) \{} \CommentTok{# use the ricker function with random lognormal noise} \NormalTok{ R[y] =}\StringTok{ }\NormalTok{S[y}\DecValTok{-1}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{alpha }\OperatorTok{*}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{beta }\OperatorTok{*}\StringTok{ }\NormalTok{S[y}\DecValTok{-1}\NormalTok{] }\OperatorTok{+}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, sigma))} \CommentTok{#harvest and spawners are the same as before} \NormalTok{ S[y] =}\StringTok{ }\NormalTok{R[y] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} 
\OperatorTok{-}\StringTok{ }\NormalTok{U)} \NormalTok{ H[y] =}\StringTok{ }\NormalTok{R[y] }\OperatorTok{*}\StringTok{ }\NormalTok{U} \NormalTok{ \}} \CommentTok{# wrap output in a list object} \KeywordTok{list}\NormalTok{(} \DataTypeTok{mean_H =} \KeywordTok{mean}\NormalTok{(H),} \DataTypeTok{mean_S =} \KeywordTok{mean}\NormalTok{(S)} \NormalTok{ )} \NormalTok{\}} \end{Highlighting} \end{Shaded} Use the function once: \begin{Shaded} \begin{Highlighting}[] \NormalTok{params =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DataTypeTok{alpha =} \DecValTok{6}\NormalTok{, }\DataTypeTok{beta =} \FloatTok{1e-7}\NormalTok{, }\DataTypeTok{sigma =} \FloatTok{0.4}\NormalTok{)} \NormalTok{out =}\StringTok{ }\KeywordTok{ricker_sim}\NormalTok{(}\DataTypeTok{U =} \FloatTok{0.4}\NormalTok{, }\DataTypeTok{ny =} \DecValTok{20}\NormalTok{, }\DataTypeTok{params =}\NormalTok{ params)} \CommentTok{#average annual harvest (in millions)} \KeywordTok{round}\NormalTok{(out}\OperatorTok{$}\NormalTok{mean_H}\OperatorTok{/}\FloatTok{1e6}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 8.88 \end{verbatim} If you completed the stochastic power analysis example (Section \ref{power-ex}), you might see where this is going. You are going to replicate applying a fixed policy many times to a random system. This is the Monte Carlo part of the analysis. The policy part is that you will compare the output from several candidate exploitation rates to inform a decision about which is best. This time, set up your analysis using \texttt{sapply()} (to iterate over different values of \(U\)) and \texttt{replicate()} (to iterate over different random populations fished at each \(U\)) instead of performing a nested \texttt{for()} loop as in previous examples: \begin{Shaded} \begin{Highlighting}[] \NormalTok{U_try =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.4}\NormalTok{, }\FloatTok{0.6}\NormalTok{, }\FloatTok{0.01}\NormalTok{)} \NormalTok{n_rep =}\StringTok{ }\DecValTok{2000} \NormalTok{H_out =}\StringTok{ }\KeywordTok{sapply}\NormalTok{(U_try, }\ControlFlowTok{function}\NormalTok{(u) \{} \KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =}\NormalTok{ n_rep, }\DataTypeTok{expr =}\NormalTok{ \{} \KeywordTok{ricker_sim}\NormalTok{(}\DataTypeTok{U =}\NormalTok{ u, }\DataTypeTok{ny =} \DecValTok{20}\NormalTok{, }\DataTypeTok{params =}\NormalTok{ params)}\OperatorTok{$}\NormalTok{mean_H}\OperatorTok{/}\FloatTok{1e6} \NormalTok{ \})} \NormalTok{\})} \end{Highlighting} \end{Shaded} The nested \texttt{replicate()} and \texttt{sapply()} method is a bit cleaner than a nested \texttt{for()} loop, but you have less control over the format of the output. Plot the output of your simulations using a boxplot. To make things easier, give \texttt{H\_out} column names representing the exploitation rate: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{colnames}\NormalTok{(H_out) =}\StringTok{ }\NormalTok{U_try} \KeywordTok{boxplot}\NormalTok{(H_out, }\DataTypeTok{outline =}\NormalTok{ F,} \DataTypeTok{xlab =} \StringTok{"U"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Harvest (Millions of Fish)"}\NormalTok{,} \DataTypeTok{col =} \StringTok{"tomato"}\NormalTok{, }\DataTypeTok{las =} \DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-102-1} \end{center} It appears the stock could produce more harvest than its current 8.5 million fish per year if it was fished harder. 
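The escapement summary used in the next paragraph is not shown as code in the text; a minimal sketch, assuming the \texttt{params}, \texttt{U\_try}, and \texttt{n\_rep} objects defined above, might look like this:

\begin{verbatim}
# sketch: collect mean escapement (mean_S) the same way mean_H was collected above
S_out = sapply(U_try, function(u) {
  replicate(n = n_rep, expr = {
    ricker_sim(U = u, ny = 20, params = params)$mean_S/1e6
  })
})

# plot it just like the harvest output
colnames(S_out) = U_try
boxplot(S_out, outline = F,
        xlab = "U", ylab = "Escapement (Millions of Fish)",
        col = "skyblue", las = 1)
\end{verbatim}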
However, your supervisors also do not want to see the escapement drop below three-quarters of what it has been in recent history (75\% of approximately 13 million fish). They ask you to obtain the expected average annual escapement as well as harvest. You can simply re-run the code above, but extracting \texttt{mean\_S} rather than \texttt{mean\_H}. Call this output \texttt{S\_out} and plot it just like harvest (if you're curious, this blue color is \texttt{col\ =\ "skyblue"}):
\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-103-1} \end{center}
After seeing this information, your supervisor realizes they are faced with a trade-off: the stock could produce more with high exploitation rates, but they are concerned that pushing the stock too low would be unsustainable. They tell you to determine the probability that the average escapement would not be pushed below 75\% of 13 million at each exploitation rate, as well as the probability that the average annual harvests will be at least 20\% greater than they are currently (approximately 8.5 million fish). Given your output, this is easy:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# determine if each element meets escapement criterion}
\NormalTok{Smeet =}\StringTok{ }\NormalTok{S_out }\OperatorTok{>}\StringTok{ }\NormalTok{(}\FloatTok{0.75} \OperatorTok{*}\StringTok{ }\DecValTok{13}\NormalTok{)}
\CommentTok{# determine if each element meets harvest criterion}
\NormalTok{Hmeet =}\StringTok{ }\NormalTok{H_out }\OperatorTok{>}\StringTok{ }\NormalTok{(}\FloatTok{1.2} \OperatorTok{*}\StringTok{ }\FloatTok{8.5}\NormalTok{)}
\CommentTok{# calculate the probability of each occurring at a given exploitation rate}
\CommentTok{# remember, the mean of a logical vector calculates the proportion of TRUEs}
\NormalTok{p_Smeet =}\StringTok{ }\KeywordTok{apply}\NormalTok{(Smeet, }\DecValTok{2}\NormalTok{, mean)}
\NormalTok{p_Hmeet =}\StringTok{ }\KeywordTok{apply}\NormalTok{(Hmeet, }\DecValTok{2}\NormalTok{, mean)}
\end{Highlighting}
\end{Shaded}
You plot this for your supervisor as follows:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# the U levels to highlight on plot}
\NormalTok{plot_U =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\FloatTok{0.4}\NormalTok{, }\FloatTok{0.6}\NormalTok{, }\FloatTok{0.05}\NormalTok{)}
\CommentTok{# create an empty plot}
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mar =} \KeywordTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{,}\DecValTok{4}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))}
\KeywordTok{plot}\NormalTok{(p_Smeet }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet, }\DataTypeTok{type =} \StringTok{"n"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"Probability of Meeting Harvest Criterion"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"Probability of Meeting Escapement Criterion"}\NormalTok{)}
\CommentTok{# add gridlines}
\KeywordTok{abline}\NormalTok{(}\DataTypeTok{v =} \KeywordTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)}
\KeywordTok{abline}\NormalTok{(}\DataTypeTok{h =} \KeywordTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)}
\CommentTok{# draw the trade-off curve}
\KeywordTok{lines}\NormalTok{(p_Smeet }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet, }\DataTypeTok{type =} \StringTok{"l"}\NormalTok{, }\DataTypeTok{lwd =}
\DecValTok{2}\NormalTok{)}
\CommentTok{# add points and text for particular U policies}
\KeywordTok{points}\NormalTok{(p_Smeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U] }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U],}
\DataTypeTok{pch =} \DecValTok{16}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)}
\KeywordTok{text}\NormalTok{(p_Smeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U] }\OperatorTok{~}\StringTok{ }\NormalTok{p_Hmeet[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U],}
\DataTypeTok{labels =}\NormalTok{ U_try[U_try }\OperatorTok{%in%}\StringTok{ }\NormalTok{plot_U], }\DataTypeTok{pos =} \KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-105-1} \end{center}
Equipped with this analysis, your supervisor plans to go to the policy-makers with the recommendation of adjusting the exploitation rate policy to use \(U = 0.5\), because they think it balances the trade-off. Notice that if the status quo were maintained, your model suggests you would have complete certainty of staying where you are now: escapement will remain above 75\% of its current level with a 100\% chance, but you would have no chance of improving harvests by more than 20\% over their current level. Small increases in the exploitation rate (e.g., from 0.4 to 0.45) yield a reasonably large gain in harvest performance with hardly any loss on the escapement criterion. Your supervisor is willing to live with a 90\% chance that the escapement will stay where they desire in order to gain a \textgreater80\% chance of obtaining the desired increase in harvest. The utility of using Monte Carlo methods in this example is the ability to calculate the probability of some event you are interested in. There are analytical (i.e., not simulation-based) solutions that predict the annual harvest and escapement under a fixed \(U\) for a population with parameters \(\alpha\) and \(\beta\), but by incorporating randomness, you were able to obtain the relative weights of outcomes other than the expectation under the deterministic Ricker model, thereby allowing the assignment of probabilities to meeting the two criteria.
\hypertarget{resample-examples}{%
\section{Resampling-Based Examples}\label{resample-examples}}
\hypertarget{boot-test-ex}{%
\subsection{The Bootstrap}\label{boot-test-ex}}
Say you have a fitted model from which you want to propagate the uncertainty in some derived quantity. Consider the case of the \textbf{von Bertalanffy growth model}. This is a non-linear model used to predict the size of an organism (weight or length) based on its age. The model can be written as a non-linear regression model (see Section \ref{nls}):
\begin{equation}
L_i = L_{\infty}\left(1 - e^{-k(age_i-t_0)}\right) + \varepsilon_i, \varepsilon_i \sim N(0, \sigma)
\label{eq:vonB}
\end{equation}
where \(L_i\) and \(age_i\) are the observed length and age of individual \(i\), respectively, and \(L_{\infty}\), \(k\), and \(t_0\) are parameters to be estimated. The interpretations of the parameters are as follows:
\begin{itemize}
\tightlist
\item
  \(L_{\infty}\): the maximum average length achieved
\item
  \(k\): a growth coefficient linked to metabolic rate.
It specifies the rate of increase in length as the fish ages early in life. \item \(t_0\): the theoretical age when length equals zero (the x-intercept). \end{itemize} Use the data set \texttt{growth.csv} for this example (see the \protect\hyperlink{data-sets}{instructions} on acquiring data files). Read in and plot the data: \begin{Shaded} \begin{Highlighting}[] \NormalTok{dat =}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"../Data/growth.csv"}\NormalTok{)} \KeywordTok{plot}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{age, }\DataTypeTok{data =}\NormalTok{ dat, }\DataTypeTok{pch =} \DecValTok{16}\NormalTok{, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-107-1} \end{center} Due to a large amount of variability in individual growth rates, the relationship looks pretty noisy. Notice how you have mostly young fish in your sample: this is characteristic of ``random'' sampling of fish populations. Suppose you would like to obtain the probability that an average-sized fish of each age is sexually mature. You know that fish of this species mature at approximately 450 mm, and you simply need to determine the fraction of all fish at each age that are greater than 450 mm. However, you don't have any observations for some ages (e.g., age 8), so you cannot simply calculate this fraction based on your raw data. You need to fit the von Bertalanffy growth model, then carry the statistical uncertainty from the fitted model forward to the predicted length-at-age. This would be difficult to obtain using only the coefficient estimates and their standard errors, because of the non-linear relationship between the \(x\) and \(y\) variables. Enter the \textbf{bootstrap}, which is a Monte Carlo analysis using an observed data set and a model. The \textbf{pseudocode} for a bootstrap analysis is: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Resample from the original data (with replacement) \item Fit a model of interest \item Derive some quantity of interest from the fitted model \item Repeat steps 1 - 3 many times \item Summarize the randomized quantities from step 4 \end{enumerate} In this example, you will apply a bootstrap approach to obtain the distribution of expected fish lengths at each age, then use these distributions to quantify the probability that an average-sized fish of each age is mature (i.e., greater than 450 mm). You will write a function for each of steps 1 - 3 above. The first is to resample the data: \begin{Shaded} \begin{Highlighting}[] \NormalTok{randomize =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(dat) \{} \CommentTok{# number of observed pairs} \NormalTok{ n =}\StringTok{ }\KeywordTok{nrow}\NormalTok{(dat)} \CommentTok{# sample the rows to determine which will be kept} \NormalTok{ keep =}\StringTok{ }\KeywordTok{sample}\NormalTok{(}\DataTypeTok{x =} \DecValTok{1}\OperatorTok{:}\NormalTok{n, }\DataTypeTok{size =}\NormalTok{ n, }\DataTypeTok{replace =}\NormalTok{ T)} \CommentTok{# retrieve these rows from the data} \NormalTok{ dat[keep,]} \NormalTok{\}} \end{Highlighting} \end{Shaded} Notice the use of \texttt{replace\ =\ T} here: without it, there would be no bootstrap. You would just sample the same observations over and over; only their order in the rows would be shuffled.
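To see exactly what \texttt{replace\ =\ T} changes, here is a minimal sketch using a small made-up vector (not the growth data):

\begin{verbatim}
x = c(10, 20, 30, 40, 50)

# without replacement: every value appears exactly once, only the order changes
sample(x)

# with replacement: some values can repeat and others can be left out entirely,
# which is what makes each bootstrapped data set differ from the original
sample(x, replace = T)
\end{verbatim}

This random duplication and omission of rows is what allows the spread of the bootstrapped estimates to approximate the sampling uncertainty of the fitted model.
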
Next, write a function to fit the model (revisit Section \ref{nls} for more details on \texttt{nls()}): \begin{Shaded} \begin{Highlighting}[] \NormalTok{fit_vonB =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(dat) \{} \KeywordTok{nls}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{linf }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{k }\OperatorTok{*}\StringTok{ }\NormalTok{(age }\OperatorTok{-}\StringTok{ }\NormalTok{t0))),} \DataTypeTok{data =}\NormalTok{ dat,} \DataTypeTok{start =} \KeywordTok{c}\NormalTok{(}\DataTypeTok{linf =} \DecValTok{600}\NormalTok{, }\DataTypeTok{k =} \FloatTok{0.3}\NormalTok{, }\DataTypeTok{t0 =} \FloatTok{-0.2}\NormalTok{)} \NormalTok{ )} \NormalTok{\}} \end{Highlighting} \end{Shaded} This function will return a fitted model object when executed. Next, write a function to predict mean length-at-age: \begin{Shaded} \begin{Highlighting}[] \CommentTok{# create a vector of ages} \NormalTok{ages =}\StringTok{ }\KeywordTok{min}\NormalTok{(dat}\OperatorTok{$}\NormalTok{age)}\OperatorTok{:}\KeywordTok{max}\NormalTok{(dat}\OperatorTok{$}\NormalTok{age)} \NormalTok{pred_vonB =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(fit) \{} \CommentTok{# extract the coefficients} \NormalTok{ ests =}\StringTok{ }\KeywordTok{coef}\NormalTok{(fit)} \CommentTok{# predict length-at-age} \NormalTok{ ests[}\StringTok{"linf"}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{ests[}\StringTok{"k"}\NormalTok{] }\OperatorTok{*}\StringTok{ }\NormalTok{(ages }\OperatorTok{-}\StringTok{ }\NormalTok{ests[}\StringTok{"t0"}\NormalTok{])))} \NormalTok{\}} \end{Highlighting} \end{Shaded} Notice your function will use the object \texttt{ages} even though it was not defined in the function. This has to do with \textbf{lexical scoping} and \textbf{environments}, which are beyond the scope of this introductory material. If you'd like more details, see the section in \citet{adv-r-cite} on it\footnote{The section on \textbf{lexical scoping} is found here: \url{http://adv-r.had.co.nz/Functions.html\#lexical-scoping}}. Basically, if an object with the same name as one defined in the function exists outside of the function, the function will use the one that is defined within the function. If there is no object defined in the function with that name, it will look outside of the function for that object. 
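As a quick illustration of this lookup behavior, here is a minimal sketch with made-up objects (unrelated to the growth example):

\begin{verbatim}
z = 10                  # defined outside of any function

f1 = function() z       # no z inside f1, so R looks outside and finds 10

f2 = function() {
  z = 99                # a z defined inside the function masks the outside one
  z
}

f1()   # returns 10
f2()   # returns 99; the z outside the function is unchanged
\end{verbatim}

In \texttt{pred\_vonB()}, \texttt{ages} plays the role of \texttt{z} in \texttt{f1()}: it is not defined inside the function, so R finds the version defined in your workspace.
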
Now, use these three functions to perform one iteration: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{pred_vonB}\NormalTok{(}\DataTypeTok{fit =} \KeywordTok{fit_vonB}\NormalTok{(}\DataTypeTok{dat =} \KeywordTok{randomize}\NormalTok{(}\DataTypeTok{dat =}\NormalTok{ dat)))} \end{Highlighting} \end{Shaded} You can wrap this inside of a \texttt{replicate()} call to perform step 4 above: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{2}\NormalTok{)} \NormalTok{out =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{100}\NormalTok{, }\DataTypeTok{expr =}\NormalTok{ \{} \KeywordTok{pred_vonB}\NormalTok{(}\DataTypeTok{fit =} \KeywordTok{fit_vonB}\NormalTok{(}\DataTypeTok{dat =} \KeywordTok{randomize}\NormalTok{(}\DataTypeTok{dat =}\NormalTok{ dat)))} \NormalTok{\})} \KeywordTok{dim}\NormalTok{(out)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10 100 \end{verbatim} It appears the rows are different ages and the columns are different bootstrapped iterations. Summarize the random lengths at each age: \begin{Shaded} \begin{Highlighting}[] \NormalTok{summ =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{c}\NormalTok{(}\DataTypeTok{mean =} \KeywordTok{mean}\NormalTok{(x), }\KeywordTok{quantile}\NormalTok{(x, }\KeywordTok{c}\NormalTok{(}\FloatTok{0.025}\NormalTok{, }\FloatTok{0.975}\NormalTok{))))} \end{Highlighting} \end{Shaded} Plot the data, the summarized ranges of mean lengths, and the length at which all fish are assumed to be mature (450 mm) \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(length }\OperatorTok{~}\StringTok{ }\NormalTok{age, }\DataTypeTok{data =}\NormalTok{ dat, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{16}\NormalTok{,} \DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, }\KeywordTok{max}\NormalTok{(dat}\OperatorTok{$}\NormalTok{length, summ[}\StringTok{"97.5%"}\NormalTok{,])),} \DataTypeTok{ylab =} \StringTok{"Length (mm)"}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Age (years)"}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"mean"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{lwd =} \DecValTok{2}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"2.5%"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{lines}\NormalTok{(summ[}\StringTok{"97.5%"}\NormalTok{,] }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{abline}\NormalTok{(}\DataTypeTok{h =} \DecValTok{450}\NormalTok{, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-115-1} \end{center} Obtain the fraction of iterations that resulted in the mean length-at-age being greater than 450 mm. 
This is interpreted as the probability that the average-sized fish of each age is mature: \begin{Shaded} \begin{Highlighting}[] \NormalTok{p_mat =}\StringTok{ }\KeywordTok{apply}\NormalTok{(out, }\DecValTok{1}\NormalTok{, }\ControlFlowTok{function}\NormalTok{(x) }\KeywordTok{mean}\NormalTok{(x }\OperatorTok{>}\StringTok{ }\DecValTok{450}\NormalTok{))} \KeywordTok{plot}\NormalTok{(p_mat }\OperatorTok{~}\StringTok{ }\NormalTok{ages, }\DataTypeTok{type =} \StringTok{"b"}\NormalTok{, }\DataTypeTok{pch =} \DecValTok{17}\NormalTok{,} \DataTypeTok{xlab =} \StringTok{"Age (years)"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"Probability of Average Fish Mature"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-117-1} \end{center} This \textbf{maturity schedule} can be used by fishery managers in attempting to decide which ages should be allowed to be harvested and which should be allowed to grow more\footnote{possibly in a \textbf{yield-per-recruit} analysis}. Because each age has an associated expected length, managers can use what they know about the size selectivity of various gear types to set policies that attempt to target some ages more than others. \hypertarget{perm-test-ex}{% \subsection{Permutation Test}\label{perm-test-ex}} In the previous example (Section \ref{boot-test-ex}), you learned about the bootstrap. A related Monte Carlo analysis is the \textbf{permutation test}. This is a non-parametric statistical test used to determine if there is a statistically-significant difference in the mean of some quantity between two populations. It is used in cases where the assumptions of a generalized linear model may not be met, but a p-value is still required. The \textbf{pseudocode} for the permutation test is: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Calculate the difference between means based on the original data set \item Shuffle the group assignments randomly among the observations \item Calculate the difference between the randomly-assigned groups \item Repeat steps 2 - 3 many times. This builds the \textbf{null distribution}: the distribution of the test statistic (the difference) assuming the null hypothesis (that there is no difference) in means is true \item Determine what fraction of the absolute differences were larger than the original difference. This constitutes a \textbf{two-tailed} p-value. One-tailed tests can also be derived using the same steps 1 - 4, which is left as an exercise. \end{enumerate} Use the data set \texttt{ponds.csv} for this example (see the \protect\hyperlink{data-sets}{instructions} on acquiring data files). This is the same data set used for \protect\hyperlink{ex1b}{Exercise 1B}, revisit that exercise for details on this hypothetical data set. Read in and plot the data: \begin{Shaded} \begin{Highlighting}[] \NormalTok{dat =}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"ponds.csv"}\NormalTok{)} \KeywordTok{plot}\NormalTok{(chl.a }\OperatorTok{~}\StringTok{ }\NormalTok{treatment, }\DataTypeTok{data =}\NormalTok{ dat)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-119-1} \end{center} It appears as though there is a relatively strong signal indicating a difference. Use the permutation test to determine if it is statistically significant. 
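For reference, the standard parametric counterpart to this comparison is a two-sample t-test. The following is a minimal sketch (assuming the \texttt{dat} object with its \texttt{chl.a} and \texttt{treatment} columns as read in above); comparing its p-value with the permutation p-value you obtain below is a useful sanity check:

\begin{verbatim}
# Welch two-sample t-test: the parametric analogue of the permutation test
t.test(chl.a ~ treatment, data = dat)
\end{verbatim}

Unlike the t-test, the permutation test makes no normality assumption, which is why it is attractive when sample sizes are small or the residuals are clearly non-normal.
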
Step 1 from the pseudocode is to calculate the observed difference between groups: \begin{Shaded} \begin{Highlighting}[] \NormalTok{Dobs =}\StringTok{ }\KeywordTok{mean}\NormalTok{(dat}\OperatorTok{$}\NormalTok{chl.a[dat}\OperatorTok{$}\NormalTok{treatment }\OperatorTok{==}\StringTok{ "Add"}\NormalTok{]) }\OperatorTok{-}\StringTok{ }\KeywordTok{mean}\NormalTok{(dat}\OperatorTok{$}\NormalTok{chl.a[dat}\OperatorTok{$}\NormalTok{treatment }\OperatorTok{==}\StringTok{ "Control"}\NormalTok{])} \NormalTok{Dobs} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 26.166 \end{verbatim} Write a function to perform one iteration of steps 2 - 3 from the pseudocode: \begin{Shaded} \begin{Highlighting}[] \CommentTok{# x is the group: Add or Control} \CommentTok{# y is chl.a} \NormalTok{perm =}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x, y) \{} \CommentTok{# turn x to a character, easier to deal with} \NormalTok{ x =}\StringTok{ }\KeywordTok{as.character}\NormalTok{(x)} \CommentTok{# shuffle the x values:} \NormalTok{ x_shuff =}\StringTok{ }\KeywordTok{sample}\NormalTok{(x)} \CommentTok{# calculate the mean of each group:} \NormalTok{ x_bar_add =}\StringTok{ }\KeywordTok{mean}\NormalTok{(y[x_shuff }\OperatorTok{==}\StringTok{ "Add"}\NormalTok{])} \NormalTok{ x_bar_ctl =}\StringTok{ }\KeywordTok{mean}\NormalTok{(y[x_shuff }\OperatorTok{==}\StringTok{ "Control"}\NormalTok{])} \CommentTok{# calculate the difference:} \NormalTok{ x_bar_add }\OperatorTok{-}\StringTok{ }\NormalTok{x_bar_ctl} \NormalTok{\}} \end{Highlighting} \end{Shaded} Use your function once: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{perm}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{treatment, }\DataTypeTok{y =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{chl.a)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10.648 \end{verbatim} Perform step 4 from the pseudocode by replicating your \texttt{perm()} function many times: \begin{Shaded} \begin{Highlighting}[] \NormalTok{Dnull =}\StringTok{ }\KeywordTok{replicate}\NormalTok{(}\DataTypeTok{n =} \DecValTok{5000}\NormalTok{, }\DataTypeTok{expr =} \KeywordTok{perm}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{treatment, }\DataTypeTok{y =}\NormalTok{ dat}\OperatorTok{$}\NormalTok{chl.a))} \end{Highlighting} \end{Shaded} Plot the distribution of the null test statistic and draw a line where the originally-observed difference falls: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{hist}\NormalTok{(Dnull, }\DataTypeTok{col =} \StringTok{"grey"}\NormalTok{)} \KeywordTok{abline}\NormalTok{(}\DataTypeTok{v =}\NormalTok{ Dobs, }\DataTypeTok{col =} \StringTok{"blue"}\NormalTok{, }\DataTypeTok{lwd =} \DecValTok{3}\NormalTok{, }\DataTypeTok{lty =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-125-1} \end{center} Notice the null distribution is centered on zero: this is because the null hypothesis is that there is no difference. The observation (blue line) falls way in the upper tail of the null distribution, indicating it is unlikely that an effect that large was observed by random chance. 
The two-tailed p-value can be calculated as: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{mean}\NormalTok{(}\KeywordTok{abs}\NormalTok{(Dnull) }\OperatorTok{>=}\StringTok{ }\NormalTok{Dobs)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0 \end{verbatim} Very few (or zero) of the random data sets resulted in a difference greater than what was observed, indicating there is statistical support to the hypothesis that there is a non-zero difference between the two nutrient treatments. \hypertarget{population-dynamics-1}{% \section{Population dynamics}\label{population-dynamics-1}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{source}\NormalTok{(}\StringTok{"./R/Rcode/Final_report_Davidson2017.R"}\NormalTok{, }\DataTypeTok{echo =} \OtherTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## > library(boot) ## ## > library(tidyverse) ## ## > library(dplyr) ## ## > library(ggplot2) ## ## > library(qpcR) ## ## > library(pwr) ## ## > library(ggthemes) ## ## > library(gridExtra) ## ## > Data <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > Year <- unique(Data$Calves.1) ## ## > year2010a <- c(3, 3, 2) ## ## > year2010 <- filter(Data, Calves.1 < 2011) ## ## > year2010 <- year2010$Interval.1[!is.na(year2010$Interval.1)] ## ## > year2011a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2) ## ## > year2011 <- filter(Data, Calves.1 < 2012) ## ## > year2011 <- year2011$Interval.1[!is.na(year2011$Interval.1)] ## ## > year2012a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3) ## ## > year2012 <- filter(Data, Calves.1 < 2013) ## ## > year2012 <- year2012$Interval.1[!is.na(year2012$Interval.1)] ## ## > year2013a <- c(3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, ## + 3, 3, 2, 6, 4, 4, 4, 4, 4, 3, 3, 3, 3, 6, 5, 4, 4, 4, 4, ## + 4, 3, 3, 3, 3, 3, 3, 3, 3, .... 
[TRUNCATED] ## ## > full <- c(Data$Interval.1, Data$Interval.2) ## ## > year2013 <- full[!is.na(unlist(full))] ## ## > mean2010 <- sum(year2010)/length(year2010) ## ## > s2010 <- sd(year2010) ## ## > SE2010 <- s2010/(sqrt(length(year2010))) ## ## > n2010 <- (length(year2010)) ## ## > low.qt2010 <- mean2010 - (qt(0.975, length(year2010)) * ## + SE2010) ## ## > high.qt2010 <- mean2010 + (qt(0.975, length(year2010)) * ## + SE2010) ## ## > mean2011 <- sum(year2011)/length(year2011) ## ## > s2011 <- sd(year2011) ## ## > SE2011 <- s2011/(sqrt(length(year2011))) ## ## > n2011 <- (length(year2011)) ## ## > low.qt2011 <- mean2011 - (qt(0.975, length(year2011)) * ## + SE2011) ## ## > high.qt2011 <- mean2011 + (qt(0.975, length(year2011)) * ## + SE2011) ## ## > mean2012 <- sum(year2012)/length(year2012) ## ## > s2012 <- sd(year2012) ## ## > SE2012 <- s2012/(sqrt(length(year2012))) ## ## > n2012 <- (length(year2012)) ## ## > low.qt2012 <- mean2012 - (qt(0.975, length(year2012)) * ## + SE2012) ## ## > high.qt2012 <- mean2012 + (qt(0.975, length(year2012)) * ## + SE2012) ## ## > mean2013 <- sum(year2013)/length(year2013) ## ## > s2013 <- sd(year2013) ## ## > SE2013 <- s2013/(sqrt(length(year2013))) ## ## > n2013 <- (length(year2013)) ## ## > low.qt2013 <- mean2013 - (qt(0.975, length(year2013)) * ## + SE2013) ## ## > high.qt2013 <- mean2013 + (qt(0.975, length(year2013)) * ## + SE2013) ## ## > n <- c(length(year2010), length(year2011), length(year2012), ## + length(year2013)) ## ## > mY <- c(mean(year2010), mean(year2011), mean(year2012), ## + mean(year2013)) ## ## > year <- Year ## ## > low.qt <- c(low.qt2010, low.qt2011, low.qt2012, low.qt2013) ## ## > high.qt <- c(high.qt2010, high.qt2011, high.qt2012, ## + high.qt2013) ## ## > sd <- c(s2010, s2011, s2012, s2013) ## ## > sum.dat <- cbind(year, n, mY, low.qt, high.qt, sd) ## ## > sum.dat <- as.data.frame(sum.dat) ## ## > library(knitr) ## ## > kable(sum.dat, format = "markdown") ## ## ## | year| n| mY| low.qt| high.qt| sd| ## |----:|--:|--------:|--------:|--------:|---------:| ## | 2010| 3| 2.666667| 1.605851| 3.727482| 0.5773503| ## | 2011| 15| 2.866667| 2.673022| 3.060312| 0.3518658| ## | 2012| 25| 3.240000| 2.919170| 3.560830| 0.7788881| ## | 2013| 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > ggplot(sum.dat, aes(y = mY, x = year)) + geom_point() + ## + geom_line() + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), ## + width = 0.1) + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-1} \end{center} \begin{verbatim} ## ## > par(mfrow = c(2, 2)) ## ## > plot(factor(year2010), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "a)", sub = "Sample size 3", ylab = "Frequency", ## + xlab = "Calving interval", cex.main = 1.5, font.main = 4, ## + col.main = "bl ..." ... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2011), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "b)", sub = "Sample size 15", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > plot(factor(year2012), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{verbatim} ## ## > title(main = "c)", sub = "Sample size 25", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... 
[TRUNCATED] ## ## > box() ## ## > plot(factor(year2013), xlim = c(0, 6), ylim = c(0, ## + 40)) \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-2} \end{center} \begin{verbatim} ## ## > title(main = "d)", sub = "Sample size 45", ylab = "Frequency", ## + xlab = "Calving interval", col.main = 4, cex.main = 1.5, ## + font.main = 4, .... [TRUNCATED] ## ## > box() ## ## > library(qpcR) ## ## > rawdata <- qpcR:::cbind.na(year2010, year2011, year2012, ## + year2013) ## ## > rawdata <- as.data.frame(rawdata) ## ## > year2010 <- data.frame(year2010, year = c("2010")) ## ## > year2010 <- rename(year2010, interval = year2010, ## + year = year) ## ## > year2011 <- data.frame(year2011, year = c("2011")) ## ## > year2011 <- rename(year2011, interval = year2011, ## + year = year) ## ## > year2012 <- data.frame(year2012, year = c("2012")) ## ## > year2012 <- rename(year2012, interval = year2012, ## + year = year) ## ## > year2013 <- data.frame(year2013, year = c("2013")) ## ## > year2013 <- rename(year2013, interval = year2013, ## + year = year) ## ## > ggplotraw <- rbind(year2010, year2011, year2012, year2013) ## ## > ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval)) ## ## > ggplot(year2013, aes(x = interval)) + geom_bar(alpha = 1, ## + width = 0.9, fill = "black") + xlab(expression("Calving" ~ ## + "interval" ~ (ita .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-3} \end{center} \begin{verbatim} ## ## > RealCI <- as.numeric(year2013$interval) ## ## > xlong <- RealCI ## ## > meanlong <- sum(xlong)/length(xlong) ## ## > slong <- sd(xlong) ## ## > SElong <- slong/(sqrt(length(xlong))) ## ## > nlong <- (length(xlong)) ## ## > lowqtlong <- meanlong - (qt(0.975, nlong) * SElong) ## ## > highqtlong <- meanlong + (qt(0.975, nlong) * SElong) ## ## > MedCI <- c(RealCI[RealCI < 5], 3, 3, 3, 3, 2, 3) ## ## > xmed <- MedCI ## ## > meanmed <- sum(xmed)/length(xmed) ## ## > smed <- sd(xmed) ## ## > SEmed <- smed/(sqrt(length(xmed))) ## ## > nmed <- (length(xmed)) ## ## > lowqtmed <- meanmed - (qt(0.975, length(xmed)) * SEmed) ## ## > highqtmed <- meanmed + (qt(0.975, length(xmed)) * ## + SEmed) ## ## > LowCI <- c(RealCI[RealCI < 4], 3, 3, 3, 3, 3, 2, 2, ## + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2) ## ## > xshort <- LowCI ## ## > meanshort <- mean(xshort) ## ## > sshort <- sd(xshort) ## ## > SEshort <- sshort/(sqrt(length(xshort))) ## ## > lowqtshort <- meanshort - (qt(0.975, length(xshort)) * ## + SEshort) ## ## > highqtshort <- meanshort + (qt(0.975, length(xshort)) * ## + SEshort) ## ## > bdata <- qpcR:::cbind.na(RealCI, MedCI, LowCI) ## ## > bdata <- as.data.frame(bdata) ## ## > par(mfrow = c(1, 3)) ## ## > plot(factor(bdata$LowCI), main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$MedCI), main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(factor(bdata$RealCI), main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-4} \end{center} \begin{verbatim} ## ## > par(mfrow = c(3, 1)) ## ## > plot(density(as.numeric(as.character(LowCI)), bw = 0.5), ## + main = "Lowest possible interval") \end{verbatim} \begin{verbatim} ## ## > plot(density(as.numeric(as.character(MedCI)), bw = 0.5), ## + main = "Medium possible interval") \end{verbatim} \begin{verbatim} ## ## > 
plot(density(as.numeric(as.character(RealCI)), bw = 0.5), ## + main = "Observed interval") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-5} \end{center} \begin{verbatim} ## ## > Sumtable <- data.frame(variable = c("low.qt", "mean", ## + "high.qt", "sd", "SE"), short = c(lowqtshort, meanshort, ## + highqtshort, sshort, SE .... [TRUNCATED] ## ## > n <- c(length(LowCI), length(MedCI), length(year2013$interval)) ## ## > mY <- c(mean(LowCI), mean(MedCI), mean(year2013$interval)) ## ## > interval <- c("Low", "Medium", "Observed") ## ## > low.qt <- c(lowqtshort, lowqtmed, low.qt2013) ## ## > high.qt <- c(highqtshort, highqtmed, high.qt2013) ## ## > sd <- c(sshort, smed, s2013) ## ## > Sumtable <- cbind(interval, n, mY, low.qt, high.qt, ## + sd) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-6} \end{center} \begin{verbatim} ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:--------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## ## > library(knitr) ## ## > srwdat <- read.csv(file = "./R/Data/srw_data.csv") ## ## > kable(srwdat, format = "markdown", col.names = c("Sample size", ## + "Mean", "Lower limit", "Higher limit", "SE", "Author", "Location")) ## ## ## | Sample size| Mean| Lower limit| Higher limit| SE|Author |Location | ## |-----------:|----:|-----------:|------------:|----:|:------------------|:---------------------------------| ## | NA| 3.12| 3.07| 3.17| NA|Best et al. 2001 |South Africa | ## | 1504| 3.15| 3.11| 3.18| NA|Best et al. 2005 |South Africa (1971-2003 Updated) | ## | NA| 3.16| 3.13| 3.19| NA|Brandao et al 2010 |South Africa ( 1971-2006 Updated) | ## | NA| 3.35| NA| NA| 0.05|Cooke et al. 2001 |Argentina | ## | 749| 3.42| NA| NA| 0.11|Cooke et al. 
2003 |Argentina | ## | NA| 3.63| NA| NA| 0.13|Burnell 2001 |Australia | ## ## > SAreps <- 1500 ## ## > ARreps <- 800 ## ## > Aussiereps <- 2000 ## ## > low <- 1000 ## ## > verylow <- 100 ## ## > lowest <- 10 ## ## > par(mfrow = c(2, 3)) ## ## > plot(factor(sample(year2013$interval, lowest, replace = T)), ## + main = "3 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, verylow, replace = T)), ## + main = "10 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, low, replace = T)), ## + main = "30 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, Aussiereps, ## + replace = T)), main = "500 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, ARreps, replace = T)), ## + main = "800 intervals") \end{verbatim} \begin{verbatim} ## ## > plot(factor(sample(year2013$interval, SAreps, replace = T)), ## + main = "1500 intervals") \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-7} \end{center} \begin{verbatim} ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > var10 <- paste0("n_", 1:10) ## ## > sample10 <- matrix(data = NA, ncol = lowest, nrow = boots) ## ## > colnames(sample10) <- as.list(var10) ## ## > for (i in 1:boots) { ## + sample10[i, ] <- sample(year2013$interval, lowest, replace = T) ## + } ## ## > sample10 <- as.data.frame(sample10) ## ## > sample10 <- sample10 %>% mutate(mean10 = rowMeans(sample10)) ## ## > sample10t <- as.matrix(sample10) ## ## > sample10t <- t(sample10t) ## ## > var100 <- paste0("n_", 1:100) ## ## > sample100 <- matrix(data = NA, ncol = verylow, nrow = boots) ## ## > colnames(sample100) <- as.list(var100) ## ## > for (i in 1:boots) { ## + sample100[i, ] <- sample(year2013$interval, verylow, replace = T) ## + } ## ## > sample100 <- as.data.frame(sample100) ## ## > sample100 <- sample100 %>% mutate(mean100 = rowMeans(sample100)) ## ## > var500 <- paste0("n_", 1:500) ## ## > sample500 <- matrix(data = NA, ncol = 500, nrow = boots) ## ## > colnames(sample500) <- as.list(var500) ## ## > for (i in 1:boots) { ## + sample500[i, ] <- sample(year2013$interval, 500, replace = T) ## + } ## ## > sample500 <- as.data.frame(sample500) ## ## > sample500 <- sample500 %>% mutate(mean500 = rowMeans(sample500)) ## ## > var1000 <- paste0("n_", 1:1000) ## ## > sample1000 <- matrix(data = NA, ncol = low, nrow = boots) ## ## > colnames(sample1000) <- as.list(var1000) ## ## > for (i in 1:boots) { ## + sample1000[i, ] <- sample(year2013$interval, low, replace = T) ## + } ## ## > sample1000 <- as.data.frame(sample1000) ## ## > sample1000 <- sample1000 %>% mutate(mean1000 = rowMeans(sample1000)) ## ## > varA <- paste0("n_", 1:2000) ## ## > sampleA <- matrix(data = NA, ncol = Aussiereps, nrow = boots) ## ## > colnames(sampleA) <- as.list(varA) ## ## > for (i in 1:boots) { ## + sampleA[i, ] <- sample(year2013$interval, Aussiereps, replace = T) ## + } ## ## > sampleA <- as.data.frame(sampleA) ## ## > sampleA <- sampleA %>% mutate(meanA = rowMeans(sampleA)) ## ## > sampleAt <- t(sampleA) ## ## > for (i in c(1:ncol(sampleA))) { ## + sampleA[, i] <- as.numeric(as.character(sampleA[, i])) ## + } ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > ab <- sort(sampleA$meanA) ## ## > nab <- length(ab) ## ## > ab2.5 <- ab[25] ## ## > ab0.97.5 <- ab[975] ## ## > par(mfrow = c(1, 1)) ## ## > 
plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 1, main = "", lwd = 5, ylim = c(0, 8), xlim = c(2, ## + 4.5), axes = FAL .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-8} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 2, lwd = 4) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 3, lwd = 3) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 4, lwd = 2) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > legend("topright", title = "Legend", c("n=10, cv=8.12 ", ## + "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"), ## + b .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > plot(density(sample10$mean10, bw = 0.05), col = "black", ## + lty = 3, main = "", lwd = 1, ylim = c(0, 8), xlim = c(2.5, ## + 4.5), axes = F .... [TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-9} \end{center} \begin{verbatim} ## ## > lines(density(sample100$mean100, bw = 0.05), col = "black", ## + lty = 4, lwd = 1) ## ## > lines(density(sample500$mean500, bw = 0.05), col = "black", ## + lty = 5, lwd = 1) ## ## > lines(density(sample1000$mean1000, bw = 0.05), col = "black", ## + lty = 2, lwd = 1) ## ## > lines(density(sampleA$meanA, bw = 0.05), col = "black", ## + lty = 1, lwd = 2) ## ## > legend(y = 8, x = 3.9, title = expression(bold("Sample size (n)")), ## + c(expression(italic("n") ~ "=" ~ "10"), expression(italic("n") ~ ## + .... [TRUNCATED] ## ## > axis(1, lwd = 2) ## ## > axis(2, lwd = 2) ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > pwr.test.results <- power.t.test(n = 45, delta = seq(0, ## + 0.99, 0.001), sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.0 .... [TRUNCATED] ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$delta)) ## ## > colnames(pwr.analysis) <- c("Power", "Mean.difference") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power, Mean.estimate = 3.31 + Mean.difference) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Mean.difference Alpha Mean.estimate ## 1 0.9501505 0.593 0.04984946 3.903 ## ## > ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 3.903, ## + col = "blue" .... 
[TRUNCATED] \end{verbatim} \begin{verbatim} ## ## > rev.one <- bdata$RealCI[1:45] ## ## > sample.true <- year2013$interval ## ## > diff <- 3.63 - 3.31 ## ## > pwr.test.results <- power.t.test(n = seq(1, 200, 1), ## + delta = diff, sd = sd(sample.true), alternative = "one.sided", ## + sig.level = 0.05) \end{verbatim} \begin{verbatim} ## Warning in qt(sig.level/tside, nu, lower.tail = FALSE): NaNs produced \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-10} \end{center} \begin{verbatim} ## ## > pwr.analysis <- as.data.frame(cbind(pwr.test.results$power, ## + pwr.test.results$n)) ## ## > colnames(pwr.analysis) <- c("Power", "Sample.size") ## ## > pwr.analysis.1 <- pwr.analysis %>% mutate(Alpha = 1 - ## + Power) ## ## > a <- filter(pwr.analysis.1, Alpha < 0.05) ## ## > a[1, ] ## Power Sample.size Alpha ## 1 0.9503366 153 0.0496634 ## ## > ggplot(data = pwr.analysis.1, aes(x = Sample.size, ## + y = Alpha)) + geom_line(size = 1.5) + geom_vline(xintercept = 45, ## + col = "red") + ge .... [TRUNCATED] \end{verbatim} \begin{verbatim} ## Warning: Removed 1 rows containing missing values (geom_path). \end{verbatim} \begin{verbatim} ## ## > dat <- read.csv("./R/Data/raw_observations_2012.csv") ## ## > glimpse(dat) ## Observations: 180 ## Variables: 10 ## $ ID <fct> AI06006, AI06007, AI06015, AI06022, AI06038, AI100... ## $ X2006 <int> 1, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2007 <int> 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2008 <int> 1, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,... ## $ X2009 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2010 <int> 0, 0, 2, 0, 0, 6, 6, 5, 5, 3, 4, 2, 5, 4, 5, 3, 2,... ## $ X2011 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ X2012 <int> 0, 1, 2, 4, 0, 0, 0, 0, 0, 0, 5, 0, 0, 12, 0, 0, 0... ## $ total <int> 2, 4, 8, 5, 3, 6, 7, 5, 5, 3, 9, 2, 5, 16, 6, 3, 2... ## $ X..yrs.seen <int> 2, 3, 5, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1,... ## ## > head(dat) ## ID X2006 X2007 X2008 X2009 X2010 X2011 X2012 total X..yrs.seen ## 1 AI06006 1 0 1 0 0 0 0 2 2 ## 2 AI06007 2 1 0 0 0 0 1 4 3 ## 3 AI06015 2 1 1 0 2 0 2 8 5 ## 4 AI06022 1 0 0 0 0 0 4 5 2 ## 5 AI06038 1 0 2 0 0 0 0 3 2 ## 6 AI10040 0 0 0 0 6 0 0 6 1 ## ## > dat1 <- read.csv("./R/Data/RawCI.csv", header = T, ## + quote = "\"") ## ## > glimpse(dat1) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... 
## ## > dat3 <- dplyr::select(dat, ID, X2006:X2012) %>% gather(year, ## + count, X2006:X2012) ## ## > dat4 <- full_join(dat3, dat1, by = "ID") \end{verbatim} \begin{verbatim} ## Warning: Column `ID` joining factors with different levels, coercing to ## character vector \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-11} \end{center} \begin{verbatim} ## ## > dat5 <- dplyr::select(dat4, ID, year, count, Yr.first.seen, ## + Calves, Calves.1, Calves.2) ## ## > dat6 <- filter(dat5, count > 0) ## ## > glimpse(dat6) ## Observations: 237 ## Variables: 7 ## $ ID <chr> "AI06006", "AI06007", "AI06015", "AI06022", "AI0... ## $ year <chr> "X2006", "X2006", "X2006", "X2006", "X2006", "X2... ## $ count <int> 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ## $ Yr.first.seen <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves <int> NA, NA, NA, 2006, NA, NA, NA, 2007, 2007, NA, NA... ## $ Calves.1 <int> NA, NA, NA, 2012, NA, NA, NA, 2013, 2010, NA, NA... ## $ Calves.2 <int> NA, NA, NA, NA, NA, NA, NA, NA, 2013, NA, NA, 20... ## ## > dat7 <- mutate(dat6, year = ifelse(year == "X2006", ## + "2006", year), year = ifelse(year == "X2007", "2007", year), ## + year = ifelse(year == .... [TRUNCATED] ## ## > a <- group_by(dat7, ID, Yr.first.seen) %>% mutate(mother = ifelse(Yr.first.seen > ## + 0, 1, 0)) %>% filter(mother == 1) %>% ungroup() %>% dplyr:: .... [TRUNCATED] ## ## > a ## # A tibble: 1 x 4 ## ID year Calves Calves.1 ## <chr> <chr> <int> <int> ## 1 AI09216 2007 2009 2011 ## ## > greater.than.2 <- sample.true[sample.true > 2] ## ## > mean.2 <- sum(greater.than.2)/length(greater.than.2) ## ## > s.2 <- sd(greater.than.2) ## ## > SE.2 <- s2013/(sqrt(length(greater.than.2))) ## ## > n.2 <- length(greater.than.2) ## ## > low.qt.2 <- mean.2 - (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > high.qt.2 <- mean.2 + (qt(0.975, length(greater.than.2)) * ## + SE.2) ## ## > Sumtable[4, ] <- c("miss2year", n.2, mean.2, low.qt.2, ## + high.qt.2, sd(greater.than.2)) ## ## > boots <- 1000 ## ## > n <- c(1:1000) ## ## > detect1 <- 44 ## ## > detect2 <- 42 ## ## > detect3 <- 40 ## ## > sample2 <- rep(NA, 1000) ## ## > sample5 <- rep(NA, 1000) ## ## > sample10 <- rep(NA, 1000) ## ## > for (i in 1:boots) { ## + sample2[i] <- mean(sample(year2013$interval, detect1, replace = T)) ## + sample5[i] <- mean(sample(year2013$interval, de .... [TRUNCATED] ## ## > sample2 <- sort(sample2) ## ## > sample2.2.5 <- sample2[25] ## ## > sample2.50 <- sample2[500] ## ## > sample2.975 <- sample2[975] ## ## > sample5 <- sort(sample5) ## ## > sample5.2.5 <- sample5[25] ## ## > sample5.50 <- sample5[500] ## ## > sample5.975 <- sample5[975] ## ## > sample10 <- sort(sample10) ## ## > sample10.2.5 <- sample10[25] ## ## > sample10.50 <- sample10[500] ## ## > sample10.975 <- sample10[975] ## ## > Sumtable[5, ] <- c("detect1", detect1, sample2.50, ## + sample2.2.5, sample2.975, NA) ## ## > Sumtable[6, ] <- c("detect2", detect2, sample5.50, ## + sample5.2.5, sample5.975, NA) ## ## > Sumtable[7, ] <- c("detect5", detect3, sample10.50, ## + sample10.2.5, sample10.975, NA) ## ## > length(Data$ID) ## [1] 41 ## ## > length(dat$ID) ## [1] 180 ## ## > glimpse(Data) ## Observations: 41 ## Variables: 8 ## $ ID <fct> AI10124, AI10070, AI10086, AI08340, AI08341, AI0... ## $ Yr.first.seen <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... ## $ Calves <int> 2007, 2008, 2007, 2008, 2008, 2008, 2008, 2008, ... 
## $ Calves.1 <int> 2010, 2010, 2010, 2011, 2011, 2011, 2011, 2011, ... ## $ Calves.2 <int> 2013, 2013, 2013, NA, NA, NA, NA, NA, NA, NA, NA... ## $ Interval.1 <int> 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 6, ... ## $ Interval.2 <int> 3, 3, 3, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,... ## $ X <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, ... ## ## > dat.detect <- dplyr::select(Data, ID, Calves, Calves.1, ## + Calves.2) %>% mutate(Calves = factor(Calves), Calves.1 = factor(Calves.1), ## + Cal .... [TRUNCATED] ## ## > a <- as.data.frame.matrix(table(Data$ID, Data$Calves)) ## ## > head(a) ## 2006 2007 2008 2009 2010 2011 ## AI06022 1 0 0 0 0 0 ## AI08340 0 0 1 0 0 0 ## AI08341 0 0 1 0 0 0 ## AI08343 0 0 1 0 0 0 ## AI08355 0 0 1 0 0 0 ## AI08362 0 0 1 0 0 0 ## ## > a[, 7] <- row.names(a) ## ## > colnames(a)[1] <- "y2006" ## ## > colnames(a)[2] <- "y2007" ## ## > colnames(a)[3] <- "y2008" ## ## > colnames(a)[4] <- "y2009" ## ## > colnames(a)[5] <- "y2010" ## ## > colnames(a)[6] <- "y2011" ## ## > colnames(a)[7] <- "ID" ## ## > a[, 8] <- 0 ## ## > colnames(a)[8] <- "y2012" ## ## > a[, 9] <- 0 ## ## > colnames(a)[9] <- "y2013" ## ## > a <- dplyr::select(a, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > b <- as.data.frame.matrix(table(Data$ID, Data$Calves.1)) ## ## > head(b) ## 2010 2011 2012 2013 ## AI06022 0 0 1 0 ## AI08340 0 1 0 0 ## AI08341 0 1 0 0 ## AI08343 0 0 0 1 ## AI08355 0 1 0 0 ## AI08362 0 1 0 0 ## ## > b[, 5] <- row.names(b) ## ## > colnames(b)[5] <- "ID" ## ## > b[, 6] <- 0 ## ## > colnames(b)[6] <- "y2006" ## ## > b[, 7] <- 0 ## ## > colnames(b)[7] <- "y2007" ## ## > b[, 8] <- 0 ## ## > colnames(b)[8] <- "y2008" ## ## > b[, 9] <- 0 ## ## > colnames(b)[9] <- "y2009" ## ## > colnames(b)[1] <- "y2010" ## ## > colnames(b)[2] <- "y2011" ## ## > colnames(b)[3] <- "y2012" ## ## > colnames(b)[4] <- "y2013" ## ## > b <- dplyr::select(b, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > c <- as.data.frame.matrix(table(Data$ID, Data$Calves.2)) ## ## > head(c) ## 2013 ## AI06022 0 ## AI08340 0 ## AI08341 0 ## AI08343 0 ## AI08355 0 ## AI08362 0 ## ## > colnames(c)[1] <- "y2013" ## ## > c[, 2] <- row.names(c) ## ## > colnames(c)[2] <- "ID" ## ## > c[, 3] <- 0 ## ## > colnames(c)[3] <- "y2006" ## ## > c[, 4] <- 0 ## ## > colnames(c)[4] <- "y2007" ## ## > c[, 5] <- 0 ## ## > colnames(c)[5] <- "y2008" ## ## > c[, 6] <- 0 ## ## > colnames(c)[6] <- "y2009" ## ## > c[, 7] <- 0 ## ## > colnames(c)[7] <- "y2010" ## ## > c[, 8] <- 0 ## ## > colnames(c)[8] <- "y2011" ## ## > c[, 9] <- 0 ## ## > colnames(c)[9] <- "y2012" ## ## > c <- dplyr::select(c, ID, y2006, y2007, y2008, y2009, ## + y2010, y2011, y2012, y2013) ## ## > countdat <- rbind(a, b, c) ## ## > glimpse(countdat) ## Observations: 123 ## Variables: 9 ## $ ID <chr> "AI06022", "AI08340", "AI08341", "AI08343", "AI08355", "... ## $ y2006 <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2007 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2008 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,... ## $ y2009 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,... ## $ y2010 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2011 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2012 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ y2013 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... 
## ## > full.dat <- group_by(countdat, ID) %>% summarise(y2006 = sum(y2006), ## + y2007 = sum(y2007), y2008 = sum(y2008), y2009 = sum(y2009), ## + y2010 .... [TRUNCATED] ## ## > 2012 - 2006 ## [1] 6 ## ## > sort(Data$ID) ## [1] AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 AI08364 AI08365 ## [9] AI08372 AI08378 AI08379 AI08383 AI08386 AI08387 AI08390 AI08395 ## [17] AI08403 AI09216 AI09217 AI09221 AI09224 AI09225 AI09247 AI09249 ## [25] AI09259 AI09265 AI09289 AI10043 AI10056 AI10070 AI10085 AI10086 ## [33] AI10102 AI10124 AI10144 AI10160 AI10167 AI10170 AI10177 AI11408 ## [41] AI11430 ## 41 Levels: AI06022 AI08340 AI08341 AI08343 AI08355 AI08362 ... AI11430 ## ## > filter(Data, ID == "AI06022") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI06022 2006 2006 2012 NA 6 NA NA ## ## > filter(Data, ID == "AI08340") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08340 2008 2008 2011 NA 3 NA NA ## ## > filter(Data, ID == "AI08343") ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI08343 2008 2008 2013 NA 5 NA NA ## ## > head(Data) ## ID Yr.first.seen Calves Calves.1 Calves.2 Interval.1 Interval.2 X ## 1 AI10124 2007 2007 2010 2013 3 3 NA ## 2 AI10070 2008 2008 2010 2013 2 3 NA ## 3 AI10086 2007 2007 2010 2013 3 3 NA ## 4 AI08340 2008 2008 2011 NA 3 NA NA ## 5 AI08341 2008 2008 2011 NA 3 NA NA ## 6 AI08355 2008 2008 2011 NA 3 NA NA ## ## > longer5.6 <- c(sample.true, 5, 6, 6) ## ## > mean.56 <- sum(longer5.6)/length(longer5.6) ## ## > s.56 <- sd(longer5.6) ## ## > SE.56 <- s.56/(sqrt(length(longer5.6))) ## ## > n.56 <- (length(longer5.6)) ## ## > low.qt.56 <- mean.56 - (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > high.qt.56 <- mean.56 + (qt(0.975, length(longer5.6)) * ## + SE.56) ## ## > Sumtable[8, ] <- c("longer.56", n.56, mean.56, low.qt.56, ## + high.qt.56, sd(longer5.6)) ## ## > Sumtable <- as.data.frame(Sumtable) ## ## > Sumtable$n <- as.numeric(as.character(Sumtable$n)) ## ## > Sumtable$mY <- as.numeric(as.character(Sumtable$mY)) ## ## > Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt)) ## ## > Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt)) ## ## > Sumtable$sd <- as.numeric(as.character(Sumtable$sd)) ## ## > Sumtable$interval <- as.character(Sumtable$interval) ## ## > library(knitr) ## ## > kable(Sumtable, format = "markdown", col.names = c("Interval", ## + "Sample size", "Mean", "Lower limit", "Higher limit", "SD")) ## ## ## |Interval | Sample size| Mean| Lower limit| Higher limit| SD| ## |:---------|-----------:|--------:|-----------:|------------:|---------:| ## |Low | 58| 2.568966| 2.437666| 2.700265| 0.4995461| ## |Medium | 48| 3.104167| 2.943089| 3.265244| 0.5550382| ## |Observed | 45| 3.311111| 3.056488| 3.565734| 0.8480518| ## |miss2year | 41| 3.439024| 3.171549| 3.706499| 0.7761695| ## |detect1 | 44| 3.295454| 3.090909| 3.545454| NA| ## |detect2 | 42| 3.309524| 3.071429| 3.571429| NA| ## |detect5 | 40| 3.300000| 3.075000| 3.575000| NA| ## |longer.56 | 48| 3.458333| 3.165307| 3.751360| 1.0097047| ## ## > ggplot(Sumtable, aes(y = mY, x = interval)) + geom_point(size = 5) + ## + geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05, ## + .... 
[TRUNCATED] \end{verbatim} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-127-12} \end{center} \hypertarget{plot-steps-2}{% \subsection{Plot steps}\label{plot-steps-2}} \begin{Shaded} \begin{Highlighting}[] \CommentTok{## ----raw graph, echo=FALSE, message=FALSE, warning=FALSE-----------------} \CommentTok{#plot data} \KeywordTok{ggplot}\NormalTok{(sum.dat, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y =}\NormalTok{ mY, }\DataTypeTok{x =}\NormalTok{ year)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_point}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_line}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_errorbar}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{ymin =}\NormalTok{ low.qt, }\DataTypeTok{ymax =}\NormalTok{ high.qt), }\DataTypeTok{width =} \FloatTok{0.1}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_bw}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics{A-beginners-guide-to-population-dynamics_files/figure-latex/unnamed-chunk-128-1} \end{center} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# ## ----raw graph 2, echo=FALSE, fig.height=6, fig.width=6, message=FALSE, warning=FALSE----} \CommentTok{# } \CommentTok{# #PLOTS} \CommentTok{# par(mfrow=c(2,2))} \CommentTok{# } \CommentTok{# plot(factor(year2010),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="a)",sub="Sample size 3", ylab="Frequency",xlab="Calving interval",} \CommentTok{# cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2011),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="b)",sub="Sample size 15", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2012),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="c)",sub="Sample size 25", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# plot(factor(year2013),xlim=c(0,6),ylim=c(0,40))} \CommentTok{# title(main="d)",sub="Sample size 45", ylab="Frequency",xlab="Calving interval",col.main=4,cex.main = 1.5, font.main= 4, col.main= "blue",} \CommentTok{# cex.sub = 1, font.sub = 3, col.sub = "red")} \CommentTok{# box()} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----raw graph 3, echo=FALSE, fig.height=6, fig.width=6, message=TRUE, warning=TRUE----} \CommentTok{# library(qpcR)} \CommentTok{# #data in one way for plot} \CommentTok{# rawdata <- qpcR:::cbind.na(year2010,year2011,year2012,year2013)} \CommentTok{# rawdata <- as.data.frame(rawdata)} \CommentTok{# } \CommentTok{# #in correct format for ggplot2} \CommentTok{# year2010 <- data.frame(year2010,year = c("2010"))} \CommentTok{# year2010 <- rename(year2010, interval = year2010, year = year )} \CommentTok{# year2011 <- data.frame(year2011,year = c("2011"))} \CommentTok{# year2011 <- rename(year2011, interval = year2011, year = year )} \CommentTok{# year2012 <- data.frame(year2012,year = c("2012"))} \CommentTok{# year2012 <- rename(year2012, interval = year2012, year = year )} \CommentTok{# year2013 <- data.frame(year2013,year = c("2013"))} \CommentTok{# year2013 <- rename(year2013, interval = year2013, year = 
year )} \CommentTok{# ggplotraw <- rbind(year2010,year2011,year2012, year2013)} \CommentTok{# ggplotraw$interval <- as.numeric(as.character(ggplotraw$interval))} \CommentTok{# } \CommentTok{# #sort(year2013$interval) - sort(sample.true)} \CommentTok{# } \CommentTok{# } \CommentTok{# ggplot(year2013,aes(x = interval)) +} \CommentTok{# geom_bar(alpha = 1, width = 0.9,fill = "black") +} \CommentTok{# xlab(expression("Calving"~"interval"~(italic("years")))) +} \CommentTok{# ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# scale_y_continuous(breaks = c(0,5,10,15,20,25,30), limits = c(0,30)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black"))} \CommentTok{# #PLOTS} \CommentTok{# #code to store figure} \CommentTok{# # png("Figure_2_NZSRW_calving_interval_2017_highres.png", width = 12, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals, echo=FALSE, fig.height=10, message=FALSE, warning=FALSE----} \CommentTok{# #################################Missing calving intervals################} \CommentTok{# #Intervals modified by accounting for missed intervals} \CommentTok{# #Bradford et al. 
2008} \CommentTok{# } \CommentTok{# #Raw Data} \CommentTok{# RealCI <- as.numeric(year2013$interval)} \CommentTok{# } \CommentTok{# #Confidence interval} \CommentTok{# xlong <- RealCI} \CommentTok{# meanlong<-sum(xlong)/length(xlong)} \CommentTok{# slong<-sd(xlong)} \CommentTok{# SElong<-slong/(sqrt(length(xlong)))} \CommentTok{# nlong<-(length(xlong))} \CommentTok{# #Standard error and confidence intervals} \CommentTok{# #2 sided t value at the 95% level = 2.093} \CommentTok{# lowqtlong <- meanlong-(qt(0.975,nlong)*SElong)} \CommentTok{# highqtlong <- meanlong+(qt(0.975,nlong)*SElong)} \CommentTok{# } \CommentTok{# ####################MED CI########################################} \CommentTok{# # 2x 6's and 1x 5 replaced with 3threes} \CommentTok{# MedCI <- c(RealCI[RealCI < 5],3,3,3,3,2,3)} \CommentTok{# #sort(MedCI)} \CommentTok{# xmed<-MedCI} \CommentTok{# meanmed<-sum(xmed)/length(xmed)} \CommentTok{# smed<-sd(xmed)} \CommentTok{# SEmed<-smed/(sqrt(length(xmed)))} \CommentTok{# nmed<-(length(xmed))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtmed <- meanmed-(qt(0.975,length(xmed))*SEmed)} \CommentTok{# highqtmed <- meanmed+(qt(0.975,length(xmed))*SEmed)} \CommentTok{# } \CommentTok{# } \CommentTok{# ############################SHORT CI##################################} \CommentTok{# #6,5 replaced with 2 year intervals} \CommentTok{# } \CommentTok{# LowCI <- c(RealCI[RealCI < 4],3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2)} \CommentTok{# xshort<-LowCI} \CommentTok{# meanshort<-mean(xshort)} \CommentTok{# sshort<-sd(xshort)} \CommentTok{# SEshort<-sshort/(sqrt(length(xshort)))} \CommentTok{# } \CommentTok{# #Standard error and confidence intervals} \CommentTok{# lowqtshort <- meanshort-(qt(0.975,length(xshort))*SEshort)} \CommentTok{# highqtshort <- meanshort+(qt(0.975,length(xshort))*SEshort)} \CommentTok{# } \CommentTok{# bdata <-qpcR:::cbind.na(RealCI,MedCI,LowCI)} \CommentTok{# bdata <- as.data.frame(bdata)} \CommentTok{# } \CommentTok{# #Structure of data set} \CommentTok{# #str(bdata)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot, echo=FALSE, fig.height=3.5, fig.width=5.5, message=FALSE, warning=FALSE----} \CommentTok{# #Basic plots} \CommentTok{# par(mfrow=c(1,3))} \CommentTok{# plot(factor(bdata$LowCI),main="Lowest possible interval")} \CommentTok{# plot(factor(bdata$MedCI), main="Medium possible interval")} \CommentTok{# plot(factor(bdata$RealCI),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot2, fig.height=5.5, fig.width=4.5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #Density basic plots} \CommentTok{# par(mfrow=c(3,1))} \CommentTok{# plot(density(as.numeric(as.character(LowCI)),bw=.5), main="Lowest possible interval")} \CommentTok{# plot(density(as.numeric(as.character(MedCI)),bw= 0.5), main="Medium possible interval")} \CommentTok{# plot(density(as.numeric(as.character(RealCI)),bw = 0.5),main="Observed interval")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals table, fig.height=8, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# } \CommentTok{# ###################################SUMMARY############################} \CommentTok{# #Pull out important information} \CommentTok{# Sumtable<-data.frame(variable = c("low.qt","mean","high.qt","sd", "SE"), short=c(lowqtshort,meanshort,highqtshort,sshort,SEshort),} \CommentTok{# medium=c(lowqtmed,meanmed,highqtmed,smed,SEmed),} 
\CommentTok{# real=c(lowqtlong,meanlong,highqtlong,slong,SElong))} \CommentTok{# } \CommentTok{# #Make dataframe to plot} \CommentTok{# n <- c(length(LowCI),length(MedCI),length(year2013$interval))} \CommentTok{# mY <- c(mean(LowCI),mean(MedCI),mean(year2013$interval))} \CommentTok{# interval <-c("Low", "Medium","Observed")} \CommentTok{# low.qt <- c(lowqtshort,lowqtmed,low.qt2013)} \CommentTok{# high.qt <- c(highqtshort,highqtmed,high.qt2013)} \CommentTok{# sd <- c(sshort,smed,s2013)} \CommentTok{# Sumtable <- cbind(interval,n,mY,low.qt,high.qt,sd)} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing intervals plot3, echo=FALSE, fig.height=4, message=FALSE, warning=FALSE----} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 3.6, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table, echo=FALSE--------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----srw_data_table, echo=FALSE------------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# setwd("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data")} \CommentTok{# srwdat <- read.csv(file = "srw_data.csv")} \CommentTok{# } \CommentTok{# #str(srwdat)} \CommentTok{# kable(srwdat, format = "markdown",col.names = c("Sample size","Mean", "Lower limit", "Higher limit", "SE","Author", "Location"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap single, echo=FALSE, fig.height=5--------------------------} \CommentTok{# ############################NZ Simple sample##############################} \CommentTok{# #WITH replacement} \CommentTok{# } \CommentTok{# # to try and match number of intervals observed in other populations} \CommentTok{# # find references} \CommentTok{# SAreps <- 1500} \CommentTok{# ARreps <- 800} \CommentTok{# Aussiereps <- 2000} \CommentTok{# low <- 1000} \CommentTok{# verylow <- 100} \CommentTok{# lowest <- 10} \CommentTok{# } \CommentTok{# #Very raw plots} \CommentTok{# par(mfrow=c(2,3))} \CommentTok{# plot(factor(sample(year2013$interval,lowest,replace=T)),main = "3 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,verylow,replace=T)),main = "10 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,low,replace=T)),main = "30 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,Aussiereps,replace=T)),main = "500 intervals")} \CommentTok{# 
plot(factor(sample(year2013$interval,ARreps,replace=T)),main = "800 intervals")} \CommentTok{# plot(factor(sample(year2013$interval,SAreps,replace=T)),main = "1500 intervals")} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap_multiple, echo=FALSE--------------------------------------} \CommentTok{# #do each one 1000 times} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# } \CommentTok{# ###########################n10} \CommentTok{# var10 <- paste0("n_", 1:10)} \CommentTok{# sample10 <-matrix(data = NA, ncol = lowest, nrow = boots)} \CommentTok{# colnames(sample10) <- as.list(var10)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample10 [i, ] <- sample(year2013$interval,lowest,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample10 <- as.data.frame(sample10)} \CommentTok{# sample10 <- sample10 %>%} \CommentTok{# mutate(mean10 = rowMeans(sample10))} \CommentTok{# } \CommentTok{# sample10t <- as.matrix(sample10)} \CommentTok{# sample10t <-t(sample10t)} \CommentTok{# } \CommentTok{# #########################verylow sample size} \CommentTok{# #set up variable names} \CommentTok{# var100 <- paste0("n_", 1:100)} \CommentTok{# } \CommentTok{# sample100 <-matrix(data = NA, ncol = verylow, nrow = boots)} \CommentTok{# colnames(sample100) <- as.list(var100)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample100 [i, ] <- sample(year2013$interval,verylow,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample100 <- as.data.frame(sample100)} \CommentTok{# sample100 <- sample100 %>%} \CommentTok{# mutate(mean100 = rowMeans(sample100))} \CommentTok{# } \CommentTok{# #########################middle one} \CommentTok{# #set up variable names} \CommentTok{# var500 <- paste0("n_", 1:500)} \CommentTok{# } \CommentTok{# sample500 <-matrix(data = NA, ncol = 500, nrow = boots)} \CommentTok{# colnames(sample500) <- as.list(var500)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample500 [i, ] <- sample(year2013$interval,500,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample500 <- as.data.frame(sample500)} \CommentTok{# sample500 <- sample500 %>%} \CommentTok{# mutate(mean500 = rowMeans(sample500))} \CommentTok{# } \CommentTok{# } \CommentTok{# #########################low sample size} \CommentTok{# #set up variable names} \CommentTok{# var1000 <- paste0("n_", 1:1000)} \CommentTok{# } \CommentTok{# sample1000 <-matrix(data = NA, ncol = low, nrow = boots)} \CommentTok{# colnames(sample1000) <- as.list(var1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample1000 [i, ] <- sample(year2013$interval,low,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sample1000 <- as.data.frame(sample1000)} \CommentTok{# sample1000 <- sample1000 %>%} \CommentTok{# mutate(mean1000 = rowMeans(sample1000))} \CommentTok{# } \CommentTok{# #########################AUS sample size} \CommentTok{# #set up variable names} \CommentTok{# varA <- paste0("n_", 1:2000)} \CommentTok{# } \CommentTok{# sampleA <-matrix(data = NA, ncol = Aussiereps, nrow = boots)} \CommentTok{# colnames(sampleA) <- as.list(varA)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sampleA [i, ] <- sample(year2013$interval,Aussiereps,replace=T)} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# sampleA <- as.data.frame(sampleA)} \CommentTok{# sampleA <- sampleA %>%} \CommentTok{# mutate(meanA = rowMeans(sampleA))} \CommentTok{# } \CommentTok{# sampleAt 
<- t(sampleA)} \CommentTok{# } \CommentTok{# for(i in c(1:ncol(sampleA))) \{} \CommentTok{# sampleA[,i] <- as.numeric(as.character(sampleA[,i]))} \CommentTok{# \}} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# #COnfidence intervals} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# ab <- sort(sampleA$meanA)} \CommentTok{# nab <- length(ab)} \CommentTok{# #low = 25/1000} \CommentTok{# ab2.5 <- ab[25]} \CommentTok{# #high = 975/1000} \CommentTok{# ab0.97.5 <- ab[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----bootstrap plot2, fig.height=5, message=FALSE, warning=FALSE, include=FALSE----} \CommentTok{# #plot the data over each other to look at change in density} \CommentTok{# par(mfrow=c(1,1))} \CommentTok{# #plot(density(sample3$mean3,bw = .15),lwd = 3,lyt = 5, main = "", xlab = "Calving interval", box = FALSE,axis = FALSE)} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 1, main = "", lwd = 5,ylim = c(0,8),xlim = c(2,4.5), axes=FALSE,xlab = "Calving interval")} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 2, lwd = 4)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 3, lwd = 3)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 4, lwd = 2)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# legend('topright',title = "Legend", c("n=10, cv=8.12 ", "n=100, cv=2.43", "n=500, c.v=1.15", "n=1000, cv=0.79", "n=2000, cv=0.56"),bty = "n",} \CommentTok{# lty = c(1,2,3,4,5), lwd = c(5,4,3,2,1), cex=.75)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----final plot for publication1, echo=FALSE-----------------------------} \CommentTok{# #final [plot]} \CommentTok{# #size defined by NZJFMR} \CommentTok{# # 195 mm (h) ? 
148 mm (w).} \CommentTok{# #ylab(expression("Total"~"number"~"of"~"observations"~(italic("n")))) +} \CommentTok{# } \CommentTok{# plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE, xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# lines(density(sample500$mean500,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# lty = c(3,4,5,2,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# axis(1,lwd=2)} \CommentTok{# axis(2,lwd=2)} \CommentTok{# } \CommentTok{# # PLOT CODE FOR PUBLICATION} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_lowres.png", width = 14.8, height = 14.8, units = 'cm', res = 400)} \CommentTok{# # dev.off()} \CommentTok{# #} \CommentTok{# #} \CommentTok{# # png("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Figures/Figure_3_NZSRW_calving_interval_2017_highres.png", width = 14.8, height = 14.8, units = 'cm', res = 1200)} \CommentTok{# #} \CommentTok{# # plot(density(sample10$mean10,bw = .05),col ="black", lty = 3, main = "", lwd = 1,ylim = c(0,8),xlim = c(2.5,4.5), axes=FALSE,xlab = expression("Calving"~"interval"~(italic("years"))))} \CommentTok{# # lines(density(sample100$mean100,bw = .05),col ="black", lty = 4, lwd = 1)} \CommentTok{# # lines(density(sample500$mean500,bw = .05),col ="black", lty = 2, lwd = 1)} \CommentTok{# # lines(density(sample1000$mean1000,bw = .05),col ="black", lty = 5, lwd = 1)} \CommentTok{# # lines(density(sampleA$meanA,bw = .05),col ="black", lty = 1, lwd = 2)} \CommentTok{# # legend(y = 8, x = 3.9,title = expression(bold("Sample size (n)")), c(expression(italic("n")~"="~"10"), expression(italic("n")~"="~"100"), expression(italic("n")~"="~"500"), expression(italic("n")~"="~"1000"), expression(italic("n")~"="~"2000")),bty = "n",} \CommentTok{# # lty = c(3,4,2,5,1), lwd = c(1,1,1,1,2), cex=1)} \CommentTok{# # axis(1,lwd=2)} \CommentTok{# # axis(2,lwd=2)} \CommentTok{# #} \CommentTok{# # dev.off()} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1, echo=TRUE----------------------------------------} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = 45,# sample size} \CommentTok{# delta = seq(0,0.99,0.001), #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_1_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #sort data 
into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$delta))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Mean.difference")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power,} \CommentTok{# Mean.estimate = 3.31 + Mean.difference)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Mean.estimate, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 3.903, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Raw data result plot (n = 45)")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment_2_plot, echo=FALSE, message=FALSE, warning=FALSE----} \CommentTok{# #observed sample} \CommentTok{# rev.one <- bdata$RealCI[1:45]} \CommentTok{# } \CommentTok{# #sample 45 times} \CommentTok{# sample.true <- year2013$interval} \CommentTok{# } \CommentTok{# #difference} \CommentTok{# diff <- 3.63-3.31 #observed mean of australian population} \CommentTok{# } \CommentTok{# #power analysis} \CommentTok{# pwr.test.results <- power.t.test(n = seq(1,200,1),# sample size} \CommentTok{# delta = diff, #difference between means} \CommentTok{# sd = sd(sample.true), #observed variation} \CommentTok{# alternative = "one.sided", #observed test type} \CommentTok{# sig.level = 0.05) #significance level} \CommentTok{# } \CommentTok{# #additional packages are avaliable for more complex analysis} \CommentTok{# #but have not done this as don't think it is needed} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis <- as.data.frame(cbind(} \CommentTok{# pwr.test.results$power,} \CommentTok{# pwr.test.results$n))} \CommentTok{# } \CommentTok{# colnames(pwr.analysis) <- c("Power","Sample.size")} \CommentTok{# } \CommentTok{# #sort data into ggplot format} \CommentTok{# pwr.analysis.1 <- pwr.analysis %>%} \CommentTok{# mutate(Alpha = 1- Power)} \CommentTok{# # %>%} \CommentTok{# # select(Alpha,Mean.estimate)} \CommentTok{# } \CommentTok{# #work out where the cut-off is} \CommentTok{# a <- filter(pwr.analysis.1, Alpha < 0.05)} \CommentTok{# a[1,]} \CommentTok{# } \CommentTok{# #plot data} \CommentTok{# ggplot(data = pwr.analysis.1, aes(x = Sample.size, y = Alpha)) +} \CommentTok{# geom_line(size = 1.5) +} \CommentTok{# geom_vline(xintercept = 45, col = "red") +} \CommentTok{# geom_vline(xintercept = 153, col = "blue") +} \CommentTok{# geom_hline(yintercept = 0.05) +} \CommentTok{# 
scale_y_continuous(limits = c(0,1)) +} \CommentTok{# theme(axis.line = element_line(colour = 'black', size = 0.65),} \CommentTok{# axis.ticks = element_line(colour = "black", size = 0.65),} \CommentTok{# panel.border = element_blank(),} \CommentTok{# panel.grid.major = element_blank(),} \CommentTok{# panel.grid.minor = element_blank(),} \CommentTok{# legend.key = element_blank(),} \CommentTok{# strip.background = element_rect(fill = "white", colour = "black", size = 1),} \CommentTok{# panel.background = element_rect(fill = "white",} \CommentTok{# colour = NA),} \CommentTok{# axis.text = element_text(size = rel(0.8),} \CommentTok{# colour = "black")) +} \CommentTok{# ggtitle("Observed difference between Australian and NZ mean")} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 1, echo=FALSE------------------------------------} \CommentTok{# dat <- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/raw_observations_2012.csv")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat)} \CommentTok{# head(dat)} \CommentTok{# #And the second dataset} \CommentTok{# dat1<- read.csv("C:/Users/s435389/R_packages/Davidson_2017_SRWrepro/Data/RawCI.csv", header=T, quote="\textbackslash{}"")} \CommentTok{# #data structure} \CommentTok{# glimpse(dat1)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missed individuals 2, echo=FALSE, message=FALSE, warning=FALSE------} \CommentTok{# ##I can then modify this data to} \CommentTok{# #restructure dataset of capture to long dataset} \CommentTok{# dat3 <- dplyr::select(dat, ID, X2006:X2012)%>%} \CommentTok{# gather(year, count,X2006:X2012)} \CommentTok{# } \CommentTok{# #add data on calves} \CommentTok{# dat4 <- full_join(dat3,dat1, by = "ID")} \CommentTok{# dat5 <- dplyr::select(dat4,ID,year,count,Yr.first.seen,Calves,Calves.1,Calves.2)} \CommentTok{# } \CommentTok{# dat6 <- filter(dat5,count >0)} \CommentTok{# glimpse(dat6)} \CommentTok{# } \CommentTok{# dat7 <- mutate(dat6, year = ifelse(year == "X2006","2006", year),} \CommentTok{# year = ifelse(year == "X2007","2007", year),} \CommentTok{# year = ifelse(year == "X2008","2008", year),} \CommentTok{# year = ifelse(year == "X2009","2009", year),} \CommentTok{# year = ifelse(year == "X2010","2010", year),} \CommentTok{# year = ifelse(year == "X2011","2011", year),} \CommentTok{# year = ifelse(year == "X2012","2012", year))} \CommentTok{# } \CommentTok{# a <- group_by(dat7, ID, Yr.first.seen) %>%} \CommentTok{# mutate(mother = ifelse(Yr.first.seen > 0, 1, 0)) %>%} \CommentTok{# filter(mother == 1) %>%} \CommentTok{# ungroup() %>%} \CommentTok{# dplyr::select(ID,year,Calves,Calves.1) %>%} \CommentTok{# filter(Calves.1<2013) %>%} \CommentTok{# filter(!year == Calves) %>%} \CommentTok{# filter(!year ==Calves.1)} \CommentTok{# } \CommentTok{# a} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3, echo=TRUE, message=FALSE, warning=FALSE-----------} \CommentTok{# greater.than.2 <- sample.true[sample.true>2]} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.2<-sum(greater.than.2)/length(greater.than.2)} \CommentTok{# s.2<-sd(greater.than.2)} \CommentTok{# SE.2<-s2013/(sqrt(length(greater.than.2)))} \CommentTok{# n.2<-length(greater.than.2)} \CommentTok{# low.qt.2<- mean.2-(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# high.qt.2 <- mean.2+(qt(0.975,length(greater.than.2))*SE.2)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[4,] <- c("miss2year",n.2,mean.2,low.qt.2,} 
\CommentTok{# high.qt.2,sd(greater.than.2))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 1, echo=TRUE----------------------------} \CommentTok{# ########################### 2.2%} \CommentTok{# #parameters} \CommentTok{# boots <- 1000} \CommentTok{# n <- c(1:1000)} \CommentTok{# } \CommentTok{# ###round all percentages upwards} \CommentTok{# detect1 <- 44 # (45*1.02) - 45 = 0.9} \CommentTok{# detect2 <- 42 # (45*1.05) - 45 = 2.25} \CommentTok{# detect3 <- 40 # (45*1.10) - 45 = 4.5} \CommentTok{# } \CommentTok{# sample2 <-rep(NA, 1000)} \CommentTok{# sample5 <-rep(NA, 1000)} \CommentTok{# sample10 <-rep(NA, 1000)} \CommentTok{# } \CommentTok{# for (i in 1:boots) \{} \CommentTok{# sample2[i]<-mean(sample(year2013$interval,detect1,replace=T))} \CommentTok{# sample5[i]<-mean(sample(year2013$interval,detect2,replace=T))} \CommentTok{# sample10[i]<-mean(sample(year2013$interval,detect3,replace=T))} \CommentTok{# \} #i} \CommentTok{# } \CommentTok{# ######################estimates##############} \CommentTok{# sample2 <- sort(sample2)} \CommentTok{# #low = 25/1000} \CommentTok{# sample2.2.5 <- sample2[25]} \CommentTok{# #median} \CommentTok{# sample2.50 <- sample2[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample2.975 <- sample2[975]} \CommentTok{# } \CommentTok{# sample5 <- sort(sample5)} \CommentTok{# #low = 25/1000} \CommentTok{# sample5.2.5 <- sample5[25]} \CommentTok{# #median} \CommentTok{# sample5.50 <- sample5[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample5.975 <- sample5[975]} \CommentTok{# } \CommentTok{# sample10 <- sort(sample10)} \CommentTok{# #low = 25/1000} \CommentTok{# sample10.2.5 <- sample10[25]} \CommentTok{# #median} \CommentTok{# sample10.50 <- sample10[500]} \CommentTok{# #high = 975/1000} \CommentTok{# sample10.975 <- sample10[975]} \CommentTok{# } \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[5,] <- c("detect1",detect1,sample2.50,sample2.2.5,sample2.975,NA)} \CommentTok{# Sumtable[6,] <- c("detect2",detect2,sample5.50,sample5.2.5,sample5.975,NA)} \CommentTok{# Sumtable[7,] <- c("detect5",detect3,sample10.50,sample10.2.5,sample10.975,NA)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----detection sim.2-----------------------------------------------------} \CommentTok{# } \CommentTok{# #be very careful as Dat is just IDS and no id of females with calves} \CommentTok{# #BUT Data is identified females...} \CommentTok{# length(Data$ID)} \CommentTok{# length(dat$ID)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# glimpse(Data)} \CommentTok{# dat.detect <- dplyr::select(Data,ID,Calves,Calves.1, Calves.2) %>%} \CommentTok{# mutate(Calves = factor(Calves),} \CommentTok{# Calves.1 = factor(Calves.1),} \CommentTok{# Calves.2 = factor(Calves.2))} \CommentTok{# } \CommentTok{# a <- as.data.frame.matrix(table(Data$ID,Data$Calves))} \CommentTok{# head(a)} \CommentTok{# a[,7] <-row.names(a)} \CommentTok{# colnames(a)[1] <- "y2006"} \CommentTok{# colnames(a)[2] <- "y2007"} \CommentTok{# colnames(a)[3] <- "y2008"} \CommentTok{# colnames(a)[4] <- "y2009"} \CommentTok{# colnames(a)[5] <- "y2010"} \CommentTok{# colnames(a)[6] <- "y2011"} \CommentTok{# colnames(a)[7] <- "ID"} \CommentTok{# a[,8] <- 0} \CommentTok{# colnames(a)[8] <- "y2012"} \CommentTok{# a[,9] <- 0} \CommentTok{# colnames(a)[9] <- "y2013"} \CommentTok{# a <- dplyr::select(a,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# b <- 
as.data.frame.matrix(table(Data$ID,Data$Calves.1))} \CommentTok{# head(b)} \CommentTok{# b[,5] <-row.names(b)} \CommentTok{# colnames(b)[5] <- "ID"} \CommentTok{# b[,6] <- 0} \CommentTok{# colnames(b)[6] <- "y2006"} \CommentTok{# b[,7] <- 0} \CommentTok{# colnames(b)[7] <- "y2007"} \CommentTok{# b[,8] <- 0} \CommentTok{# colnames(b)[8] <- "y2008"} \CommentTok{# b[,9] <- 0} \CommentTok{# colnames(b)[9] <- "y2009"} \CommentTok{# colnames(b)[1] <- "y2010"} \CommentTok{# colnames(b)[2] <- "y2011"} \CommentTok{# colnames(b)[3] <- "y2012"} \CommentTok{# colnames(b)[4] <- "y2013"} \CommentTok{# b <- dplyr::select(b,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012, y2013)} \CommentTok{# } \CommentTok{# } \CommentTok{# c <- as.data.frame.matrix(table(Data$ID,Data$Calves.2))} \CommentTok{# head(c)} \CommentTok{# colnames(c)[1] <- "y2013"} \CommentTok{# c[,2] <-row.names(c)} \CommentTok{# colnames(c)[2] <- "ID"} \CommentTok{# c[,3] <- 0} \CommentTok{# colnames(c)[3] <- "y2006"} \CommentTok{# c[,4] <- 0} \CommentTok{# colnames(c)[4] <- "y2007"} \CommentTok{# c[,5] <- 0} \CommentTok{# colnames(c)[5] <- "y2008"} \CommentTok{# c[,6] <- 0} \CommentTok{# colnames(c)[6] <- "y2009"} \CommentTok{# c[,7] <- 0} \CommentTok{# colnames(c)[7] <- "y2010"} \CommentTok{# c[,8] <- 0} \CommentTok{# colnames(c)[8] <- "y2011"} \CommentTok{# c[,9] <- 0} \CommentTok{# colnames(c)[9] <- "y2012"} \CommentTok{# } \CommentTok{# c <- dplyr::select(c,ID,y2006,y2007,y2008, y2009, y2010, y2011, y2012,y2013)} \CommentTok{# } \CommentTok{# countdat <- rbind(a,b,c)} \CommentTok{# glimpse(countdat)} \CommentTok{# head(full.dat)} \CommentTok{# } \CommentTok{# full.dat <- group_by(countdat, ID) %>%} \CommentTok{# summarise(y2006 = sum(y2006),} \CommentTok{# y2007 = sum(y2007),} \CommentTok{# y2008 = sum(y2008),} \CommentTok{# y2009 = sum(y2009),} \CommentTok{# y2010 = sum(y2010),} \CommentTok{# y2011 = sum(y2011),} \CommentTok{# y2012 = sum(y2012),} \CommentTok{# y2013 = sum(y2013))} \CommentTok{# } \CommentTok{# 2012-2006} \CommentTok{# } \CommentTok{# ##checking....} \CommentTok{# } \CommentTok{# sort(Data$ID)} \CommentTok{# filter(Data, ID == "AI06022")} \CommentTok{# filter(Data, ID == "AI08340")} \CommentTok{# filter(Data, ID == "AI08343")} \CommentTok{# } \CommentTok{# head(Data)} \CommentTok{# } \CommentTok{# } \CommentTok{# # glimpse(c)} \CommentTok{# # Data$Calves.1,} \CommentTok{# # # Spread and gather are complements} \CommentTok{# # df <- data.frame(x = c("a", "b"), y = c(3, 4), z = c(5, 6))} \CommentTok{# # df %>% spread(x, y) %>% gather(x, y, a:b, na.rm = TRUE)} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----different missing intervals 2---------------------------------------} \CommentTok{# longer5.6 <- c(sample.true,5,6,6)} \CommentTok{# } \CommentTok{# #greater.than.2} \CommentTok{# mean.56<-sum(longer5.6)/length(longer5.6)} \CommentTok{# s.56<-sd(longer5.6)} \CommentTok{# SE.56<-s.56/(sqrt(length(longer5.6)))} \CommentTok{# n.56<-(length(longer5.6))} \CommentTok{# low.qt.56<- mean.56-(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# high.qt.56 <- mean.56+(qt(0.975,length(longer5.6))*SE.56)} \CommentTok{# } \CommentTok{# #add it to the table from bradford data} \CommentTok{# Sumtable[8,] <- c("longer.56",n.56,mean.56,low.qt.56,high.qt.56,sd(longer5.6))} \CommentTok{# } \CommentTok{# ###sort out numbering in dataframe} \CommentTok{# Sumtable <- as.data.frame(Sumtable)} \CommentTok{# } \CommentTok{# Sumtable$n <- as.numeric(as.character(Sumtable$n))} \CommentTok{# Sumtable$mY <- 
as.numeric(as.character(Sumtable$mY))} \CommentTok{# Sumtable$low.qt <- as.numeric(as.character(Sumtable$low.qt))} \CommentTok{# Sumtable$high.qt <- as.numeric(as.character(Sumtable$high.qt))} \CommentTok{# Sumtable$sd <- as.numeric(as.character(Sumtable$sd))} \CommentTok{# Sumtable$interval <- as.character(Sumtable$interval)} \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----missing_data_table 2, echo=FALSE------------------------------------} \CommentTok{# library(knitr)} \CommentTok{# } \CommentTok{# kable(Sumtable, format = "markdown",col.names = c("Interval","Sample size", "Mean", "Lower limit", "Higher limit", "SD"))} \CommentTok{# } \CommentTok{# } \CommentTok{# } \CommentTok{# ## ----referee_comment3_plot, echo=FALSE-----------------------------------} \CommentTok{# ggplot(Sumtable, aes(y = mY, x = interval)) +} \CommentTok{# geom_point(size = 5) +} \CommentTok{# geom_errorbar(aes(ymin = low.qt, ymax = high.qt), width = 0.05,size = 1, alpha = 0.5) +} \CommentTok{# scale_y_continuous(breaks = round(seq(2.3, 5, by = 0.2),1)) +} \CommentTok{# labs(y = "Mean calving interval",x = "Calving interval modification" ) +} \CommentTok{# geom_point(size = 3) +} \CommentTok{# theme_classic() +} \CommentTok{# theme_hc() +} \CommentTok{# theme(legend.position="none")} \end{Highlighting} \end{Shaded} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \hypertarget{exercise-4-2}{% \section{Exercise 4}\label{exercise-4-2}} In these exercises, you will be adapting the code written in this chapter to investigate slightly different questions. You should create a new R script \texttt{Ex4.R} in your working directory for these exercises so your chapter code is left unchanged. Exercise 4A is based solely on the required material and Exercises 4B - 4F are based on the example cases. You should work through each example before attempting each of the later exercises. \emph{The solutions to this exercise are found at the end of this book (\protect\hyperlink{ex4a-answers}{here}). You are \textbf{strongly recommended} to make a good attempt at completing this exercise on your own and only look at the solutions when you are truly stumped.} \hypertarget{exercise-4a-required-material-only-2}{% \subsection*{Exercise 4A: Required Material Only}\label{exercise-4a-required-material-only-2}} \addcontentsline{toc}{subsection}{Exercise 4A: Required Material Only} These questions are based on the material in Sections \ref{randomness} - \ref{mc-summaries} only. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Simulate flipping an unfair coin (probability of heads = 0.6) 100 times using \texttt{rbinom()}. Count the number of heads and tails. \item Simulate flipping the same unfair coin 100 times, but using \texttt{sample()} instead. Determine what fraction of the flips resulted in heads. \item Simulate rolling a fair 6-sided die 100 times using \texttt{sample()}. Determine what fraction of the rolls resulted in an even number. \item Simulate rolling the same die 100 times, but use the function \texttt{rmultinom()} instead. Look at the help file for details on how to use this function. Determine what fraction of the rolls resulted in an odd number. 
\end{enumerate} \protect\hyperlink{ex4a-answers}{Solutions} \hypertarget{exercise-4b-test-rnorm-2}{% \subsection*{\texorpdfstring{Exercise 4B: Test \texttt{rnorm}}{Exercise 4B: Test rnorm}}\label{exercise-4b-test-rnorm-2}} \addcontentsline{toc}{subsection}{Exercise 4B: Test \texttt{rnorm}} These questions will require you to adapt the code written in Section \ref{rnorm-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Adapt this example to investigate another univariate probability distribution, like \texttt{-lnorm()}, \texttt{-pois()}, or \texttt{-beta()}. See the help files (e.g., \texttt{?rpois}) for details on how to use each function. \end{enumerate} \protect\hyperlink{ex4b-answers}{Solutions} \hypertarget{exercise-4c-stochastic-power-analysis-2}{% \subsection*{Exercise 4C: Stochastic Power Analysis}\label{exercise-4c-stochastic-power-analysis-2}} \addcontentsline{toc}{subsection}{Exercise 4C: Stochastic Power Analysis} These questions will require you to adapt the code written in Section \ref{power-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item What sample size \texttt{n} do you need to have a power of 0.8 of detecting a significant difference between the two tagging methods? \item How do the inferences from the power analysis change if you are interested in \texttt{p\_new\ =\ 0.4} instead of \texttt{p\_new\ =\ 0.25}? Do you need to tag more or fewer fish in this case? \item Your analysis takes a bit of time to run so you are interested in tracking its progress. Add a progress message to your nested \texttt{for()} loop that will print the sample size currently being analyzed: \end{enumerate} \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{for}\NormalTok{ (n }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{} \KeywordTok{cat}\NormalTok{(}\StringTok{"}\CharTok{\textbackslash{}r}\StringTok{"}\NormalTok{, }\StringTok{"Sample Size = "}\NormalTok{, n_try[n])} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{I) \{} \NormalTok{ ...} \NormalTok{ \}} \NormalTok{\}} \end{Highlighting} \end{Shaded} \protect\hyperlink{ex4c-answers}{Solutions} \hypertarget{exercise-4d-harvest-policy-analysis-2}{% \subsection*{Exercise 4D: Harvest Policy Analysis}\label{exercise-4d-harvest-policy-analysis-2}} \addcontentsline{toc}{subsection}{Exercise 4D: Harvest Policy Analysis} These questions will require you to adapt the code written in Section \ref{harv-ex} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Add an argument to \texttt{ricker\_sim()} that will give the user an option to create a plot that shows the time series of recruitment, harvest, and escapement all on the same plot. Set the default to be to not plot the result, in case you forget to turn it off before performing the Monte Carlo analysis. \item Add an \emph{error handler} to \texttt{ricker\_sim()} that will cause the function to return an error \texttt{if()} the names of the vector passed to the \texttt{param} argument aren't what the function is expecting. You can use \texttt{stop("Error\ Message\ Goes\ Here")} to have your function stop and return an error. \item How do the results of the trade-off analysis differ if the process error was larger (a larger value of \(\sigma\))? \item Add implementation error to the harvest policy. That is, if the target exploitation rate is \(U\), make the real exploitation rate in year \(y\) be: \(U_y \sim Beta(a,b)\), where \(a = 100U\) and \(b = 100(1-U)\). 
You can increase the implementation error by replacing 100 with a smaller number here. How does this affect the trade-off analysis?
\end{enumerate}

\protect\hyperlink{ex4d-answers}{Solutions}

\hypertarget{exercise-4e-the-bootstrap-2}{%
\subsection*{Exercise 4E: The Bootstrap}\label{exercise-4e-the-bootstrap-2}}
\addcontentsline{toc}{subsection}{Exercise 4E: The Bootstrap}

These questions will require you to adapt the code written in Section \ref{boot-test-ex}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item Replicate the bootstrap analysis, but adapt it for the linear regression example in Section \ref{regression}. Stop at the step where you summarize the 95\% interval range.
\item Compare the 95\% bootstrap confidence intervals to the intervals you get by running the \texttt{predict()} function on the original data set with the argument \texttt{interval\ =\ "confidence"}.
\end{enumerate}

\protect\hyperlink{ex4e-answers}{Solutions}

\hypertarget{exercise-4f-permutation-tests-2}{%
\subsection*{Exercise 4F: Permutation Tests}\label{exercise-4f-permutation-tests-2}}
\addcontentsline{toc}{subsection}{Exercise 4F: Permutation Tests}

These questions will require you to adapt the code written in Section \ref{perm-test-ex}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item Adapt the code to perform a permutation test for the difference in each of the zooplankton densities between treatments. Don't forget to fix the missing value in the \texttt{chao} variable. See \protect\hyperlink{ex1b}{Exercise 2} for more details on this.
\item Adapt the code to perform a permutation test for another data set used in this book where there are observations of both a categorical variable and a continuous variable. The data sets \texttt{sockeye.csv}, \texttt{growth.csv}, or \texttt{creel.csv} should be good starting points.
\item Add a calculation of the p-value for a one-tailed test (i.e., that the difference in means is greater or less than zero). Steps 1 - 4 are the same: all you need is \texttt{Dnull} and \texttt{Dobs}. Don't be afraid to Google this if you are confused.
\end{enumerate}

\protect\hyperlink{ex4f-answers}{Solutions}

\bibliography{book.bib,packages.bib}

\end{document}
\section*{Appendix: Stream library} Those stream library functions that are not primitive functions are pre-declared as follows: \input ../source/lib/stream.js
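As an illustration of the general pattern these pre-declared functions follow, a function such as \texttt{stream\_map} is typically written in terms of \texttt{pair}, \texttt{head}, \texttt{is\_null}, and \texttt{stream\_tail}, with the tail of the resulting stream wrapped in a nullary function. The sketch below is for orientation only; the declarations included above are authoritative.
\begin{verbatim}
// Sketch (not the authoritative declaration): stream_map applies f to
// every element of stream s. A stream is either null or a pair whose
// tail is a nullary function, so the mapped tail is delayed in a lambda.
function stream_map(f, s) {
    return is_null(s)
           ? null
           : pair(f(head(s)),
                  () => stream_map(f, stream_tail(s)));
}
\end{verbatim}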
\documentclass{beamer} % to typeset the presentation as a handout uncomment: %\documentclass{article} %\usepackage{beamerarticle} \usepackage{graphicx,hyperref,url} \usepackage{color,colortbl} \usepackage{hyperref} \newcommand{\myhref}[2]{{\color{blue}\href{#1}{#2}}} \usecolortheme{beaver} \usetheme{Goettingen} \setbeamercolor{navigation symbols}{fg=red, bg=black} \usefonttheme[onlymath]{serif} \makeatletter \setbeamertemplate{sidebar canvas \beamer@sidebarside}% [vertical shading][top=red,bottom=gray] \makeatother \title{Monitoring the urban noise environment} \subtitle{joint with the IoT Lab, Civic Innovation YYC} \author{D. Richert, H. Leung} \institute[University of Calgary] { Department of Electrical and Computer Engineering\\ Schulich School of Engineering\\University of Calgary } \logo{% \includegraphics[width=2cm,height=2cm,keepaspectratio]{figures/uc_logo} \hspace*{1cm} {\color{black} August 24, 2017} \hspace*{1cm} \includegraphics[width=2cm,height=2cm,keepaspectratio]{figures/schulich} } \date{\scalebox{1}{\insertlogo}} \AtBeginSection[] { \begin{frame}<beamer>{Outline} \tableofcontents[currentsection,currentsubsection] \end{frame} } \begin{document} \begin{frame} \titlepage \end{frame} \begin{frame}{Outline} \tableofcontents \end{frame} \section{Project Motivation} \begin{frame}{Project Motivation - LPWAN Project} \begin{itemize} \item Establish the City of Calgary as a leader in the smart city movement \item Assess how LPWAN technology can be used to improve the quality of life for citizens of Calgary \item Test the capabilities of LPWAN to determine which smart city applications the technology is best-suited for \item Speed up concept-to-development and minimize costly mistakes of adopting the wrong IoT solution \item Implement and validate futuristic algorithms on a real sensor network \item Strengthen the relationship between Civic Innovation YYC and the U of C \end{itemize} \end{frame} \begin{frame}{Project Motivation - acoustic monitoring} \begin{itemize} \item Unwanted noise in urban environments has negative health effects \begin{itemize} \item loss of sleep, disruption to relaxation and social gatherings, hearing loss, high blood pressure, and more \end{itemize} \item City noise codes aim to reduce noise pollution, but violations of the code are difficult to catch \item Continuous monitoring of noise is rare. Noise assessments are complaint driven. \item Noise data also contains information about the happenings within a city. \begin{itemize} \item traffic noise, construction noise, persons in distress, car accidents, etc. \end{itemize} \item Acoustic monitoring promises to be a good application for environmental monitoring using LPWAN. \end{itemize} \end{frame} \section{Relevant case studies} \subsection{SONYC} \begin{frame}{Sounds of New York City (SONYC) - project objectives} From the project \myhref{https://wp.nyu.edu/sonyc}{website}... \vfill Objectives: to create technological solutions for \begin{itemize} \item the systematic, constant monitoring of noise pollution at the city scale \item the accurate description of acoustic environments in terms of its composing sources \item broadening citizen participation in noise reporting and mitigation \item enabling city agencies to take effective, information-driven action for noise mitigation. 
\end{itemize}
\end{frame}

\begin{frame}{Sounds of New York City (SONYC) - project overview}
\begin{center}
\begin{figure}
\includegraphics[scale=0.4]{figures/sonyc.png}
\caption{SONYC project overview (image taken from the project \myhref{https://wp.nyu.edu/sonyc}{website})}
\end{figure}
\end{center}
\end{frame}

\begin{frame}{Sounds of New York City (SONYC) - prototype}
\begin{center}
\begin{figure}
\includegraphics[scale=0.5]{figures/sonyc_prototype}
\caption{SONYC sensor unit prototype (image taken from the project \myhref{https://wp.nyu.edu/sonyc}{website})}
\end{figure}
\end{center}
\end{frame}

\begin{frame}{Sounds of New York City - project impact \& results}
\begin{itemize}
\item granted a \$4.6 million Frontier award from the NSF to advance research in cyber-physical systems for smart cities
\item NYC BigApps finalist - a civic innovation competition in NYC to improve the city.
\item supports 15 academics
\item partners with NYC Environmental Protection, NYC Health, and Downtown Alliance.
\end{itemize}
\end{frame}

\subsection{RUMEUR network}
\begin{frame}{RUMEUR network - project objectives}
Designed and maintained by Bruitparif, a non-profit based in Paris. Project objectives (from the project's \myhref{http://www.conforg.fr/euronoise2015/proceedings/data/articles/000043.pdf}{EuroNoise conference paper})
\begin{itemize}
\item Better understand noise phenomena: factors that influence noise, changes over time, exposure data correlated to socio-economic impacts
\item Evaluate noise mitigation measures: obtain indicators for tracking the impact of mitigation policies, anticipate impact of future projects.
\item Disseminate noise information to the public
\end{itemize}
\end{frame}

\begin{frame}{RUMEUR network - project overview}
\begin{minipage}{0.5\linewidth}
\begin{itemize}
\item 45 long-term high precision monitoring terminals and 350 short-term terminals
\item Focus is on traffic and aircraft noise
\item Data transmission over cellular network
\item Real-time data dissemination through their \myhref{http://rumeur.bruitparif.fr/}{web application}
\end{itemize}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[scale=0.3]{figures/rumeur_longterm}
\end{minipage}
\end{frame}

\begin{frame}{RUMEUR network - data dissemination}
\begin{center}
\includegraphics[scale=0.28]{figures/rumeur2}
\end{center}
\end{frame}

\begin{frame}{RUMEUR network - data dissemination}
\begin{center}
\includegraphics[scale=0.28]{figures/rumeur1}
\end{center}
\end{frame}

\begin{frame}{RUMEUR network - project impact \& results}
\begin{itemize}
\item recipient of the Best Life Environment Project
\item recipient of the Decibel D'Or award by the National Council of Noise
\end{itemize}
\end{frame}

\section{Acoustic sensing basics}
\begin{frame}{Acoustic sensing basics}
\begin{itemize}
\item Frequency range of human hearing is 20Hz-20kHz, but most sensitive in the 2-5kHz range
\item Sound level range of human hearing 0dB - 85dB (above 85dB is dangerous)
\item What is a dB? A logarithmic ratio of two values. In acoustics, the equivalent sound pressure level is used:
\begin{align*}
L_{eq} = 10 \log_{10} \bigg( \frac{1}{t_2-t_1}\int_{t_1}^{t_2} \frac{p^2(t)}{p_0^2} \, dt \bigg)
\end{align*}
where $p(t)$ is the instantaneous sound pressure and $p_0$ is a reference sound pressure ($20 \mu Pa$); equivalently, $L_{eq} = 20 \log_{10}(p_{rms}/p_0)$, where $p_{rms}$ is the rms pressure over the averaging interval.
\item The (lower) threshold of human hearing is frequency dependant - at the same sound pressures, lower \& higher frequencies seem quieter to a human \item Weighting lower \& higher frequencies less give a more representative sound level: \begin{center} \includegraphics[scale=0.175]{figures/acoustic_weighting_curves.png} \end{center} \end{itemize} \end{frame} \begin{frame}{Acoustic sensing basics - sound level meter standards} IEC 61672-1 is the sound level meter (SLM) standard for sensors used for legally enforceable acoustic monitoring purposes. Includes specifications for device: \begin{itemize} \item frequency response tolerance limits \item self generated noise \item linearity \item type 1 devices (precision) vs. type 2 devices (general purpose) \item industry standard tests that can be used to evaluate sensor performance \end{itemize} \end{frame} \begin{frame}{Acoustic sensing basics - sound level meter standards} \begin{itemize} \item {\bf Class 1 - precision measurements}: accurate, reliable, enforceable acoustic monitoring. Sensors alone cost \$1,000-2,000. Required accuracy of $\pm$1dBA at 1kHz. Pre-amplifiers and high voltages required for operation. \item {\bf Class 2 - general purpose measurements}: minimum requirement for OSHA noise measurements and general purpose noise surveys. Required accuracy of $\pm$2dBA at 1kHz. \item {\bf Class 3 - low-cost}: Sensors can be very cheap (as low as \$1). Measure noise levels in the 40-100dBA range with $\pm$3dBA accuracy or worse. Enables scalablity. \end{itemize} \end{frame} \begin{frame}{Acoustic sensing basics - City of Calgary Bylaws} \begin{itemize} \item City of Calgary Bylaw - continuous sound in downtown, for example: \begin{center} \fbox{\begin{minipage}{0.75\linewidth} \emph{No person shall cause continuous sound that exceeds 75dBA during the day-time (60dBA during the night-time)} \end{minipage}} \end{center} \begin{itemize} \item Continuous sound = continuous duration over a 3 minute period, or sporadically for a total of 3 minutes over a 15 minute period. \item Sound level must exceed 5dBA over ambient before it becomes an offence \end{itemize} \item For enforcement purposes, a type 2 sound level meter must be used \end{itemize} \end{frame} \section{LPWAN basics} \begin{frame}{LPWAN - a type of wireless telecommunication} \begin{itemize} \item Low data throughput. Mostly uplink traffic with small 10-200B packets \item Low power (several years of operation with a single battery charge is theoretically possible) \item Long range ($\approx$ 5km in urban areas, $\approx$ 15km in open areas) \item LoRa is a specific LPWAN technology (SIGFOX is another, for example) \end{itemize} \end{frame} \begin{frame}{LoRa - proprietary physical layer} Semtech \alert{Lo}ng \alert{Ra}nge physical layer \begin{itemize} \item Sub-GHz (unlicensed ISM radio bands, 902-928MHz in Canada) bi-directional point-to-point wireless link \begin{itemize} \item The band is divided into 8 sub-bands that each have 8x125 kHz uplink channels, 1x500 kHz uplink channel and 1x500 kHz downlink channel. \end{itemize} \item 13.5mA RX current, 124mA TX current \end{itemize} \end{frame} \begin{frame}{LoRaWAN - medium access control} \begin{itemize} \item Devices are asynchronous and transmit when they have data available to send (ALOHA protocol). \item Data transmitted by an end-node device is received by multiple gateways, which forward the data packets to a centralized network server. 
\item The network server filters duplicate packets, performs security checks, and manages the network. \end{itemize} \end{frame} \section{Project deliverables \& timeline} \begin{frame}{Agreed upon deliverables} From the project proposal: \begin{itemize} \item Design and construction of 15 acoustic sensing units \item Deployment of sensing units \item Real-time reporting of data to LoRaWAN gateway \item Data storage on cloud database \item Model for characterization of acoustic sources (used for in-situ processing) \item Data visualization and automated dashboard tool for real-time mapping of data \item Spatial and temporal model of acoustic emissions within study area \item Report on the project outcomes and future directions \end{itemize} \end{frame} % \begin{frame}{Project timeline: Aug. 1 - Dec. 31} % \hspace*{-0.4cm}\tiny{\begin{tabular}{r|p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}|} % \multicolumn{2}{r}{Week \#} & \multicolumn{10}{c}{} % \\ Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline % Sensor unit specs & \cellcolor{lime} & & & & & & & & & & \\ \cline{1-1} % In-situ processing spec & \cellcolor{lime} & \cellcolor{cyan} & & & & & & & & & \\ \cline{1-1} % Design units & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & & & & \\ \cline{1-1} % Design in-situ processing & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & \cellcolor{cyan} & \cellcolor{cyan} & & \\ \cline{1-1} % Test prototype & & & & & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & \\ \cline{1-1} % Order all units & & & & & & & & & & \cellcolor{cyan} & \cellcolor{cyan} \\ % \cline{1-1} Data storage system & & & & & & & & & & \cellcolor{cyan}& \cellcolor{cyan} \\ \cline{1-1} % Classification models & & & & & & & & & & & \cellcolor{cyan} \\ \cline{1-1} % Units deployed & & & & & & & & & & & \\ \cline{1-1} % Data visualization tool & & & & & & & & & & & \\ \cline{1-1} % Process data & & & & & & & & & & & \\ \cline{1-1} % Spatial/temporal map & & & & & & & & & & & \\ \cline{1-1} % Report & & & & & & & & & & & \\ \hline % \end{tabular}} \medskip \\ % \hspace*{1.7cm}\tiny{\begin{tabular}{r|p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}|} % \multicolumn{2}{r}{Week \#} & \multicolumn{10}{c}{} % \\ Task & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 \\ \hline % Sensor unit specs & & & & & & & & & & & \\ \cline{1-1} % In-situ processing spec & & & & & & & & & & & \\ \cline{1-1} % Design units & & & & & & & & & & & \\ \cline{1-1} % Design in-situ processing & & & & & & & & & & & \\ \cline{1-1} % Test prototype & & & & & & & & & & & \\ % \cline{1-1} % Order all units & & & & & & & & & & & \\ \cline{1-1} Data storage system & & & & & & & & & & & \\ \cline{1-1} % Classification models & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & & & & & & \\ \cline{1-1} % Units deployed & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & \\ \cline{1-1} % Data visualization tool & & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & & & & \\ \cline{1-1} % Process data & & & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \\ \cline{1-1} % Spatial/temporal map & & & & & & & & \cellcolor{cyan} & \cellcolor{cyan} & 
\cellcolor{cyan} & \\ \cline{1-1} % Report & & & & & & & & & & \cellcolor{cyan} & \cellcolor{cyan}\\ \hline % \end{tabular}} % \end{frame} \begin{frame}{Project timeline: Aug. 28 - Jan. 31} \begin{center} \scriptsize{\begin{tabular}{l|p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}p{0.1cm}|} \multicolumn{2}{r}{Week \#} & \multicolumn{10}{c}{} \\ Phase & 1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 & 17 & 19 & 21 \\ \hline 1: Sensor unit design specs & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & & & \\ \cline{1-1} 2: Data management system & & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & & & \\ \cline{1-1} 3: Assembly and deployment & & & & & & & & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} & \cellcolor{cyan} \\ \cline{1-1} 4: Data analysis and reporting & & & & & & & & & & \cellcolor{cyan} & \cellcolor{cyan} \\ \hline \end{tabular}} \end{center} \end{frame} \section{Sensor unit functional specs} \begin{frame}{Functional specifications - hardware} The sensor unit shall ... \begin{itemize} \item measure sound pressure levels with a comparable level of accuracy to City of Calgary Bylaw standards \item transmit processed audio data and metadata to a City of Calgary LoRaWAN radio gateway \item receive acknowledgment and control signals from a City of Calgary LoRaWAN radio gateway \item maintain its full functionality for at least 8 weeks without human intervention \item operate year round in typical Calgary weather (electronics and battery should operate down to -20$^\circ$C). \item cost less than \$200 per sensor node \item not require a wired connection to any utility \item be capable of being deployed within 30 minutes of arriving at a proposed location \end{itemize} \end{frame} \begin{frame}{Functional specifications - algorithms} The sensor unit shall ... \begin{itemize} \item randomly sample 30 seconds (?) of acoustic data \item increase the sampling frequency autonomously when changes are detected in the acoustic environment \item perform in-situ signal processing on audio measurements, including: \begin{itemize} \item filtering \item spectral decomposition \item A-weighting \item noise source characterization \end{itemize} \item transmit data to the LoRaWAN gateway when changes in the acoustic environment are detected \item autonomously update sensor unit parameters based on control signals received from the LoRaWAN gateway \end{itemize} \end{frame} \section{Proposed hardware} \begin{frame}{Proposed hardware - off-the-shelf options} There are no off-the-shelf solutions that will satisfy the functional specifications. The best candidates include: \\ \vspace*{0.25cm} \begin{minipage}{0.48\linewidth} \begin{itemize} \item \myhref{http://www.libelium.com/development/waspmote}{Waspmote by Libelium}: $\approx$ \$300, minimum operating temperature is $-10^{\circ}$C, limited ability to add custom functionality. 
\end{itemize} \end{minipage} \begin{minipage}{0.48\linewidth} \begin{center} \includegraphics[scale=0.2]{figures/wasp.PNG} \end{center} \end{minipage} \\ \vspace*{0.5cm} \hline \vspace*{0.5cm} \begin{minipage}{0.48\linewidth} \begin{center} \includegraphics[scale=0.2]{figures/adeunis.PNG} \end{center} \end{minipage} \begin{minipage}{0.48\linewidth} \begin{itemize} \item \myhref{http://www.adeunis-rf.com/en/products/lorawan-products}{Adeunis RF LoRaWAN transceiver}: barebones and rugged design, in-situ processing not possible. \end{itemize} \end{minipage} \end{frame} \begin{frame}{Proposed hardware - prototype} LoRa Tx/Rx: \myhref{http://www.microchip.com/DevelopmentTools/ProductDetails.aspx?PartNO=RN-2903-PICTAIL}{Microchip RN2903 LoRa Technology PICtail} \vfill \begin{center} \includegraphics[scale=0.2]{figures/microchip.PNG} \end{center} \vfill \begin{itemize} \item Features Microchip's RN2903 LoRa 915MHz transceiver module \item PICtail connection interface \item On-board PIC18 MCU \item supply current measurement points \item \$82 \end{itemize} \end{frame} \begin{frame}{Proposed hardware - prototype} Development kit: \myhref{http://www.microchip.com/DevelopmentTools/ProductDetails.aspx?PartNO=DM240001-3}{Microchip Explorer 16/32} \vspace*{-0.7cm} \begin{center} \includegraphics[scale=0.2]{figures/microchiptechnologyinc_35116794972.png} \end{center} \vspace*{-1cm} \begin{itemize} \item Supports processor plug-in modules \item PICtail connection interface \item LCD display \item supply current measurement points \item \$138 (Dev. kit) + \$32 (\myhref{http://www.microchip.com/DevelopmentTools/ProductDetails.aspx?PartNO=MA240023}{processor}) + \$68 (\myhref{https://www.digikey.ca/product-detail/en/microchip-technology/PG164130/PG164130-ND/2171224}{programmer}) \end{itemize} \end{frame} \begin{frame}{Proposed hardware - prototype} Microphone sensor: \myhref{https://www.invensense.com/products/analog/ics-40720/}{InvenSense ICS-40720 Evaluation Board} \begin{center} \begin{minipage}{0.45\linewidth} \includegraphics[scale=0.2]{figures/rp-ics-40720.png} \end{minipage} \begin{minipage}{0.45\linewidth} \includegraphics[scale=0.2]{figures/invensense_eval.PNG} \end{minipage} \end{center} \begin{itemize} \item accuracy of $\pm$2dB \item frequency response from 75Hz-20kHz \item 285$\mu$A supply current, 1.5-3.6V supply voltage \item high SNR of 70dBA \item dynamic range of 124dB \item sensitivity of -32dBV \item \$44 \end{itemize} \end{frame} \begin{frame}{Proposed hardware - final design} \smallskip \\ {\footnotesize{\begin{center} \begin{tabular}{|l|l|} \hline {\bf Part (with URL)} & {\bf Price} \\ \hline LoRa Tx/Rx (\myhref{http://www.microchip.com/wwwproducts/en/RN2903}{Microchip RN2903}) & \$14.57 \\ \hline Microphone (\myhref{https://www.invensense.com/products/analog/ics-40720/}{InvenSense ICS-40720}) & \$5.00 \\ \hline Microcontroller (\myhref{http://www.microchip.com/wwwproducts/en/PIC24FJ1024GB610}{Microchip PIC24FJ1024GB610}) & \$5.41 \\ \hline Protective case (\myhref{https://www.digikey.ca/product-detail/en/hammond-manufacturing/1550WE/HM1214-ND/2211564}{Hammond Manufacturing 1550WE}) & \$18.70 \\ \hline Antenna (\myhref{https://www.digikey.ca/product-detail/en/microchip-technology/RN-SMA4-RP/740-1033-ND/2207396}{Microchip RN-SMA4-RP}) & \$7.65 \\ \hline Microphone mount \& windshield & \$5.00 \\ \hline Lithium-thionyl chloride battery (\myhref{http://www.tadiranbatteries.de/pdf/lithium-thionyl-chloride-batteries/SL-2790.pdf}{Tadiran SL-2790}) & \$42.00 \\ \hline Miscellaneous (parts \& 
assembly) & \$20.00 \\ \hline Total: & \$118.33 \\ \hline \end{tabular} \end{center}}} \end{frame} \section{Questions/feedback} \begin{frame}{Questions/feedback} \begin{center} \includegraphics[scale=0.2]{figures/36601.png} \end{center} \end{frame} \begin{frame}{Our questions} \begin{itemize} \item What is the current state of the LoRaWAN gateway installation? \item What, if any, other LPWAN projects are being pursued? \item Who are the City contacts? \item What are the City's expectations of this project? What are specific questions that the City would like answered by this project? \end{itemize} \end{frame} \end{document}