\chapter{Bayesian Neural Networks} A Bayesian neural network (BNN) is a probabilistic model built on the architecture of a neural network, but trained using Bayesian inference. The intent of such a model is to combine the approximation capabilities of neural networks, described in section \ref{sec:neural_network}, with the uncertainty estimation provided by Bayesian inference. These uncertainty-estimating capabilities come from considering the model parameters $\boldsymbol{\theta}$ as a random variable, such that predictions are made by considering the probability distribution of $\boldsymbol{\theta}$. BNNs can therefore be viewed as a special case of ensemble learning (\cite{zhou_ensemble}), where the model ensemble is constructed by considering all possible values for $\boldsymbol{\theta}$. In practice we are not able to consider all possible values of $\boldsymbol{\theta}$; instead, sampling methods are used to draw different models as an approximation, and their predictions are aggregated to make a single prediction.\\ \\ In section \ref{sec:bayesian_stat} we introduce two principal concepts of statistical inference, the frequentist and the Bayesian paradigm, and the main differences between them. In section \ref{sec:MC_methods} we introduce the basic theory of Monte Carlo methods, and in section \ref{sec:simple_BNN} we illustrate the main ideas behind Bayesian neural networks with a simple example. Since basic Monte Carlo methods are often inefficient when sampling from complex distributions, we examine more sophisticated sampling methods based on Markov chains in section \ref{sec:MCMC}. \\ \\ One of these methods is the Metropolis algorithm, which we examine in section \ref{sec:Metropolis}. Afterwards we examine Hamiltonian Monte Carlo in section \ref{sec:HMC}. This algorithm builds on the principles of the Metropolis algorithm, but explores the target distribution more efficiently. Lastly, in section \ref{sec:nuts} we cover an extension of Hamiltonian Monte Carlo that avoids possible inefficient ``U-turns'', called the No-U-Turn sampler. We end this chapter by briefly examining the choice of prior distributions in section \ref{sec:priors} and how their parameters can themselves be given distributions. \section{Bayesian \& Frequentist Views of Learning}\label{sec:bayesian_stat} In this section we introduce two paradigms of statistical inference: the Bayesian and the frequentist. \\ \\ The ideas behind Bayesian statistics go back to the 18th century, and the approach is named after Thomas Bayes (\cite{stigler1986history}). In Bayesian learning we consider the model parameters $\boldsymbol{\theta}$ as random and aim to learn their probability distribution. On the other hand, the conventional frequentist methodology considers the model parameters $\boldsymbol{\theta}$ as fixed but unknown, while the point estimate $\hat{\boldsymbol{\theta}}$ is a random variable, as it is a function of the dataset, which is assumed to be random. \\ \\ To illustrate the difference between these two approaches in more detail, we consider an example involving a simple coin toss. The uncertainty of the coin showing heads or tails can be expressed by the coin's probability $p$ of showing heads, which is often referred to as the bias of the coin. Since the properties of the coin are not known beforehand, we do not know the exact probability of showing heads.
It could be a fair coin with probability $p=\frac{1}{2}$, or it could be an unbalanced coin, meaning that $p\neq \frac{1}{2}$. \\ \\ A Bayesian statistician would express this uncertainty by a probability distribution over possible values of the unknown probability $p$, and would then update this distribution as more observations become available. A frequentist would reject the introduction of a distribution over the parameter, since the parameter is regarded as fixed. The frequentist would instead flip the coin a given number of times to form a dataset and choose some estimator for the unknown probability $p$ that is most consistent with the data; an obvious choice would be the relative frequency of heads in the past coin tosses. \\ \\ This example illustrates the main differences between the two paradigms, and these differences will be highlighted in a more formal way in the subsequent sections. \subsection{Maximum Likelihood Estimation} \label{sec:mle} Let us now consider a dataset $\boldsymbol{X}$, with $n$ examples $\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)},\ldots \boldsymbol{x}^{(n)}$ drawn independently from the true but unknown data-generating distribution. We let $L(\boldsymbol{\theta}\mid \boldsymbol{X})\equiv p(\boldsymbol{X}\mid \boldsymbol{\theta})$ denote the likelihood function, where $p\lr{\boldsymbol{X}\mid\boldsymbol{\theta}}$ is a parametric family of probability distributions with parameter $\boldsymbol{\theta}$. A frequentist is, as described earlier, trying to find an estimate of the true parameter that has generated the data; we call such an estimate a point estimate and denote it by $\hat{\boldsymbol{\theta}}$ to distinguish it from the true parameter. A point estimate can be viewed as a function of the data, $\hat{\boldsymbol{\theta}}=g\lr{\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)},\ldots \boldsymbol{x}^{(n)}}$, and since the data are drawn from a random process, $\hat{\boldsymbol{\theta}}$ is itself a random variable. In most cases of frequentist inference a point estimate is found by maximizing the likelihood and is called the maximum likelihood estimate (MLE) \begin{equation*} \begin{split} \hat{\boldsymbol{\theta}}_{\text{MLE}}&=\argmax_{\boldsymbol{\theta}}{L(\boldsymbol{\theta}\mid \boldsymbol{X})}\\ & = \argmax_{\boldsymbol{\theta}}{\prod^n_{i=1}p\lr{\boldsymbol{x}^{(i)}\mid \boldsymbol{\theta}}} \end{split} \end{equation*} It is often more convenient to maximize a sum instead of a product; not only are sums easier to handle when differentiating, but they also help stabilize the calculations numerically (\cite{Goodfellow-et-al-2016}). Thus, we take the logarithm of the likelihood function to obtain the log-likelihood function $\ell(\boldsymbol{\theta}\mid \boldsymbol{X})\equiv \log{p(\boldsymbol{X}\mid \boldsymbol{\theta})}$, \begin{equation*} \begin{split} \hat{\boldsymbol{\theta}}_{\text{MLE}}&=\argmax_{\boldsymbol{\theta}}{\ell(\boldsymbol{\theta}\mid \boldsymbol{X})}\\ &=\argmax_{\boldsymbol{\theta}}{\sum^n_{i=1}\log{p\lr{\boldsymbol{x}^{(i)}\mid\boldsymbol{\theta}}}} \end{split} \end{equation*} and since the logarithm is a monotonically increasing function, maximizing the log-likelihood is equivalent to maximizing the likelihood. \\ \\ When the size of the dataset is small, the MLE is often prone to overfitting, and regularization methods such as penalized maximum likelihood are applied, see \cite{hastie01statisticallearning}.
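\\ \\ To make the procedure concrete, the following minimal Python sketch computes the MLE for the coin-toss example above by numerically maximizing the Bernoulli log-likelihood; the data and all names are purely illustrative and not part of the implementation used later in this thesis.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative data: 1 = heads, 0 = tails for ten coin tosses.
tosses = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

def negative_log_likelihood(p):
    # Bernoulli log-likelihood of the observed tosses for bias p.
    return -np.sum(tosses * np.log(p) + (1 - tosses) * np.log(1 - p))

# Maximize the log-likelihood by minimizing its negative over p in (0, 1).
result = minimize_scalar(negative_log_likelihood,
                         bounds=(1e-6, 1 - 1e-6), method="bounded")
p_mle = result.x  # coincides with the relative frequency of heads, 0.7
\end{verbatim}
Maximizing the log-likelihood here recovers exactly the relative-frequency estimator mentioned above.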
\\ \\ We can generalize the maximum likelihood estimator to the case where our goal is to estimate a conditional distribution $p\lr{\boldsymbol{y}\mid \boldsymbol{X},\boldsymbol{\theta}}$, in order to predict $\boldsymbol{y}$ given $\boldsymbol{X}$ as described for supervised learning in section \ref{sec:ml_basic}. In this case the conditional maximum likelihood estimate is given by \begin{equation*} \hat{\boldsymbol{\theta}}_{\text{MLE}}=\argmax_{\boldsymbol{\theta}}L\lr{\boldsymbol{\theta}\mid \boldsymbol{y}}=\argmax_{\boldsymbol{\theta}} p(\boldsymbol{y} \mid \boldsymbol{X}, \boldsymbol{\theta}) \end{equation*} and if we assume that the targets $\boldsymbol{y}^{(1)},\ldots \boldsymbol{y}^{(n)}$ are i.i.d., we can write \begin{equation*} \hat{\boldsymbol{\theta}}_{\text{MLE}}=\underset{\boldsymbol{\theta}}{\arg \max } \sum_{i=1}^{n} \log p\left(\boldsymbol{y}^{(i)} \mid \boldsymbol{x}^{(i)},\boldsymbol{\theta}\right) \end{equation*} The sum of squared errors, obtained by summing over all data points in equation \ref{eq:mse}, can be justified theoretically by the use of maximum likelihood with a Gaussian likelihood. To see this, consider a regression model that outputs $f(\boldsymbol{X};\boldsymbol{\theta})=\boldsymbol{\theta}\boldsymbol{X}$, with real-valued targets $\boldsymbol{y}$. If we define the conditional distribution of $\boldsymbol{y}$ as Gaussian with mean given by the regression output $\hat{y}\equiv f(\boldsymbol{X};\boldsymbol{\theta})$ and standard deviation $\sigma$, we can write the likelihood as \begin{equation*} L(\boldsymbol{\theta}\mid \boldsymbol{y})=\prod_{i=1}^{n} p\left(y^{(i)} \mid \boldsymbol{x}^{(i)} , \boldsymbol{\theta}\right)=\prod_{i=1}^n\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-(\hat{y}^{(i)}-y^{(i)})^2/2\sigma^2\right) \end{equation*} Next we take the logarithm of the likelihood function, which gives us the log-likelihood function \begin{equation*} \begin{split} \ell(\boldsymbol{\theta}\mid \boldsymbol{y})&=\sum_{i=1}^n\log\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-(\hat{y}^{(i)}-y^{(i)})^2/2\sigma^2\right)\\ &=-\frac{n}{2} \log \sigma^{2}-\frac{n}{2} \log (2 \pi)-\frac{1}{2 \sigma^{2}} \sum_{i=1}^n\left(\hat{y}^{(i)}-y^{(i)}\right)^{2} \end{split} \end{equation*} Note that $-\frac{n}{2} \log \sigma^{2}-\frac{n}{2} \log (2 \pi)$ does not depend on the model parameters $\boldsymbol{\theta}$ and can therefore be omitted when maximizing. Maximizing the log-likelihood is the same as minimizing the negative log-likelihood, so up to additive constants we can write \begin{equation*} -\ell(\boldsymbol{\theta}\mid \boldsymbol{y})=\frac{1}{2 \sigma^{2}} \sum_{i=1}^n\left(\hat{y}^{(i)}-y^{(i)}\right)^{2}+\text{const} \end{equation*} and since $\frac{1}{2 \sigma^{2}}$ does not depend on $\boldsymbol{\theta}$ either, we can see that minimizing the negative log-likelihood is the same as minimizing the sum of squared errors. \subsection{Bayesian Learning and Prediction} Bayesian inference takes a different approach than the frequentist perspective, which treats the parameter value $\boldsymbol{\theta}$ as fixed but unknown and the point estimator $\hat{\boldsymbol{\theta}}$ as a random variable. The Bayesian paradigm considers the dataset as fixed and observed, while the true parameter $\boldsymbol{\theta}$ is unknown and thus considered a random variable. \\ \\ In Bayesian statistics, we begin by defining a prior distribution $p(\boldsymbol{\theta})$ over the parameters. This prior distribution expresses our initial view on the parameters, before any data has been observed.
When data becomes available, we update our prior distribution to a posterior distribution. The posterior distribution is defined by Bayes' rule \begin{equation*} p(\boldsymbol{\theta}|\boldsymbol{X},\boldsymbol{y})=\frac{p(\boldsymbol{\theta})p(\boldsymbol{y}|\boldsymbol{X},\boldsymbol{\theta})}{p(\boldsymbol{y})} \end{equation*} and it combines the information about the data, which comes from the likelihood function $p(\boldsymbol{y}|\boldsymbol{X},\boldsymbol{\theta})$, with the prior distribution. $p(\boldsymbol{y})$ is called the model evidence and is the distribution of the observed data marginalized over the parameters, $p(\boldsymbol{y})=\int p(\boldsymbol{y}\mid \boldsymbol{X}, \boldsymbol{\theta})p(\boldsymbol{\theta})d\boldsymbol{\theta}$. The model evidence is often intractable, since it requires integration over all possible values of $\boldsymbol{\theta}$, which in many applications means integration over high-dimensional spaces. As a result, finding analytical solutions for the posterior is not possible for complex models. We often ignore the evidence term, as it does not depend on $\boldsymbol{\theta}$ and thus, for a fixed $\boldsymbol{y}$, can be interpreted as a constant; we therefore write \begin{equation} \label{eq:posterior} p(\boldsymbol{\theta}|\boldsymbol{X},\boldsymbol{y})\propto p(\boldsymbol{\theta})p(\boldsymbol{y}\mid \boldsymbol{X},\boldsymbol{\theta}) \end{equation} An important quality of the Bayesian method is that it uses a full distribution over the parameters $\boldsymbol{\theta}$ to make predictions. Let us for example consider the case where we have observed a sample consisting of $n$ examples $\boldsymbol{X}=\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)},\ldots, \boldsymbol{x}^{(n)}$. To predict an unobserved label $\boldsymbol{y}^{(n+1)}$ for a new example $\boldsymbol{x}^{(n+1)}$ we need the posterior predictive distribution, that is, we integrate the model predictions over the posterior \begin{equation} \label{eq:post_pred_distribution} \begin{split} &p\left(\boldsymbol{y}^{(n+1)} \mid \boldsymbol{x}^{(n+1)}, \lr{\boldsymbol{x}^{(1)},\boldsymbol{y}^{(1)}}, \ldots, \lr{\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}}\right)\\ &=\int p\left(\boldsymbol{y}^{(n+1)} \mid \boldsymbol{x}^{(n+1)}, \boldsymbol{\theta} \right) p \lr{\boldsymbol{\theta} \mid \lr{\boldsymbol{x}^{(1)}, \boldsymbol{y}^{(1)}}, \ldots, \lr{\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}}} \, d \boldsymbol{\theta} \end{split} \end{equation} This is quite different from the maximum likelihood method, which uses a single point estimate of $\boldsymbol{\theta}$ to make predictions on unobserved data; the Bayesian method takes the uncertainty in estimating $\boldsymbol{\theta}$ into account when making predictions, which tends to help avoid overfitting (\cite{Goodfellow-et-al-2016}). \\ \\ An important difference between the Bayesian approach and MLE lies in the contribution of the prior distribution. The prior has the effect of shifting the probability mass towards regions of the parameter space that are preferred a priori. According to \cite{Goodfellow-et-al-2016}, the prior often expresses a preference for models that are simpler or smoother. Critics of the Bayesian method often point to the prior distribution as a subjective component that can affect the predictions of the model.
According to \cite{neal2012bayesian}, Bayesian methods often perform considerably better than frequentist methods when training data is limited, but suffer from high computational cost when the number of training examples is large. \\ \\ \cite{neal2012bayesian} and \cite{mackay1991} argue that Bayesian models embody Occam's Razor, which is the principle that we should prefer simpler models to complex ones. This principle is often invoked in machine learning, since an overly complex model might overfit the data. This belief is justified when model parameters are estimated by maximum likelihood, but \cite{neal2012bayesian} argues that one should not limit the complexity of Bayesian neural networks to prevent overfitting. Training by minimizing a loss might lead to choosing models of increasing complexity as more data becomes available. With a Bayesian approach, adjusting the complexity of the model based on the amount of data available makes no sense, since a prior and model that are correct for 10 observations must be correct for 10,000 observations as well. One might however switch to a simpler model if it seems unlikely that a complex, computationally expensive model will provide significant benefit. \\ \\ As mentioned earlier, in many practical examples the posterior distribution is intractable and therefore must be approximated in some other way. Often we will use methods such as Monte Carlo to approximate the posterior distribution. \subsection{Maximum a Posteriori (MAP) Estimation} A way to avoid the computational hurdle of approximating the entire Bayesian posterior is to use a point estimate as an approximation. Instead of turning completely to frequentist methods and using the MLE, one can still benefit from the Bayesian method by allowing the prior to influence the choice of the point estimate. One way to do this is to use the maximum a posteriori (MAP) point estimate. The MAP estimate is obtained by maximizing the posterior distribution \begin{equation}\label{eq: MAP} \begin{split} \hat{\boldsymbol{\theta}}_{\text{MAP}}&=\argmax_{\boldsymbol{\theta}}{p(\boldsymbol{\theta}\mid \boldsymbol{X},\boldsymbol{y})}=\argmax_{\boldsymbol{\theta}}{p(\boldsymbol{y}\mid \boldsymbol{X},\boldsymbol{\theta})} p(\boldsymbol{\theta})\\ &=\argmax_{\boldsymbol{\theta}}{\log p(\boldsymbol{y}\mid \boldsymbol{X},\boldsymbol{\theta})}+ \log p(\boldsymbol{\theta}) \end{split} \end{equation} Note that the evidence term has been omitted, since it does not depend on the parameter $\boldsymbol{\theta}$ and thus vanishes under maximization anyway. The last line of equation \ref{eq: MAP} consists of the standard log-likelihood term plus a log-prior term. \\ \\ As an example, consider a model with a Gaussian prior placed on the regression weights $\boldsymbol{\theta}$. If we specifically choose the prior to be $\mathcal{N}\left(\boldsymbol{\theta};\boldsymbol{0},\frac{1}{\alpha}\boldsymbol{I}^2\right)$, then the log-prior term in equation \ref{eq: MAP} is proportional to the L2 regularization term introduced in \ref{eq:L2_reg}, plus a term that does not depend on $\boldsymbol{\theta}$. Note also that the MAP estimate is the same as the MLE when choosing a uniform prior, since $p(\boldsymbol{\theta})$ then becomes a constant in equation \ref{eq: MAP} and consequently we can ignore it when maximizing the expression. Compared to MLE, MAP estimation has the advantage that it can benefit from information in the prior that cannot be found in the dataset.
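\\ \\ To illustrate equation \ref{eq: MAP}, the following minimal Python sketch computes a MAP estimate for a small linear regression model by maximizing the log-likelihood plus the log of a Gaussian prior; the synthetic data, the noise level and the prior precision $\alpha$ are assumptions made only for this example.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))              # illustrative design matrix
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.3, size=50)

sigma, alpha = 0.3, 10.0                  # assumed noise level and prior precision

def negative_log_posterior(theta):
    # Gaussian log-likelihood of the regression model (up to constants).
    log_lik = -np.sum((y - X @ theta) ** 2) / (2 * sigma ** 2)
    # Log of a zero-mean Gaussian prior with covariance (1/alpha) I (up to constants).
    log_prior = -0.5 * alpha * np.sum(theta ** 2)
    return -(log_lik + log_prior)

theta_map = minimize(negative_log_posterior, x0=np.zeros(3)).x
\end{verbatim}
With a uniform (constant) prior the same code would return the MLE, in line with the remark above.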
According to \cite{Goodfellow-et-al-2016}, the additional information gained from the choice of prior can reduce the variance of the MAP point estimate compared with the MLE, but this advantage comes at the price of increased bias. \\ \\ MAP is closely related to Bayes optimal estimation, which, instead of finding the most probable hypothesis (set of parameters), aims at finding the most probable label for a new example. Bayes optimal estimation is done by predicting the $\boldsymbol{y}^{(n+1)}$ that maximizes the posterior predictive distribution in equation \ref{eq:post_pred_distribution}. We do not pursue the idea of Bayes optimal estimation, as we aim to minimize a loss function, which is not always equivalent to predicting the most probable label. In this way we preserve the idea that the loss function dictates how much the algorithm should care about making certain predictions, as explained in section \ref{sec:loss_func}. \\ \\ An obvious disadvantage of MAP estimation is that it discards the information contained in the distribution. It is precisely the estimation of distributions that makes Bayesian methods attractive, especially if one wants to evaluate the uncertainty of the parameters. As we value access to this distribution more than faster computation time, we will not pursue MAP estimation any further. \section{Monte Carlo Methods}\label{sec:MC_methods} One way of approximating intractable integrals, like the posterior predictive distribution in equation \ref{eq:post_pred_distribution}, is to use Monte Carlo methods. The idea behind Monte Carlo methods is to view the integral as an expectation of some random variable with respect to a probability distribution $p(\cdot)$. In the case of BNNs our random variable is the model parameters $\boldsymbol{\theta}$, and we can write the posterior predictive distribution as \begin{equation} \begin{split} &p\left(\boldsymbol{y}^{(n+1)} \mid \boldsymbol{x}^{(n+1)}, \lr{\boldsymbol{x}^{(1)},\boldsymbol{y}^{(1)}}, \ldots, \lr{\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}}\right)\\ &=\int p\left(\boldsymbol{y}^{(n+1)} \mid \boldsymbol{x}^{(n+1)}, \boldsymbol{\theta}\right) p \lr{\boldsymbol{\theta} \mid \lr{\boldsymbol{x}^{(1)}, \boldsymbol{y}^{(1)}}, \ldots, \lr{\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}}} \, d \boldsymbol{\theta}\\ &=\int f\lr{\boldsymbol{\theta}}\hat{p}\lr{\boldsymbol{\theta}} d\boldsymbol{\theta} \end{split} \end{equation} where we use the simpler notation $\hat{p}\lr{\boldsymbol{\theta}}$ to denote the posterior distribution. Such an integral can be interpreted as an expectation taken under the probability distribution $\hat{p}(\boldsymbol{\theta})$ \begin{equation*} s=\int f(\boldsymbol{\theta})\hat{p}(\boldsymbol{\theta}) d \boldsymbol{\theta}=\mathbb{E}_{\hat{p}}[f(\boldsymbol{\theta})] \end{equation*} In order to approximate $s$ we can draw samples from the distribution $\hat{p}(\boldsymbol{\theta})$ and approximate the expected value by the empirical average. If we draw $n$ samples $\boldsymbol{\theta}\sim \hat{p}(\boldsymbol{\theta})$ we can approximate $s$ by $\hat{s}_n$ \begin{equation}\label{eq:empirical_mean_MC} \hat{s}_{n}=\frac{1}{n} \sum_{i=1}^{n} f\left(\boldsymbol{\theta}^{(i)}\right) \end{equation} This assumes the simplest situation, in which it is possible to simulate directly from the probability distribution, which is often not the case.
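\\ \\ A minimal Python sketch of the estimator in equation \ref{eq:empirical_mean_MC} is given below; purely for illustration it assumes a target distribution we can sample from directly (a standard Gaussian) and an arbitrary choice of $f$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # Illustrative integrand; any function of the parameters could be used.
    return theta ** 2

# Assume, for illustration, a target distribution we can sample from directly.
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Empirical average approximating E[f(theta)], which here equals 1.
s_hat = np.mean(f(samples))
\end{verbatim}
The accuracy of such an estimate improves with the number of samples, as quantified below.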
\\ \\ We can justify this approximation theoretically by noticing that $\hat{s}_n$ is an unbiased estimator of $s$ \begin{equation*} \mathbb{E}_{\hat{p}}\left[\hat{s}_{n}\right]=\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\hat{p}}\left[f\left(\boldsymbol{\theta}^{(i)}\right)\right]=\frac{1}{n} \sum_{i=1}^{n} s=s \end{equation*} Additionally, the law of large numbers states that if the samples $\boldsymbol{\theta}^{(i)}$ are independent and identically distributed (i.i.d.), the empirical average converges to the true expectation almost surely \begin{equation*} \lim _{n \rightarrow \infty} \hat{s}_{n}=s \end{equation*} For the estimator to be useful in practice we also need the variance of the individual terms $\operatorname{Var}[f\lr{\boldsymbol{\theta}^{(i)}}]$ to be bounded. To see this, we note that $\operatorname{Var}[\hat{s}_n]$ converges to zero as $n$ goes to infinity if and only if $\operatorname{Var}[f\lr{\boldsymbol{\theta}}]<\infty$ \begin{equation*} \begin{split} \operatorname{Var}\left[\hat{s}_{n}\right] &=\operatorname{Var}\left[\frac{1}{n} \sum_{i=1}^{n} f(\boldsymbol{\theta}^{(i)})\right] \\ &=\frac{1}{n^{2}} \sum_{i=1}^{n} \operatorname{Var}[f(\boldsymbol{\theta}^{(i)})] =\frac{\operatorname{Var}[f(\boldsymbol{\theta})]}{n} \end{split} \end{equation*} Further, the central limit theorem states that if $\mathbb{E}_{\hat{p}}[f(\boldsymbol{\theta})]=s$ and $\operatorname{Var}[f(\boldsymbol{\theta})]<\infty$, then for large $n$, approximately \begin{equation*} \frac{\hat{s}_{n}-s}{\sqrt{\operatorname{Var}[f(\boldsymbol{\theta})] / n}} \sim \mathcal{N}(0,1) \end{equation*} which is equivalent to \begin{equation*} \hat{s}_n\sim \mathcal{N}\left(s,\frac{\operatorname{Var}[f(\boldsymbol{\theta})]}{n}\right) \end{equation*} This gives us a way of constructing confidence intervals around the estimate $\hat{s}_n$. \\ \\ When it is infeasible or not possible to simulate $\boldsymbol{\theta}$ directly, Markov chain Monte Carlo methods can be used. Such methods simulate from a target distribution by running a Markov chain that eventually converges to the target distribution. Markov chain Monte Carlo methods will be examined more thoroughly in section \ref{sec:MCMC}. \section{A Simple Bayesian Neural Network} \label{sec:simple_BNN} A simple example, inspired by \cite{neal2012bayesian}, will illustrate the general concept of Bayesian learning for neural networks and the inefficiency of brute-force methods of sampling. Figure \ref{fig:simple_BNN} shows six BNNs whose weights and biases were drawn from independent standard normal prior distributions, except the output weights, which had a standard deviation of $\frac{1}{\sqrt{16}}$. The networks perform regression on six data points. \\ \\ The six networks were chosen from a larger pool of $10^5$ networks with weights and biases sampled from identical prior distributions. The likelihood was computed for each of these networks and scaled so that the largest likelihood was 1. Each network was then accepted with probability equal to this scaled likelihood, which resulted in only six acceptances. This approach resembles rejection sampling and embodies the posterior from equation \ref{eq:posterior} by making the prior control the generation of candidate networks and the likelihood control which of these candidates are rejected.
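\\ \\ The sampling scheme just described can be sketched in a few lines of Python. The sketch below assumes a one-hidden-layer tanh network and the Gaussian likelihood with $\sigma=0.1$ specified in the next paragraph; the data values, array shapes and helper names are illustrative and differ from the actual implementation in appendix \ref{app:simple_BNN}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = np.array([-0.8, -0.4, -0.1, 0.2, 0.5, 0.9])   # illustrative inputs
y = np.array([-0.5, 0.1, 0.3, 0.2, -0.1, -0.6])   # illustrative targets
sigma, n_hidden, n_candidates = 0.1, 16, 10**5

def network(theta, x):
    # One-hidden-layer tanh network; theta holds (w1, b1, w2, b2) flattened.
    w1, b1 = theta[:n_hidden], theta[n_hidden:2 * n_hidden]
    w2, b2 = theta[2 * n_hidden:3 * n_hidden], theta[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def log_likelihood(theta):
    resid = network(theta, x) - y
    return -np.sum(resid ** 2) / (2 * sigma ** 2)

# Candidate networks drawn from the prior: standard normal weights and biases,
# with the output weights given standard deviation 1/sqrt(16).
thetas = rng.normal(size=(n_candidates, 3 * n_hidden + 1))
thetas[:, 2 * n_hidden:3 * n_hidden] /= np.sqrt(n_hidden)

log_liks = np.array([log_likelihood(t) for t in thetas])
accept_prob = np.exp(log_liks - log_liks.max())    # scale largest likelihood to 1
accepted = thetas[rng.uniform(size=n_candidates) < accept_prob]

# Monte Carlo prediction: average the accepted networks' outputs on a grid.
x_grid = np.linspace(-1.5, 1.5, 100)
predictions = np.mean([network(t, x_grid) for t in accepted], axis=0)
\end{verbatim}
With $10^5$ candidates only a handful of networks are typically accepted, which is precisely the inefficiency discussed below.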
We follow the suggestion of \cite{neal2012bayesian} and model the regression task with a conditional distribution for the real-valued targets $y_k$, given the network outputs $f_k(\boldsymbol{x})$, defined by a Gaussian distribution \begin{equation}\label{eq:regr_taget_distribution} p(\boldsymbol{y} \mid \boldsymbol{x}) = \prod_k \frac{1}{\sqrt{2 \pi} \sigma_k} \exp\left(- \frac{\lr{f_k(\boldsymbol{x}) - y_k}^2}{2 \sigma_k^2}\right) \end{equation} with mean $f_k(\boldsymbol{x})$ and standard deviation $\sigma_k$ as a hyperparameter, which we choose to be $\sigma_k = 0.1$ for all $k$. \\ \\ According to \cite{neal2012bayesian}, the optimal way to predict the target associated with a new example, assuming we want to minimize the expected squared error, is to predict the mean of the predictive distribution in equation \ref{eq:post_pred_distribution}. For a regression model defined by equation \ref{eq:regr_taget_distribution} this is equal to predicting \begin{equation*} \hat{\boldsymbol{y}}^{(n+1)} = \int f\lr{\boldsymbol{x}^{(n+1)}, \boldsymbol{\theta}} p\lr{\boldsymbol{\theta} \mid \lr{\boldsymbol{x}^{(1)}, \boldsymbol{y}^{(1)}}, \dots, \lr{\boldsymbol{x}^{(n)}, \boldsymbol{y}^{(n)}}} \, d \boldsymbol{\theta} \end{equation*} As we cannot compute this integral exactly, we resort to a Monte Carlo approximation by averaging over the outputs from the six neural networks obtained by sampling parameters from the posterior. The average is shown in figure \ref{fig:simple_BNN} by the solid line. But Bayesian inference can do more than provide a single-valued guess. By examining the sampled functions we can also see the uncertainty of the predictions, for example how rapidly the uncertainty increases beyond the region of the data points. \\ \\ This illustrates some of the benefits of using Bayesian inference for neural networks, but what remains is the downside of computation time. Generating $10^5$ samples to get six draws from the posterior is not very efficient, and this only becomes more infeasible as the number of data points increases. \begin{figure} \centering \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{pics/figure_simple_BNN.pdf} \caption{Sampled neural networks from a posterior predictive distribution that is based on a Gaussian prior and a Gaussian likelihood on the six data points. The average prediction of the networks is plotted along with a filled area, defined by the average plus/minus the standard deviation of the network predictions, to represent uncertainty. The Python code for implementing this Bayesian neural network and producing this figure can be seen in appendix \ref{app:simple_BNN}.} \label{fig:simple_BNN} \end{figure} \clearpage \section{Markov Chain Monte Carlo}\label{sec:MCMC} As mentioned in section \ref{sec:bayesian_stat}, the posterior distribution is often intractable and we might need simulation-based methods to find a feasible solution. We want to be able to calculate posterior summaries like $\mathbb{E}_{\hat{p}}\left[f(\boldsymbol{\theta})\right]$, where $f(\boldsymbol{\theta})$ is some function of the parameters and the expectation is taken under the posterior distribution. Such an expectation is straightforward to approximate using the simple Monte Carlo method described in section \ref{sec:MC_methods} when dealing with well-behaved, low-dimensional distributions. Other methods, such as importance sampling, have often been proposed in the literature for simple problems, but rarely for the high-dimensional problems we face with neural networks.
\\ \\ The simple example in section \ref{sec:simple_BNN} illustrated that brute-force methods are also not very useful in complex models like BNNs, and we are instead motivated to use a collection of more sophisticated methods. This thesis will mainly pursue this motivation by exploring Markov chain Monte Carlo (MCMC) methods. The main idea is to simulate a Markov chain which has the posterior distribution as its stationary distribution. We will initially give a concise explanation of the fundamentals of Markov chains in section \ref{sec:basic_mc} and follow this by exploring the most popular MCMC algorithms for sampling in BNNs. \subsection{Markov Chains}\label{sec:basic_mc} A stochastic process is a set of random variables $\left\{X^{(t)} \right\}_{t\in T}$ defined on a common probability space, where $T\subseteq \mathbb{R}$ is the index set and may be interpreted as a time index. In the context of machine learning, the set $T$ can be interpreted as the iterations of a simulation scheme. In stochastic simulation we typically have $T=\mathbb{N}$, and we can write the stochastic process as $\left\{X^{(n)}\right\}_{n\geq 0}$, where $n$ reminds us that we have a discrete index set. Throughout this thesis we will only consider discrete-time stochastic processes, since these are most relevant when discussing stochastic simulation. \\ A stochastic process takes values in a space of possible values called the state space $\mathbb{S}$, which in most applications consists of either integers or real values. \\ \\ With the objective of modelling BNNs, we want to sample model parameters from the posterior distribution. These samples will then be used to make predictions for unseen data by approximating the posterior predictive distribution from equation \ref{eq:post_pred_distribution} via Monte Carlo integration. In order to do this using MCMC we will consider the stochastic process $\{\boldsymbol{\theta}^{(n)}\}_{n\geq 0}$. To make the description of MCMC methods more general in the next sections, we will call the distribution we aim to sample from the target distribution and denote it $\hat{p}(\cdot)$. The target distribution in the case of BNNs is the posterior of the weights, such that $\hat{p}(\boldsymbol{\theta})\equiv p(\boldsymbol{\theta}\mid \boldsymbol{X},\boldsymbol{y})$. \\ \\ Using Markov chain Monte Carlo we aim to sample from a stochastic process satisfying the Markov property. Such a process depends only on the previous state of the process and a set of transition probabilities, or transition densities for a continuous state space, and is called a Markov chain. The Markov chain is defined by an initial distribution for the first state of the chain, $\boldsymbol{\theta}^{(0)}$, and a transition density for the subsequent states. We write the transition density for moving from state $\boldsymbol{\theta}^{(n-1)}$ to another state $\boldsymbol{\theta}^{(n)}$ as $T(\boldsymbol{\theta}^{(n)}\mid \boldsymbol{\theta}^{(n-1)})$. \\ \\ When sampling from the Markov chain we want to make sure that samples are actually coming from the desired distribution $\hat{p}(\boldsymbol{\theta})$, no matter the initial distribution. To ensure this we must generate a Markov chain that has our desired distribution $\hat{p}\lr{\boldsymbol{\theta}}$ as a stationary distribution.
That is, if $\boldsymbol{\theta}^{(n-1)}$ has distribution $\hat{p}(\cdot)$, then $\boldsymbol{\theta}^{(n)}$ will have the same distribution, and this will hold for all future states of the chain. The property of a Markov chain having a stationary distribution $\pi(\cdot)$ is called the invariance property and is defined by \begin{equation*} \pi(\boldsymbol{\theta}^{(n)})=\int T(\boldsymbol{\theta}^{(n)}\mid \boldsymbol{\theta}) \pi(\boldsymbol{\theta})d\boldsymbol{\theta} \end{equation*} A sufficient, but not necessary, condition that ensures that a particular $\hat{p}(\boldsymbol{\theta})$ is a stationary distribution is the detailed balance condition. The condition states that if $T(\cdot\mid\cdot)$ is a transition density which satisfies \begin{equation*} T(\boldsymbol{\theta}^{(n)}\mid \boldsymbol{\theta}^{(n-1)}) \hat{p}(\boldsymbol{\theta}^{(n-1)})= T(\boldsymbol{\theta}^{(n-1)}\mid \boldsymbol{\theta}^{(n)})\hat{p}(\boldsymbol{\theta}^{(n)}) \end{equation*} then $\hat{p}(\cdot)$ is a stationary distribution of the Markov chain associated with the transition density $T(\cdot\mid\cdot)$. This property however only ensures that $\hat{p}\lr{\boldsymbol{\theta}}$ is a stationary distribution, not that it is the only one, meaning that our Markov chain can end up sampling from the wrong distribution even though it satisfies the detailed balance condition. \\ \\ To guarantee that we sample from $\hat{p} \lr{\boldsymbol{\theta}}$ we need to ensure that the Markov chain has only this distribution as a stationary distribution. A Markov chain which has a unique stationary distribution, to which it converges from any initial state, is called an ergodic Markov chain (see e.g. \cite{turkman2019computational}). For a Markov chain on a finite state space to be ergodic, it has to be irreducible and aperiodic. The same goes for a Markov chain on an infinite state space, but a slightly stronger condition called Harris recurrence is needed, see \cite{gamerman2006markov}. \\ \\ Often we discard or burn some of the initial states, since these may not be representative of the desired distribution, as the chain might not have reached the stationary distribution yet. These discarded steps are part of what is called the burn-in period of the Markov chain. When the chain has reached the stationary distribution, it is possible to draw as many identically distributed samples as we wish, but one should note that successive samples will be highly correlated with each other and therefore not necessarily a good representation of the target distribution. \cite{Goodfellow-et-al-2016} suggest mitigating this problem by only returning every $n$-th sample. Because of both the burn-in period and the time required for the chain to return uncorrelated samples, MCMC methods are often computationally expensive. \\ \\ In order to produce truly independent samples, \cite{Goodfellow-et-al-2016} suggest running multiple Markov chains in parallel. They also mention that practitioners often choose the number of chains to run in parallel similarly to the number of examples in a mini-batch, and then draw the samples needed from this set of Markov chains. \subsection{The Metropolis algorithm} \label{sec:Metropolis} The Metropolis algorithm is an MCMC method used to sample from a target distribution. The Metropolis algorithm was originally introduced by \cite{Metropolis1953} and was developed to simulate the states of a system of molecules.
This was later generalized by \cite{hastings70}, so that the algorithm could use a general proposal distribution and not just a symmetric one, as was previously required. The Metropolis algorithm is an attractive MCMC method due to its versatility and simplicity.\\ \\ The algorithm considers a target distribution $\hat{p}(\boldsymbol{\theta})$ and a proposal distribution $q(\boldsymbol{\theta})$. The algorithm generates a Markov chain by starting the chain at some arbitrary point generated from the proposal distribution, $\boldsymbol{\theta}^{(0)}\sim q(\boldsymbol{\theta})$, and then proposing a candidate state for the next state in the chain, $\boldsymbol{\theta}^{\text{cand}}$, where this candidate state is drawn from the proposal distribution conditioned on the previous state, $\boldsymbol{\theta}^{\text{cand}}\sim q(\boldsymbol{\theta}\mid\boldsymbol{\theta}^{(n-1)})$. The next step is to decide whether or not to accept this new state based on its density relative to the old state. If the relative density is larger than one, we accept the new state; if the relative density is less than one, we accept the new state with probability $\frac{\hat{p}(\boldsymbol{\theta}^{\text{cand}})}{\hat{p}(\boldsymbol{\theta}^{(n-1)})}$. In this context the Metropolis algorithm imposes a symmetry condition on the proposal distribution, so that \begin{equation*} q(\boldsymbol{\theta}^{(n)}\mid \boldsymbol{\theta}^{(n-1)})=q(\boldsymbol{\theta}^{(n-1)}\mid \boldsymbol{\theta}^{(n)}) \end{equation*} A pseudocode version of the Metropolis algorithm can be seen in algorithm \ref{algo_2}. % Metropolis Algorithm \begin{algorithm}\label{algo_2} \SetAlgoLined \KwInput{A proposal distribution $q$} \KwOutput{A set of parameters $\boldsymbol{\theta}^{(n)}$ for $n = 1, \dots, N$} Initialize $\boldsymbol{\theta}^{(0)}\sim q(\boldsymbol{\theta})$; \For{$n=1,2,\ldots, N$}{ Propose: $\boldsymbol{\theta}^{\text{cand}} \sim q\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right)$ Acceptance Probability: $ \alpha\left(\boldsymbol{\theta}^{\text{cand}} \mid \boldsymbol{\theta}^{(n-1)}\right)=\min \left\{1, \frac{\hat{p}\left(\boldsymbol{\theta}^{\text{cand}}\right)}{ \hat{p}\left(\boldsymbol{\theta}^{(n-1)}\right)}\right\} $ $u \sim \text { Uniform }(0,1) $ \uIf{$u<\alpha$}{ Accept the proposal: $\boldsymbol{\theta}^{(n)} \leftarrow \boldsymbol{\theta}^{\text{cand}}$\; } \Else{ Reject the proposal: $\boldsymbol{\theta}^{(n)} \leftarrow \boldsymbol{\theta}^{(n-1)}$ \; } } \caption{Metropolis algorithm} \end{algorithm} One apparent problem is that, due to the evidence term, we cannot calculate the posterior, which is our target distribution $\hat{p}\lr{\boldsymbol{\theta}}$, exactly, so we cannot directly calculate $\frac{\hat{p}(\boldsymbol{\theta}^{\text{cand}})}{\hat{p}\left(\boldsymbol{\theta}^{(n-1)}\right)}$. But a nice property of the Metropolis acceptance probability is that we only need a function that is proportional to the posterior, as any constant of proportionality will cancel out in the calculation of the acceptance probability. As the evidence term can be interpreted as a constant of proportionality, it will cancel out, and we can instead use equation \ref{eq:posterior} and calculate the ratio of the likelihood times the prior, which we are usually able to evaluate.
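\\ \\ As a hedged illustration of algorithm \ref{algo_2}, the following minimal Python sketch implements random-walk Metropolis with a symmetric Gaussian proposal for a generic unnormalized log target density; the function names and the proposal scale are illustrative choices rather than part of any particular library.
\begin{verbatim}
import numpy as np

def metropolis(log_target, theta0, n_samples, proposal_scale=0.5, seed=0):
    # Random-walk Metropolis using an unnormalized log target density,
    # e.g. log-likelihood plus log-prior for a BNN posterior.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_samples, theta.size))
    for n in range(n_samples):
        # Symmetric Gaussian proposal centered on the current state.
        candidate = theta + proposal_scale * rng.normal(size=theta.size)
        # Log of the acceptance ratio; the evidence term cancels out.
        log_alpha = log_target(candidate) - log_target(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = candidate            # accept the proposal
        samples[n] = theta               # otherwise keep the previous state
    return samples

# Example: sample from an unnormalized standard bivariate Gaussian.
draws = metropolis(lambda t: -0.5 * np.sum(t ** 2),
                   theta0=np.zeros(2), n_samples=5000)
\end{verbatim}
Note that the sketch only requires an unnormalized log density, so the evidence term never has to be computed.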
\\ \\ To show that the Metropolis algorithm does in fact sample from the target distribution, we must show that the Markov chain converges to a stationary distribution which is our target distribution. If we assume that the Markov chain is ergodic, we can do this by showing that the Metropolis transition satisfies the detailed balance condition explained in section \ref{sec:MCMC}.\\ \\ For $\boldsymbol{\theta}^{(n)}\neq \boldsymbol{\theta}^{(n-1)}$ the Metropolis algorithm has the following transition density, \begin{equation*} T\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right)=q\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right) \min \left(1, \frac{\hat{p}\left(\boldsymbol{\theta}^{(n)}\right)}{ \hat{p}(\boldsymbol{\theta}^{(n-1)})}\right) \end{equation*} We can show that this satisfies the detailed balance condition by \begin{equation*} \begin{aligned} T\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right) \hat{p}(\boldsymbol{\theta}^{(n-1)}) &=q\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right) \min \left(1,\frac{ \hat{p}\left(\boldsymbol{\theta}^{(n)}\right) }{ \hat{p}(\boldsymbol{\theta}^{(n-1)})}\right) \hat{p}(\boldsymbol{\theta}^{(n-1)}) \\ &=q\left(\boldsymbol{\theta}^{(n)} \mid \boldsymbol{\theta}^{(n-1)}\right) \min \left(\hat{p}(\boldsymbol{\theta}^{(n-1)}), \hat{p}\left(\boldsymbol{\theta}^{(n)}\right)\right) \\ &=q\left(\boldsymbol{\theta}^{(n-1)} \mid \boldsymbol{\theta}^{(n)}\right) \min \left(\hat{p}\left(\boldsymbol{\theta}^{(n-1)}\right), \hat{p}(\boldsymbol{\theta}^{(n)})\right) \\ &=q\left(\boldsymbol{\theta}^{(n-1)} \mid \boldsymbol{\theta}^{(n)}\right) \min \left(1, \frac{\hat{p}(\boldsymbol{\theta}^{(n-1)}) }{ \hat{p}\left(\boldsymbol{\theta}^{(n)}\right)}\right) \hat{p}\left(\boldsymbol{\theta}^{(n)}\right) \\ &=T\left(\boldsymbol{\theta}^{(n-1)} \mid \boldsymbol{\theta}^{(n)}\right) \hat{p}\left(\boldsymbol{\theta}^{(n)}\right) \end{aligned} \end{equation*} where the third equality uses the symmetry of the proposal distribution. This shows that the transitions proposed by the algorithm leave the target distribution $\hat{p}(\boldsymbol{\theta})$ invariant, and therefore the samples produced by this Markov chain all have the same stationary distribution $\hat{p}(\boldsymbol{\theta})$, provided that the Markov chain is ergodic. However, according to \cite{neal2012bayesian}, the Metropolis algorithm will not always produce an ergodic Markov chain; this depends on the details of the target distribution and the proposal distribution. If the produced Markov chain is not ergodic, we might end up sampling from a stationary distribution that is not our target distribution. \\ \\ There are many possible choices for the proposal distribution. \cite{neal2012bayesian} mentions that a simple choice could be a Gaussian centered on the current state with standard deviation chosen so that the acceptance probability of the candidate state is reasonably high, since a very low acceptance ratio is usually unwanted, as many rejections mean that we are inefficiently wasting computation time. He also notes that when sampling from high-dimensional and complex distributions, which is often the case with posteriors in BNNs, the standard deviation of such a proposal distribution will often have to be small compared to the extent of the target distribution, as large changes almost certainly will lead to a region of low probability. This will result in highly dependent states, and many steps will be needed to arrive at distant points in the distribution.
As suggested in section \ref{sec:basic_mc}, one way of coping with this problem is to throw away some of the samples or run multiple chains in parallel. \\ \\ \cite{neal2012bayesian} further mentions that this problem with Metropolis is made worse due to its movements taking the inefficient form of a random walk instead of a systematic path, as can be seen in the illustration of Metropolis sampling in figure \ref{fig:MH_sampling}. This inefficient movement yields slower convergence to the target distribution. This drawback will be even more prominent in higher dimensions and for more complex target distributions, according to \cite{gelmanbda04}. \\ \\ A more general version of the algorithm is the one introduced by \cite{hastings70}, which allows for non-symmetric proposal distributions, $q(\boldsymbol{\theta}^{(n)}\mid \boldsymbol{\theta}^{(n-1)}) \neq q(\boldsymbol{\theta}^{(n-1)}\mid \boldsymbol{\theta}^{(n)})$. To correct for this asymmetry in the proposal distribution, the acceptance ratio is replaced by \begin{equation}\label{eq: hasti_pasti} \alpha\left(\boldsymbol{\theta}^{\text{cand}} \mid \boldsymbol{\theta}^{(n-1)}\right)=\min \left\{1, \frac{q\left(\boldsymbol{\theta}^{(n-1)} \mid \boldsymbol{\theta}^{\text{cand}}\right) \hat{p}\left(\boldsymbol{\theta}^{\text{cand}}\right)}{q\left(\boldsymbol{\theta}^{\text{cand}} \mid \boldsymbol{\theta}^{(n-1)}\right) \hat{p}\left(\boldsymbol{\theta}^{(n-1)}\right)}\right\} \end{equation} One should note that the Metropolis algorithm is an instance of this generalized version, since equation \ref{eq: hasti_pasti} is identical to the acceptance ratio in the original algorithm when the proposal distribution is symmetric. \\ \\ The introduction of an asymmetric proposal distribution is often useful when we want to increase the speed of the random walk generated by the Metropolis algorithm. However, this is often not sufficient for complicated models with high-dimensional target distributions, as we face with Bayesian neural networks, see \cite{gelmanbda04}. \begin{figure} \centering \includegraphics[width=\textwidth, height=\textheight, keepaspectratio]{pics/mh_randomWalk_behavior.pdf} \caption{Illustration of the convergence to a target distribution with 300 samples from the Metropolis algorithm. The target distribution is a bivariate Gaussian with mean $\boldsymbol{\mu}= \begin{bmatrix} 0 & 0 \end{bmatrix}$ and covariance matrix $\boldsymbol{\Sigma}= \begin{bmatrix} 1 & 0.6\\ 0.6 & 1 \end{bmatrix}$. The Python code for producing this figure can be found in appendix \ref{app:MH_code}. } \label{fig:MH_sampling} \end{figure} \clearpage \section{Hamiltonian Monte Carlo}\label{sec:HMC} Another way to generate proposals with higher efficiency is to update the parameters by dynamical simulation and then use the Metropolis acceptance rule to accept or reject these proposals. Such an algorithm suppresses the local random-walk behavior and allows the chain to move more rapidly through the target distribution. This method is called Hamiltonian Monte Carlo (HMC) and is commonly used in computational physics and statistics. The algorithm was originally proposed by \cite{Duane1987216} for calculations used in lattice quantum chromodynamics, but was later introduced to the field of computational statistics when it was used for Bayesian neural networks in \cite{neal2012bayesian}.
This means that the Markov chain from which we sample in BNNs is produced analogously to the paths of particles under Hamiltonian dynamics, and we will explain these dynamics using this physical analogy, as it gives a more intuitive idea of HMC. \\ \\ The HMC algorithm reduces the correlation between successive sampled states, compared to the Metropolis-Hastings algorithm in section \ref{sec:Metropolis}, by proposing moves to distant states of the target distribution which maintain a high probability of acceptance due to the properties of the simulated Hamiltonian dynamics. The reduced correlation implies that fewer Markov chain samples are needed to approximate integrals with respect to the target probability distribution. \subsection{Hamiltonian Dynamics} Before we move to the actual HMC algorithm, we will explain the Hamiltonian dynamics from which we produce the Markov chain for the algorithm. Hamiltonian dynamics describe how an object moves around in a system or space. The dynamics are defined by the object's position $\boldsymbol{q}\in \mathbb{R}^d$ and its momentum $\boldsymbol{\rho}\in \mathbb{R}^d$, which in physics corresponds to the object's mass times its velocity at a given point in time. When performing HMC to generate samples from the target distribution $\hat{p}(\boldsymbol{\theta})$, the position variable plays the role of the parameter vector, and we will from now on let $\boldsymbol{\theta}\equiv \boldsymbol{q}$. The object's position is associated with a potential energy $U(\boldsymbol{\theta})$ and the momentum is associated with a kinetic energy $K(\boldsymbol{\rho})$. The sum of the potential and kinetic energy is regarded as the total energy of the system, often called the Hamiltonian \begin{equation*} H(\boldsymbol{\theta},\boldsymbol{\rho})=U(\boldsymbol{\theta})+K(\boldsymbol{\rho}) \end{equation*} An important property of Hamiltonian dynamics is that they conserve the total energy, meaning that the value of the Hamiltonian is constant over time. The evolution of the position and momentum over time is governed by the partial derivatives of the Hamiltonian \begin{equation}\label{eq:hamilton_equations} \begin{split} \frac{d \theta_{i}}{d t}=\frac{\partial H}{\partial \rho_{i}}=\frac{\partial K(\boldsymbol{\rho})}{\partial \rho_{i}} \\ \frac{d \rho_{i}}{d t}=-\frac{\partial H}{\partial \theta_{i}}=-\frac{\partial U(\boldsymbol{\theta})}{\partial \theta_{i}} \end{split} \end{equation} for $i=1,2, \ldots,d$. These are named the Hamiltonian equations and form a system of differential equations. The Hamiltonian equations are useful since, if we are able to evaluate the partial derivatives in equation \ref{eq:hamilton_equations}, we are able to predict the position and momentum of the object at any point in the future $t^\prime>t$. \cite{neal2012bayesian} shows that these dynamics, along with the energy-conserving property of the Hamiltonian, result in the process being reversible and preserving volume in state space, which in turn provides a stationary distribution. \subsection{Discretizing Hamiltonian Equations} The Hamiltonian equations describe how an object evolves in continuous time, but to simulate Hamiltonian dynamics on a computer we have to approximate the differential equations, which is done by discretizing time. We do this by splitting time into small steps of size $\varepsilon$. \\ \\ The usual discretization scheme for simulating the Hamiltonian equations is the Leapfrog method.
The Leapfrog method takes a half step to update the momentum variable, then a full step to update the position variable, and finally another half step to update the momentum \begin{equation*} \begin{split} \rho_{i}^{(t+\varepsilon / 2)}=\rho_{i}^{(t)}-(\varepsilon / 2) \frac{\partial U}{\partial \theta_{i}^{(t)}} \\ \theta_{i}^{(t+\varepsilon)}=\theta_{i}^{(t)}+\varepsilon \frac{\partial K}{\partial \rho_{i}^{(t+\varepsilon / 2)}} \\ \rho_{i}^{(t+\varepsilon)}=\rho_{i}^{(t+\varepsilon / 2)}-(\varepsilon / 2) \frac{\partial U}{\partial \theta_{i}^{(t+\varepsilon)}} \end{split} \end{equation*} According to \cite{neal2012mcmc}, the Leapfrog method preserves the properties of Hamiltonian dynamics of being reversible and volume-preserving, ensuring that we sample from a stationary distribution. \subsection{The Hamiltonian and Probability Distributions} We have now explained what a Hamiltonian is and how we can simulate its dynamics by using the Leapfrog method. We will now connect this to the MCMC theory from the previous sections in order to explain how to use HMC to sample from the posterior of the parameters in a BNN. To make this connection, we need to relate the target distribution and the Hamiltonian, such that we can use the Hamiltonian equations to sample from the target distribution. A way of doing this, proposed by \cite{neal2012bayesian}, is to use a concept from statistical mechanics known as the canonical (Boltzmann) distribution. We can write a probability distribution on $\boldsymbol{\theta}$ under the canonical distribution as \begin{equation*} p(\boldsymbol{\theta})\propto \exp\left(\frac{-E(\boldsymbol{\theta})}{T}\right) \end{equation*} where $E(\boldsymbol{\theta})$ can be any energy function defined over $\boldsymbol{\theta}$. $T$ is often called the temperature of the system and is usually chosen to be equal to 1, as it plays no role in this application, see \cite{neal2012bayesian}. One should note that any probability distribution that is nowhere zero can be put into this form by letting $E(\boldsymbol{\theta})=-\log p(\boldsymbol{\theta})-\log Z$, for any convenient choice of normalization constant $Z$. Since the Hamiltonian is an energy function for the joint state of both position and momentum, a joint distribution can be defined by \begin{equation*} p(\boldsymbol{\theta},\boldsymbol{\rho})\propto \exp(-H(\boldsymbol{\theta},\boldsymbol{\rho})) = \exp(-U(\boldsymbol{\theta}))\exp(-K(\boldsymbol{\rho})) \end{equation*} Assuming independence between $\boldsymbol{\theta}$ and $\boldsymbol{\rho}$, we can by the equation above write $U(\boldsymbol{\theta})=-\log p(\boldsymbol{\theta})$ and $K(\boldsymbol{\rho})=-\log p(\boldsymbol{\rho})$, meaning that the Hamiltonian can be interpreted as the negative log joint distribution on $(\boldsymbol{\theta},\boldsymbol{\rho})$. \\ \\ We now have a joint distribution, in terms of the Hamiltonian function, which we know how to simulate from. But we are in fact only interested in samples of the position variable $\boldsymbol{\theta}$, which are samples from our target distribution, and not samples of the momentum variable $\boldsymbol{\rho}$, which is only introduced to make the algorithm move faster through the parameter space. This means that we can choose the marginal distribution of the momentum arbitrarily.
Practitioners often choose it to be Gaussian, $\boldsymbol{\rho}\sim \mathcal{N}\left(0, \boldsymbol{\Sigma} \right)$, where $\boldsymbol{\Sigma}$ is a symmetric, positive-definite covariance matrix, often chosen to be diagonal such that $\boldsymbol{\rho}$ is a $d$-dimensional multivariate Gaussian with the $d$ variables being independent. We follow the simple approach from \cite{hoffman2011nouturn} and let $\boldsymbol{\Sigma}$ be the identity matrix $\boldsymbol{I}$. This makes the dynamics of equation \ref{eq:hamilton_equations} simplify to \begin{equation*} \begin{split} \frac{d \theta_{i}}{d t}&=\rho_i \\ \frac{d \rho_{i}}{d t}&=\frac{\partial \log p(\boldsymbol{\theta})}{\partial \theta_i} \end{split} \end{equation*} A more rigorous examination of possible choices for the covariance matrix is provided by \cite{neal2012mcmc}. \subsection{The Hamiltonian Monte Carlo Algorithm} We start the HMC algorithm from an initial state $\lr{\boldsymbol{\theta}^{(0)},\boldsymbol{\rho}^{(0)}}$, and then we simulate the Hamiltonian dynamics for a number of leapfrog steps of size $\varepsilon$. The states generated for the position and momentum variables at the end of the Leapfrog simulation are used as a proposal for a new state $(\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}})$. This proposal needs to be accepted or rejected according to a criterion, because the leapfrog discretization introduces an error in its approximation of the continuous Hamiltonian dynamics. This criterion is the Metropolis acceptance criterion, \begin{equation}\label{eq:hmc_acceptance} \begin{split} \alpha\left((\boldsymbol{\theta},\boldsymbol{\rho}) \mapsto (\boldsymbol{\theta}^{\text{cand}} , \boldsymbol{\rho}^{\text{cand}} )\right) &= \min\left\{1, \frac{p(\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}})}{p(\boldsymbol{\theta},\boldsymbol{\rho})} \right\}\\ &= \min\left\{1,\exp\left(\log p(\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}})- \log p(\boldsymbol{\theta}, \boldsymbol{\rho}) \right)\right\}\\ &= \min \left\{1,\exp\left(-H(\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}}) +H(\boldsymbol{\theta},\boldsymbol{\rho})\right) \right\} \end{split} \end{equation} This means that we follow the same logic as in the Metropolis algorithm, but use the distributions provided by the Hamiltonian dynamics. With $\boldsymbol{\rho}\sim \mathcal{N}\left(0,\boldsymbol{I}\right)$ this is equivalent to \begin{equation*} \alpha\left((\boldsymbol{\theta},\boldsymbol{\rho}) \mapsto (\boldsymbol{\theta}^{\text{cand}} , \boldsymbol{\rho}^{\text{cand}} )\right) =\min\left\{1,\frac{\exp\left(\mathcal{L}(\boldsymbol{\theta}^{\text{cand}})-\frac{1}{2}\boldsymbol{\rho}^{\text{cand}\top}\boldsymbol{\rho}^{\text{cand}}\right)}{\exp\left(\mathcal{L}(\boldsymbol{\theta})-\frac{1}{2}\boldsymbol{\rho}^\top\boldsymbol{\rho}\right)}\right\} \end{equation*} where $\mathcal{L}\left(\boldsymbol{\theta}\right)$ is the log-probability density of the target distribution on $\boldsymbol{\theta}$. \\ \\ We see from the last line of equation \ref{eq:hmc_acceptance} that if we could simulate the Hamiltonian dynamics exactly, the Metropolis acceptance probability would always be equal to 1 due to the energy conservation property of the Hamiltonian. Since this is usually not possible, the acceptance probability will often be lower than 1.
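\\ \\ A minimal Python sketch of a single HMC iteration with $L$ leapfrog steps and an identity mass matrix is shown below; it assumes access to functions returning the log-density $\mathcal{L}(\boldsymbol{\theta})$ and its gradient, and all function and variable names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def leapfrog(theta, rho, grad_log_p, eps):
    # Half step for the momentum, full step for the position,
    # and another half step for the momentum.
    rho = rho + 0.5 * eps * grad_log_p(theta)
    theta = theta + eps * rho
    rho = rho + 0.5 * eps * grad_log_p(theta)
    return theta, rho

def hmc_step(theta, log_p, grad_log_p, eps, L, rng):
    rho0 = rng.normal(size=theta.size)            # momentum drawn from N(0, I)
    theta_cand, rho_cand = theta.copy(), rho0.copy()
    for _ in range(L):
        theta_cand, rho_cand = leapfrog(theta_cand, rho_cand, grad_log_p, eps)
    # Metropolis acceptance based on the change in the Hamiltonian.
    current_H = -log_p(theta) + 0.5 * rho0 @ rho0
    candidate_H = -log_p(theta_cand) + 0.5 * rho_cand @ rho_cand
    if np.log(rng.uniform()) < current_H - candidate_H:
        return theta_cand                         # accept the proposal
    return theta                                  # reject and keep the old state

# Example: one HMC step on a standard bivariate Gaussian target.
rng = np.random.default_rng(0)
theta = hmc_step(np.zeros(2), log_p=lambda t: -0.5 * t @ t,
                 grad_log_p=lambda t: -t, eps=0.1, L=20, rng=rng)
\end{verbatim}
Only the log-density and its gradient are required, and both the step size and the number of leapfrog steps are tuning parameters, as discussed below.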
If the discretization error is small, the term $H(\boldsymbol{\theta},\boldsymbol{\rho})-H(\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}})$ in the exponent of equation \ref{eq:hmc_acceptance} will be small, yielding a high acceptance rate. This way of making proposals is beneficial since it allows the Markov chain to make large and relatively uncorrelated moves in the state space while keeping a high acceptance probability. HMC is written in pseudocode in algorithm \ref{alg:HMC}. It takes an initial value for the parameters $\boldsymbol{\theta}^{(0)}$, which is the starting point of the algorithm. The input $\mathcal{L}$ is the log-probability density of $\boldsymbol{\theta}$, which is defined to be equal to the negative potential energy function. In BNNs this is identical to the log-posterior distribution over the neural network parameters. One needs to be able to evaluate at least the posterior distribution and its gradient, or alternatively something proportional to the posterior. In our case, where the target distribution is the posterior distribution, we know that it is proportional to the likelihood times the prior, which we in most cases are able to evaluate. The input $M$ is the total number of iterations one would like to perform. \\ \\ The algorithm also relies on a step size $\varepsilon$ that defines the size of each leapfrog step. If $\varepsilon$ is chosen to be too large, the leapfrog simulation of the Hamiltonian will be inaccurate and lead to a very low acceptance rate, making the algorithm waste computation time. On the other hand, if we choose $\varepsilon$ to be too small, we will waste computation time by taking needlessly small steps. The sampling is also affected by a hyperparameter $L$ that defines how many leapfrog steps the algorithm performs before proposing a new candidate state. A very small value for $L$ will give successive samples that lie close to each other, which results in the same undesirable random-walk behavior as in the Metropolis algorithm from section \ref{sec:Metropolis}. Too large a value for $L$ might produce trajectories that loop back and retrace their steps, a behavior called U-turns. This behavior results in the algorithm inefficiently wasting time by sampling from the same area of the distribution again and again. Tuning these parameters can be hard, and one is often forced to rely on heuristics based on preliminary runs, see \cite{neal2012mcmc}. \\ \\ In figure \ref{fig:HMC_Example} we have illustrated how HMC proposes a candidate sample for a bivariate $\mathcal{N}\left(\boldsymbol{0},\boldsymbol{I}\right)$ target distribution. In subfigures (a) and (b) we have chosen proper values for $\varepsilon$ and $L$, such that the algorithm generates proposals that are appropriately far from the previous ones. In subfigures (c) and (d) we have chosen larger values for $\varepsilon$ and $L$, and we can see that the algorithm starts to loop back, which results in nearly identical proposals at each iteration; a large proportion of the target distribution will therefore never be visited. In the next section we will look into a modification of the HMC algorithm that avoids this kind of U-turn behavior. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{pics/HMC_Example.png} \caption{A simulation example with HMC for a bivariate Gaussian target distribution with $\boldsymbol{\mu}= \begin{bmatrix} 0 & 0 \end{bmatrix}$ and covariance matrix $\boldsymbol{\Sigma}= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$.
Subfigures (a) and (b) show an HMC simulation example with a proper choice of $\varepsilon$ and $L$, where the target distribution is thoroughly explored. Subfigures (c) and (d) show the same example, modified with a poor choice of values for $\varepsilon$ and $L$, where the target distribution is poorly approximated. This is especially clear in the plots of the marginal distributions, where the histograms are far from the plotted correct distribution. The poor choice results in the U-turn effect, which yields an ineffective exploration of the target distribution. The example has been generated with the interactive gallery provided by \cite{feng} \textcopyright MIT.} \label{fig:HMC_Example} \end{figure}
\begin{algorithm}\label{alg:HMC}
\SetAlgoLined
\SetKwFunction{Leapfrog}{Leapfrog}
\KwInput{An initial parameter $\boldsymbol{\theta}^{(0)}$}
\KwInput{A log-probability distribution $\mathcal{L}$}
\KwInput{Total number of iterations $M$}
\KwInput{Size of leapfrog-steps $\varepsilon$}
\KwInput{Number of leapfrog-steps before generating candidate state, $L$.}
\KwOutput{Set of accepted parameters}
$\text{Given } \boldsymbol{\theta}^{(0)}, \mathcal{L}, M, \varepsilon, L,$\\
\For{m=1,2 \dots, M}{
Sample $\boldsymbol{\rho}^{(0)} \sim \mathcal{N}(0, \boldsymbol{I})$ \\
Set $ \boldsymbol{\theta}^{\text{cand}} \leftarrow \boldsymbol{\theta}^{(m-1)}, \boldsymbol{\rho}^{\text{cand}} \leftarrow \boldsymbol{\rho}^{(0)}$\\
\For{i = 1 to L}{
Set $\boldsymbol{\theta}^{\text{cand}}, \boldsymbol{\rho}^{\text{cand}} \leftarrow \Leapfrog(\boldsymbol{\theta}^{\text{cand}}, \boldsymbol{\rho}^{\text{cand}}, \varepsilon)$
}
Negate momentum: $\boldsymbol{\rho}^{\text{cand}} \leftarrow - \boldsymbol{\rho}^{\text{cand}}$\\
Compute acceptance probability:\\
$\alpha=\min \left\{1, \frac{\exp \left\{\mathcal{L}(\boldsymbol{\theta}^{\text{cand}})-\frac{1}{2} \boldsymbol{\rho}^{\text{(cand)}^\top} \boldsymbol{\rho}^{\text{cand}}\right\}}{\exp \left\{\mathcal{L}\left(\boldsymbol{\theta}^{(m-1)}\right)-\frac{1}{2} \boldsymbol{\rho}^{(0)^\top} \boldsymbol{\rho}^{(0)}\right\}}\right\}$\\
Sample $u\sim \text{Uniform}(0,1)$\\
\uIf{$u<\alpha$}{
Accept the proposal: $\boldsymbol{\theta}^{(m)} \leftarrow \boldsymbol{\theta}^{\text{cand}}$\\
}
\Else{
Reject the proposal: $\boldsymbol{\theta}^{(m)} \leftarrow \boldsymbol{\theta}^{(m-1)}$ \\
}
}
\SetKwProg{Fn}{Function}{:}{\KwRet{$\boldsymbol{\theta}^{\text{cand}},\boldsymbol{\rho}^{\text{cand}}$}}
\Fn{\Leapfrog{$\boldsymbol{\theta}$, $\boldsymbol{\rho}$, $\varepsilon$}}{
Set $\boldsymbol{\rho}^{\text{cand}} \leftarrow \boldsymbol{\rho}+(\varepsilon / 2) \nabla_{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta})$\\
Set $\boldsymbol{\theta}^{\text{cand}} \leftarrow \boldsymbol{\theta}+\varepsilon \boldsymbol{\rho}^{\text{cand}}$\\
Set $\boldsymbol{\rho}^{\text{cand}} \leftarrow \boldsymbol{\rho}^{\text{cand}}+(\varepsilon / 2) \nabla_{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta}^{\text{cand}})$
}
\caption{Hamiltonian Monte Carlo}
\end{algorithm}
\clearpage
\subsection{No-U-Turn Hamiltonian Monte Carlo}\label{sec:nuts}
In this section we present an algorithm introduced by \cite{hoffman2011nouturn} and evaluated by \cite{nishio_arakawa_nouturn}, whose explanation we found clearer and more concise. The No-U-Turn sampler (NUTS) extends HMC by eliminating the need to specify the trajectory length $L$. The algorithm gets its name from avoiding the possible U-turning behavior shown in figure \ref{fig:HMC_Example} (c) and (d).
This is done by introducing a criterion that tells us when to stop simulating the dynamics in order to prevent a possible U-turn. The authors define this criterion to be the point at which performing another leapfrog step would no longer increase the distance between the proposed state $\boldsymbol{\theta}^{\text{cand}}$ and the initial value $\boldsymbol{\theta}$. More specifically, they choose a criterion based on the derivative with respect to time of half the squared distance between the initial parameter $\boldsymbol{\theta}$ and the current state $\boldsymbol{\theta}^{\text{cand}}$, meaning that leapfrog steps are performed until
\begin{equation*}
\begin{split}
\frac{d}{d t} \frac{(\boldsymbol{\theta}^{\text{cand}}-\boldsymbol{\theta})^\top (\boldsymbol{\theta}^{\text{cand}}-\boldsymbol{\theta})}{2}&=(\boldsymbol{\theta}^{\text{cand}}-\boldsymbol{\theta})^\top \frac{d}{d t}(\boldsymbol{\theta}^{\text{cand}}-\boldsymbol{\theta})\\
&=(\boldsymbol{\theta}^{\text{cand}}-\boldsymbol{\theta})^\top \boldsymbol{\rho}^{\text{cand}} < 0
\end{split}
\end{equation*}
However, \cite{hoffman2011nouturn} note that by doing this we do not have the guarantee of time reversibility, so the sampling algorithm might not converge to the correct distribution. NUTS overcomes this problem by using slice sampling and applying a doubling method suggested by \cite{neal_slice_sampling}. \\ \\ NUTS augments the distribution of HMC, $p \lr{\boldsymbol{\theta}, \boldsymbol{\rho}} \propto \exp{\left(\mathcal{L} \lr{\boldsymbol{\theta}} - \frac{1}{2} \boldsymbol{\rho}^\top \boldsymbol{\rho}\right)}$, to include a slice variable $u$ so that the joint probability of $\boldsymbol{\theta}, \boldsymbol{\rho}$ and $u$ is
\begin{equation*}
p \lr{\boldsymbol{\theta}, \boldsymbol{\rho}, u} \propto \mathbf{1} \lrs{ u \in \lrs{0,\exp{\left(\mathcal{L} \lr{\boldsymbol{\theta}} - \frac{1}{2} \boldsymbol{\rho}^\top \boldsymbol{\rho}\right)}}}
\end{equation*}
meaning that the un-normalized marginal probability of $\boldsymbol{\theta}$ and $\boldsymbol{\rho}$ (obtained by integrating over $u$) is
\begin{equation*}
p \lr{\boldsymbol{\theta}, \boldsymbol{\rho}} \propto \exp{\left(\mathcal{L} \lr{\boldsymbol{\theta}} - \frac{1}{2} \boldsymbol{\rho}^\top \boldsymbol{\rho}\right)}
\end{equation*}
The conditional probabilities $p \lr{u \mid \boldsymbol{\theta}, \boldsymbol{\rho}}$ and $p \lr{\boldsymbol{\theta}, \boldsymbol{\rho} \mid u}$ are each uniform as long as the condition
\begin{equation} \label{eq:nuts_unif_condition}
u \leq \exp{\left(\mathcal{L} \lr{\boldsymbol{\theta}} - \frac{1}{2} \boldsymbol{\rho}^\top \boldsymbol{\rho}\right)}
\end{equation}
is satisfied. The challenge of slice sampling is to find the bounds of the region for which this condition is satisfied. \cite{neal_slice_sampling} proposes a doubling method, where the size of the initial segment containing the current value of $\boldsymbol{\theta}$ is randomly chosen and afterwards expanded by doubling its size until the endpoints lie outside of the region. The expanding directions are randomly chosen to be leapfrog steps forward or backward in time to satisfy reversibility. \\ \\ NUTS generates a finite set of candidate states $\lr{\boldsymbol{\theta}, \boldsymbol{\rho}}$ by iteratively doubling its size.
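The stopping rule above amounts to a simple dot-product check. A minimal Python sketch (purely illustrative; the function and variable names are our own) could look as follows:
\begin{verbatim}
import numpy as np

def keeps_moving_away(theta_cand, theta_init, rho_cand):
    # True while another leapfrog step would still increase the
    # distance between the proposal and the initial position
    return np.dot(theta_cand - theta_init, rho_cand) >= 0
\end{verbatim}
NUTS applies this kind of check at both ends of the simulated trajectory, as described next.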
The doubling process is stopped to satisfy the condition
\begin{equation*}
\lr{\boldsymbol{\theta}^+ - \boldsymbol{\theta}^-}^\top \boldsymbol{\rho}^- < 0 \quad \text{or} \quad \lr{\boldsymbol{\theta}^+ - \boldsymbol{\theta}^-}^\top \boldsymbol{\rho}^+ < 0
\end{equation*}
where $\lr{\boldsymbol{\theta}^-, \boldsymbol{\rho}^-}$ and $\lr{\boldsymbol{\theta}^+, \boldsymbol{\rho}^+}$ are the leftmost and rightmost states, respectively, of all $\lr{\boldsymbol{\theta}, \boldsymbol{\rho}}$ generated by the doubling process. \\ \\ A subset of proposal candidates $\lr{\boldsymbol{\theta}, \boldsymbol{\rho}}$, denoted by $\mathbb{C}$, is selected from the doubling process to satisfy the condition in equation \ref{eq:nuts_unif_condition}. The new values $\lr{\boldsymbol{\theta}^*, \boldsymbol{\rho}^*}$ are then sampled uniformly from $\mathbb{C}$. \\ \\ To further improve this algorithm, \cite{hoffman2011nouturn} use the following transition kernel in each step of doubling
\begin{equation*}
\resizebox{\textwidth}{!}{$ T\left(\boldsymbol{\theta}^{*}, \boldsymbol{\rho}^{*} \mid \boldsymbol{\theta},\boldsymbol{\rho}, \mathbb{C}\right)=\begin{cases} \frac{\mathbf{1}\left[\boldsymbol{\theta}^{*}, \boldsymbol{\rho}^{*} \in \mathbb{C}^{\text{new}}\right]}{\left|\mathbb{C}^{\text{new}}\right|} & \text{ when } \left|\mathbb{C}^{\text{new}}\right|>\left|\mathbb{C}^{\text{old}}\right|\\ \frac{\left|\mathbb{C}^{\text{new}}\right|}{\left|\mathbb{C}^{\text{old}}\right|} \frac{\mathbf{1}\left[\boldsymbol{\theta}^{*}, \boldsymbol{\rho}^{*} \in \mathbb{C}^{\text{new}}\right]}{\left|\mathbb{C}^{\text{new}}\right|}+\left(1-\frac{\left|\mathbb{C}^{\text{new}}\right|}{\left|\mathbb{C}^{\text{old}}\right|}\right) \mathbf{1}\left[\left(\boldsymbol{\theta}^{*}, \boldsymbol{\rho}^{*}\right)=(\boldsymbol{\theta}, \boldsymbol{\rho})\right] &\text{ when } \left|\mathbb{C}^{\text{new}}\right|\leq\left|\mathbb{C}^{\text{old}}\right| \end{cases}$ }
\end{equation*}
where $\mathbb{C}^{\text{new}}$ is the subset of $\lr{\boldsymbol{\theta}, \boldsymbol{\rho}}$ added by the last step of the doubling process and $\mathbb{C}^{\text{old}}$ is the disjoint subset of $\mathbb{C}$ so that $\mathbb{C} = \mathbb{C}^{\text{new}} \cup \mathbb{C}^{\text{old}}$. This transition kernel proposes a move from a state in $\mathbb{C}^{\text{old}}$ to a random state in $\mathbb{C}^{\text{new}}$ and accepts this move with probability $\min\left\{1,\left|\mathbb{C}^{\text{new}}\right|/\left|\mathbb{C}^{\text{old}}\right|\right\}$. The authors show that $T$ satisfies the detailed balance condition, so that it leaves the uniform distribution over $\mathbb{C}$ invariant. According to \cite{nishio_arakawa_nouturn} this transition kernel permits a memory-efficient implementation and produces larger jumps on average than simple uniform sampling. \\ \\ NUTS is especially efficient as it can automatically choose a step size that achieves an acceptance probability around a desired level.
The stepsize $\varepsilon$ for the $j$th iteration of a NUTS Markov chain is tuned as follows
\begin{equation*}
\begin{split}
\log\lr{\varepsilon_{j+1}} &= \mu - \frac{\sqrt{j}}{\gamma} \frac{1}{j + j_0} \sum_{i=1}^j \lr{\delta - \alpha_i}\\
\log \lr{\bar{\varepsilon}_{j+1}} &= \eta_j \log \lr{\varepsilon_{j+1}} + \lr{1 - \eta_j} \log \lr{\bar{\varepsilon}_j} \\
\varepsilon_{j+1} &= \bar{\varepsilon}_{j+1}
\end{split}
\end{equation*}
where $\alpha_j$ is the actual acceptance probability of the $j$th iteration, $\delta$ is the desired average acceptance probability, $\mu$ is a freely chosen point that the iterates $\varepsilon_j$ shrink towards, $\gamma$ is a free parameter that controls the amount of shrinkage towards $\mu$ and $j_0$ is a free parameter that dampens early exploration. \\ \\ \cite{hoffman2011nouturn} introduce the variable $\eta_j = j^{-\kappa}$ with $\kappa < 1$ to give more recent iterates more weight. They show that this way of adapting the stepsize guarantees that $\alpha \rightarrow \delta$. They recommend setting $\mu = \log \lr{10 \varepsilon_1}$ and $\delta \approx 0.6$. \\ \\ NUTS tunes $\varepsilon$ during a predetermined warm-up phase and fixes this value thereafter. Since the algorithm accepts or rejects $\lr{\boldsymbol{\theta}^*, \boldsymbol{\rho}^*}$ from multiple candidates, an alternative statistic to the Metropolis acceptance probability must be defined. They do this by defining for each iteration the acceptance probability
\begin{equation*}
\alpha_{j}=\frac{1}{\left|B_{j}\right|} \sum_{\boldsymbol{\theta}, \boldsymbol{\rho} \in B_{j}} \min \left\{1, \frac{p\left(\boldsymbol{\theta}^{j}, \boldsymbol{\rho}^{j}\right)}{p\left(\boldsymbol{\theta}^{j-1}, \boldsymbol{\rho}^{j, 0}\right)}\right\}
\end{equation*}
where $B_{j}$ denotes the set of position--momentum states visited during the $j$th iteration. The pseudocode for NUTS can be seen in algorithm \ref{alg:nuts}.
\begin{algorithm} \label{alg:nuts}
\SetAlgoLined
\KwInput{Initial parameters $\boldsymbol{\theta}^{(0)}$}
\KwInput{A log-probability distribution $\mathcal{L}$}
\KwInput{Initial size of leapfrog-steps $\bar{\varepsilon}_0$}
\KwInput{Desired average acceptance probability $\delta$}
\KwInput{Shrinkage target for the iterated $\varepsilon_j$ values, $\mu$}
\KwInput{Parameter controlling shrinkage towards $\mu$, $\gamma$}
\KwInput{Parameter that controls dampening of early exploration, $j_0$}
\KwInput{Parameter that controls how much more weight is given to more recent iterates, $\kappa < 1$}
\KwInput{Total number of iterations $J$}
\KwInput{Total number of iterations for adapting the size of leapfrog steps, $J^{\text{adapt}}$}
\KwOutput{Samples from the target distribution}
\For{j=0, \dots, J}{
Sample momentum $\boldsymbol{\rho}^{\text{init}} \sim N(0, \boldsymbol{I})$\\
Sample auxiliary variable $u \sim \operatorname{Uniform}\left(0, \exp \left(\mathcal{L}\left(\boldsymbol{\theta}^{(j)}\right)-\frac{1}{2}\boldsymbol{\rho}^{\text{init}^{\top}} \boldsymbol{I}^{-1} \boldsymbol{\rho}^{\text{init}}\right)\right)$\\
Generate $\mathbb{C}$ by using the doubling method with transition kernel $T$. \\
Compute acceptance probability: $\alpha_{j}=\frac{1}{\left|B_{j}\right|} \sum_{\boldsymbol{\theta}, \boldsymbol{\rho} \in B_{j}} \min \left\{1, \frac{p\left(\boldsymbol{\theta}^{j+1}, \boldsymbol{\rho}^{j+1}\right)}{p\left(\boldsymbol{\theta}^{j}, \boldsymbol{\rho}^{\text{init}}\right)}\right\}$\\
Accept the proposal $\left(\boldsymbol{\theta}^{*}, \boldsymbol{\rho}^{*}\right)$ with probability $\alpha_{j}$.
\\
\uIf{$j\leq J^{\text{adapt}}$}{
$\log\lr{\varepsilon_{j+1}} \leftarrow \mu - \frac{\sqrt{j}}{\gamma} \frac{1}{j + j_0} \sum_{i=1}^j \lr{\delta - \alpha_i}$\\
$\log \lr{\bar{\varepsilon}_{j+1}} \leftarrow j^{-\kappa}\log \lr{\varepsilon_{j+1}} + \lr{1 - j^{-\kappa}} \log \lr{\bar{\varepsilon}_j}$ \\
$\varepsilon_{j+1} \leftarrow \bar{\varepsilon}_{j+1}$
}
\Else{
$\varepsilon_{j+1}\leftarrow\varepsilon_{J^{\text{adapt}}}$
}
}
\caption{No-U-Turn Sampler with Dual Averaging. One can easily change this pseudocode to one that runs until a certain number of samples has been collected. For a more detailed pseudocode see \cite{hoffman2011nouturn}.}
\end{algorithm}
\clearpage
\section{Priors}\label{sec:priors}
From section \ref{sec:bayesian_stat} and the example given in section \ref{sec:simple_BNN} it is seen that Bayesian inference starts with a prior for the model parameters, which is supposed to embody one's prior beliefs about the assigned task. The prior is an important component and choosing a bad prior can affect the resulting posterior greatly. That said, the prior has a diminishing effect on the posterior as the number of samples grows, as the likelihood will concentrate the posterior around a few highly likely parameters. One might think that if one has no qualified prior belief, then a weak prior, like a wide uniform distribution, is a safe bet, but this can have various unforeseen consequences as pointed out by \cite{lemoine2019} and \cite{sarma_kay2020}. \\ \\ The prior component is however more than a dangerous element that threatens the quality of the posterior. The prior provides a principled mechanism for researchers to incorporate previous research and knowledge into the model. Priors can also be beneficial for small sample sizes, as the prior acts as a regularizer and reduces the chance of overfitting. In neural networks, however, the relationship between the parameters and the problem can be very abstract and not as intuitive as in other machine learning models such as linear regression or support vector machines, so having qualified prior beliefs about the weights might not be so easy. \\ \\ \cite{neal2012bayesian} stresses that even though it can seem like BNNs are threatened by the lack of a suitable prior, this is not the case, as much past work provides useful criteria for selecting suitable priors, even without a full understanding of what the prior over the parameters will mean in terms of the output of the network. \cite{mackay1991} and \cite{MacKay1992} have produced results, which \cite{neal2012bayesian} describes as at least reasonable, by giving the parameters Gaussian prior distributions. He lets the standard deviation of these distributions be selected as a hyperparameter, which allows the model to adapt to the data. \\ \\ According to \cite{neal2012bayesian}, our prior knowledge will often be too unspecific to fix the parameter values chosen for the prior distribution, even if we have complete insight into their effects on the prior. We may then wish to treat these values as unknown hyperparameters, giving them a higher-level broad prior distribution, which we call a hyper-prior. \cite{neal2012bayesian} refers to these as hierarchical models. One benefit of such models is that the appropriate degree of regularization for the task can be determined automatically from the data, see \cite{mackay1991} and \cite{MacKay1992}.
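To make the idea of a hierarchical prior concrete, one common construction (given here purely as an illustration; the specific distributions are our own choice and not necessarily those used in the cited works) places a Gaussian prior on each weight and a higher-level prior on the shared standard deviation,
\begin{equation*}
\begin{split}
w_i \mid \sigma &\sim \mathcal{N}\left(0, \sigma^2\right), \quad i = 1, \ldots, d\\
\sigma &\sim p(\sigma)
\end{split}
\end{equation*}
so that the joint prior is $p(\boldsymbol{w}, \sigma) = p(\sigma)\prod_{i=1}^d \mathcal{N}\left(w_i \mid 0, \sigma^2\right)$. A broad hyper-prior $p(\sigma)$, for example a half-Cauchy distribution or an inverse-gamma distribution on $\sigma^2$, then lets the data determine how strongly the weights are regularized.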
{ "alphanum_fraction": 0.7477984927, "avg_line_length": 127.5379426644, "ext": "tex", "hexsha": "077af2d76da2477004d8fcfd3e081659956d291b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_forks_repo_path": "bayesian_neural_networks.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_issues_repo_path": "bayesian_neural_networks.tex", "max_line_length": 1613, "max_stars_count": null, "max_stars_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_stars_repo_path": "bayesian_neural_networks.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 20168, "size": 75630 }
\section{Data preprocessing}\label{sec:data-preprocessing}
To fulfill the requirements of the adapted transfer learning model, a preprocessing tool with a number of configurable settings was developed. The collected data sequences are series of coordinates, so it is necessary to represent them as 3-dimensional pictures, which are the input to the convolutional neural network. The described tool transforms these sequences into multidimensional arrays of numbers and also scales them to the required input size of the used transfer learning model.

\begin{figure}[!hbt]
    \center
    \includegraphics[width=\linewidth]{resources/bot_sequences}
    \captionof{figure}{Collected and preprocessed bot sequences with applied linear interpolation}
    \label{fig:bot_sequences}
\end{figure}

\begin{figure}[!hbt]
    \center
    \includegraphics[width=\linewidth]{resources/user_sequences}
    \captionof{figure}{Collected and preprocessed user's sequences with applied linear interpolation}
    \label{fig:user_sequences}
\end{figure}

The data consists of isolated points that are the result of discrete sampling, so to improve the model's performance the points, which represent the original coordinates on the user's screen, are interpolated using a linear interpolation technique. The operation connects the discrete points with straight lines, so the coordinates of consecutive events are combined into a chain that forms a continuous curve on the input pictures. The input data with applied linear interpolation and resizing are visualized in Fig.~\ref{fig:bot_sequences} and Fig.~\ref{fig:user_sequences}. It was verified on the collected dataset that the interpolated data results in better model accuracy than the isolated points.

The presented tool also splits the data into training and testing sets using a provided ratio and assigns the corresponding label to each sequence based on the provided identifier of the bot user. Using this tool it is also possible to increase the number of bot samples by repeating the original bot set several times in the case of an imbalanced dataset. The output of the preprocessor is a tuple of the training and testing datasets along with their labels, which can be used directly as the input of a neural network model.

In further work, in order to improve the performance of the model, data augmentation was introduced and applied. A detailed description will be provided in Section~\ref{sec:machine-learning-model}, but it is worth mentioning that this approach required changes in the preprocessor tool. The preprocessor was extended to allow saving interpolated and scaled images to a directory provided by the user. Along with the extensions in the preprocessor, the main script was enhanced with the ability to load the dataset from image files. The previous option --- loading from binary files --- was unchanged.
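As an illustration of the interpolation and scaling steps described above, the following Python sketch rasterizes a sequence of mouse coordinates into an image by drawing straight lines between consecutive events (the function and parameter names are our own and do not correspond to the actual implementation; a single-channel canvas is used for brevity):
\begin{verbatim}
import numpy as np
import cv2

def sequence_to_image(points, screen_size, target_size=(224, 224)):
    # points: list of (x, y) screen coordinates of consecutive mouse events
    canvas = np.zeros((screen_size[1], screen_size[0]), dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        # linear interpolation: connect consecutive events with a line
        cv2.line(canvas, (int(x0), int(y0)), (int(x1), int(y1)), 255)
    # scale to the input size expected by the transfer learning model
    return cv2.resize(canvas, target_size)
\end{verbatim}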
{ "alphanum_fraction": 0.8132611637, "avg_line_length": 92.375, "ext": "tex", "hexsha": "a841dc8fdb27c231739fd73847644a84792e5d9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "24fe0f9dca4fa0b18137fdebd976feef9997d895", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Mouse-BB-Team/Thesis", "max_forks_repo_path": "thesis/chapters/implementation/preprocessing_subchapter.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "24fe0f9dca4fa0b18137fdebd976feef9997d895", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Mouse-BB-Team/Thesis", "max_issues_repo_path": "thesis/chapters/implementation/preprocessing_subchapter.tex", "max_line_length": 258, "max_stars_count": null, "max_stars_repo_head_hexsha": "24fe0f9dca4fa0b18137fdebd976feef9997d895", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Mouse-BB-Team/Thesis", "max_stars_repo_path": "thesis/chapters/implementation/preprocessing_subchapter.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 569, "size": 2956 }
\section{Assignment 1}
\subsection{Data Definition Language}
\subsubsection{Table Details}
\begin{itemize}
\item Hotel (hotel\_no(10, varchar), name(20, varchar), address(30, varchar))
\item Room (room\_no(10, varchar), hotel\_no(10, varchar), type(10, varchar), price(20, number))
\item Booking (hotel\_no(10, varchar), guest\_no(10, varchar), date\_from(date), date\_to(date), room\_no(10, varchar))
\item Guest (guest\_no(10, varchar), name(30, varchar), address(30, varchar))
\end{itemize}
\subsubsection{Questions}
\begin{enumerate}
\item Create table hotel that contains all hotel details; hotel\_no is the primary key.
\item Create table room that contains all room details for a hotel. room\_no and hotel\_no form the composite primary key.
\item Create table booking that contains booking details for a guest. The primary key consists of hotel\_no, guest\_no and date\_from.
\item Create table guest that contains guest details. The primary key is guest\_no.
\item Alter table hotel and add an attribute "status".
\item Alter table guest and modify the size of guest name to 30.
\item Rename table hotel to cityhotel.
\item Drop table room.
\item Describe table hotel.
\item Describe table guest.
\end{enumerate}
{ "alphanum_fraction": 0.7346774194, "avg_line_length": 25.8333333333, "ext": "tex", "hexsha": "9f4ac9e1346d564f192ba1e9dd0ded8fb602b63f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "705f3041190fba4fdb49a7dc18f1f1d8e10c1dbe", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ANSEduGroup/6th-sem-labs", "max_forks_repo_path": "Documentation/DBMS/a1_q.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "705f3041190fba4fdb49a7dc18f1f1d8e10c1dbe", "max_issues_repo_issues_event_max_datetime": "2017-04-10T09:16:28.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-10T09:15:30.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ANSEduGroup/6th-sem-labs", "max_issues_repo_path": "Documentation/DBMS/a1_q.tex", "max_line_length": 72, "max_stars_count": null, "max_stars_repo_head_hexsha": "705f3041190fba4fdb49a7dc18f1f1d8e10c1dbe", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ANSEduGroup/6th-sem-labs", "max_stars_repo_path": "Documentation/DBMS/a1_q.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 351, "size": 1240 }
\documentclass[revision-guide.tex]{subfiles}
%% Current Author:
\setcounter{chapter}{11}
\begin{document}
\chapter{Electric Fields}
\begin{content}
\item concept of an electric field
\item uniform electric fields
\item capacitance
\item electric potential
\item electric field of a point charge
\end{content}
\subsection{Candidates should be able to:}
\spec{explain what is meant by an electric field and recall and use $E=\frac{F}{q}$ for electric field strength}
An electric field is the region of space in which electrical forces are exerted on charged bodies. The definition of electric field strength is force per unit charge, and this is written in symbols as:
\[ E=\frac{F}{q} \]
Electric field strength therefore has units of \si{\newton\per\coulomb}.
\spec{recall that applying a potential difference to two parallel plates stores charge on the plates and produces a uniform electric field in the central region between them}
If a potential difference is applied between two parallel plates a distance $d$ apart, then an electric field exists between them. Except at the edges, the field is \emph{uniform}. This is shown in figure \ref{parallelplates} by the fact that the field lines are parallel.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
\draw[thick] (0,0) -- (10,0);
\draw[thick] (0,3) -- (10,3);
\foreach \x in {1.2,2,...,8.8} {
	\draw[-{Latex[length=5mm, width=2mm]},red] (\x,3) -- (\x,1.25);
	\draw[red] (\x,1.25) -- (\x,0);
	}
\foreach \x in {0,0.4,...,10} {
	\draw (\x,3.2) node {+};
	\draw (\x, -0.2) node {-};
	}
\end{tikzpicture}
\end{center}
\caption{Uniform field between parallel plates}
\label{parallelplates}
\end{figure}
Note that field lines show the direction in which a force would act on a \emph{positive} charge, so they run from positive to negative.
\spec{derive and use the equations Fd = QV and $E=\frac{V}{d}$ for a charge moving through a potential difference in a uniform electric field}
From the definition of potential difference we can say that the work done moving through a potential difference is
\[ W = QV \]
and this is equal to the force multiplied by the distance moved,
\[ Fd \]
If the movement is through a uniform field we can equate these two to give
\[ Fd = QV \]
The definition of electric field strength gives $F=QE$ and therefore substitution and re-arrangement give
\[ E = \frac{V}{d} \]
\spec{recall that the charge stored on parallel plates is proportional to the potential difference between them}
This is an application of Gauss' law, which relates the electric field strength around an object to the charge contained within a surface. It is enough to recall this fact at Pre-U.
\spec{recall and use $C = \frac{Q}{V}$ for capacitance}
This is the definition of capacitance and should be learnt. It can also be calculated from the gradient of a graph of $Q$ against $V$.
\spec{derive, recall and use $W = \frac{1}{2}QV$ for the energy stored by a capacitor, derive the equation from the area under a graph of charge stored against potential difference, and derive and use related equations such as $W = \frac{1}{2}CV^2$}
If a capacitor is partially charged with a charge $Q$, then to increase its potential difference by $\delta V$ will require a small amount of work $\delta W$ such that
\[ \delta W = Q \delta V \]
If a graph is plotted of $Q$ against $V$ then this can be seen as the area of a small section. Thus, the energy required to charge a capacitor from uncharged to a p.d.
of $V$ is given by the area of a graph of $Q$ against $V$ from zero to $V$, hence
\[ W = \frac{1}{2}QV \]
We can substitute for each quantity in turn to give the following variations of the equation:
\[ W = \frac{1}{2}QV = \frac{1}{2}CV^2 = \frac{1}{2}\frac{Q^2}{C} \]
This can, of course, also be done with integration
\[ W = \int_0^V Q dV = \int_0^V CV dV = \frac{1}{2}CV^2 \]
\spec{analyse graphs of the variation with time of potential difference, charge and current for a capacitor discharging through a resistor}
When a capacitor discharges through a resistor the potential difference across the capacitor drives a current around the circuit. Since it is this current which removes charge from the capacitor, the rate of change of charge on the capacitor is proportional to the charge on the capacitor.
\[ \frac{dQ}{dt} = -I = -\frac{V}{R} = -\frac{Q}{RC} \]
This is a first order differential equation and has a solution
\[ Q = Q_0 e^{-\frac{t}{RC}} \]
The substitutions $Q=CV$ and then $I = \frac{V}{R}$ can be used to get similar equations for each of the above. The graph of this change is exponential decay. The key features are the initial value (e.g. $V_0$) and the fact that all three quantities tend to zero.
\spec{define and use the time constant of a discharging capacitor}
The time constant, $\tau$, is defined as follows:
\[ \tau = RC \]
This means that in one time constant the current, voltage and charge on a capacitor have declined to $\frac{1}{e}$ of their initial values. A common rule-of-thumb from electronics is that a capacitor takes five time constants to discharge. If we plug this into the equation
\[ V = V_0 e^{-\frac{t}{RC}} = V_0 e^{-5} \approx 0.0067\ V_0 \]
So the voltage across the capacitor has declined to less than 1\% of its initial value.
\spec{analyse the discharge of a capacitor using equations of the form $x=x_0e^{\frac{-t}{RC}}$}
Much of this has been covered above. One important point to note is that the analysis of capacitor decay is usually carried out by plotting a graph of $\ln{V}$ against time. This changes the equation to give
\[ \ln{V} = -\frac{t}{RC} + \ln{V_0} \]
Therefore the gradient of the graph is $-\frac{1}{RC}$ and the intercept $\ln{V_0}$.
\spec{understand that the direction and electric field strength of an electric field may be represented by field lines (lines of force), and recall the patterns of field lines that represent uniform and radial electric fields}
Field lines are a way of visualising the field. Electric field lines run from positive to negative and show the direction in which a positively charged particle would move. The density of field lines represents the strength of the field. For example in figure \ref{solenoid} the region \textbf{A} contains a uniform strong field which is stronger than the field at \textbf{B} as the field lines are more closely spaced.
\begin{figure}[ht]
\begin{tikzpicture}
\foreach \x in {1,2,3,4,5} {
	\draw (\x-0.6,0) circle (0.2cm);
	\draw (\x-0.6,3) circle (0.2cm);
	}
% top loop
\draw[-{latex[length=5mm,width=2mm]}] (0,2.5) -- (2.5,2.5);
\draw (2.5,2.5) -- (5,2.5);
\draw (5,2.5) .. controls (7,2.5) and (7,4) .. (5,4);
\draw[-{latex[length=5mm,width=2mm]}] (5,4) -- (2.5,4);
\draw (2.5,4) -- (0,4);
\draw (0,4) .. controls (-2,4) and (-2,2.5) .. (0,2.5);
% 2nd loop
\draw[-{latex[length=5mm,width=2mm]}] (0,2) -- (2.5,2);
\draw (2.5,2) -- (5,2);
\draw (5,2) .. controls (8,2) and (8,5) .. (5,5);
\draw[-{latex[length=5mm,width=2mm]}] (5,5) -- (2.5,5);
\draw (2.5,5) -- (0,5);
\draw (0,5) ..
controls (-3,5) and (-3,2) .. (0,2);
% middle loop
\draw[-{latex[length=5mm,width=2mm]}] (-2,1.5) -- (2.5,1.5);
\draw (2.5,1.5) -- (7,1.5);
% 4th loop
\draw[-{latex[length=5mm,width=2mm]}] (0,1) -- (2.5,1);
\draw (2.5,1) -- (5,1);
\draw (5,1) .. controls (8,1) and (8,-2) .. (5,-2);
\draw[-{latex[length=5mm,width=2mm]}] (5,-2) -- (2.5,-2);
\draw (2.5,-2) -- (0,-2);
\draw (0,-2) .. controls (-3,-2) and (-3,1) .. (0,1);
% bottom loop
\draw[-{latex[length=5mm,width=2mm]}] (0,0.5) -- (2.5,0.5);
\draw (2.5,0.5) -- (5,0.5);
\draw (5,0.5) .. controls (7,0.5) and (7,-1) .. (5,-1);
\draw[-{latex[length=5mm,width=2mm]}] (5,-1) -- (2.5,-1);
\draw (2.5,-1) -- (0,-1);
\draw (0,-1) .. controls (-2,-1) and (-2,0.5) .. (0,0.5);
% Labels
\draw (2.5,1.5) node[anchor=south] {\textbf{A}};
\draw (2.5,4.5) node {\textbf{B}};
\end{tikzpicture}
\caption{Field around a solenoid}
\label{solenoid}
\end{figure}
\spec{understand electric potential and equipotentials}
Electric potential is defined as the potential energy per unit charge due to an electric field. Space without an electric field is defined as having zero potential. An equipotential is a line or surface on which the potential is a constant value. Therefore no work is done against the electric force when moving along an equipotential. On diagrams equipotentials always cross field lines at right angles.
\spec{understand the relationship between electric field and potential gradient, and recall and use $E = -\frac{dV}{dX}$}
The strength of the electric field at any point is equal to the negative of the potential gradient. This can be seen most easily in a uniform field. If a unit charge is moved through a distance $\Delta x$ within a uniform field $E$ then the work done by the field is $F\Delta x = -q\Delta V$ (for a unit charge). The negative sign comes from the fact that when the field does positive work on a positive charge, the charge moves towards lower potential, so $\Delta V$ is negative. Thus, the change in energy per unit charge is given by $\Delta V = -\frac{F}{q}\Delta x$. As $\frac{F}{q}$ is the field strength $E$, this can be rearranged to give $E = -\frac{\Delta V}{\Delta x}$. This can be generalised for a non-uniform field as
\[ E = - \frac{dV}{dx} \]
\spec{recognise and use \[ F = \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} \] for point charges}
The equation above is known as Coulomb's law and enables the calculation of the force between two point charges separated by a distance $r$.
\spec{derive and use $E = \frac{Q}{4\pi\epsilon_0 r^2} $ for the electric field due to a point charge}
This is simply from the definition of electric field strength:
\[ E = \frac{F}{Q} = \frac{1}{Q_2} \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} = \frac{Q}{4\pi\epsilon_0 r^2} \]
\spec{*use integration to derive $W = \frac{Q_1 Q_2}{4\pi\epsilon_0 r}$ from $F=\frac{Q_1 Q_2}{4\pi\epsilon_0 r^2}$ for point charges}
Since the potential is defined to be zero at infinity, the electrostatic potential energy, $W$, of a charged particle is equal to the work done by the field in moving the particle from a distance $r$ to infinity. Since work done is equal to $\int F dx$ it follows:
\[ W = \int_{r}^\infty \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} dr = \frac{Q_1 Q_2}{4\pi\epsilon_0 r}\]
You can obtain this alternatively by thinking about the work done bringing a charged particle from infinity to a distance $r$. If you do this, it is important to remember that the work is done against the electrostatic force, so the integrand gains a minus sign and the limits of integration are reversed, giving the same answer.
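As a quick numerical illustration of this result (the numbers are our own choice), the electrostatic potential energy of two protons separated by $r = 1\times 10^{-9}\,\si{\metre}$ is
\[ W = \frac{Q_1 Q_2}{4\pi\epsilon_0 r} = \frac{(1.6\times 10^{-19}\,\si{\coulomb})^2}{4\pi\epsilon_0 \times 1\times 10^{-9}\,\si{\metre}} \approx 2.3\times 10^{-19}\,\si{\joule} \]
which is roughly $1.4\,\si{\electronvolt}$, the sort of energy scale involved in chemical bonds.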
\spec{*recognise and use $W = \frac{Q_1 Q_2}{4\pi\epsilon_0 r }$ for the electrostatic potential energy for point charges.}
This is simply applying the equation above. It allows the calculation of changes in potential energy and, possibly, their transfer to other forms of energy (e.g. kinetic).
\end{document}
{ "alphanum_fraction": 0.6944770858, "avg_line_length": 58.8457446809, "ext": "tex", "hexsha": "abc996bb7fba75d4a511c0a894adae09272508d0", "lang": "TeX", "max_forks_count": 22, "max_forks_repo_forks_event_max_datetime": "2021-05-09T13:48:59.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-19T16:16:46.000Z", "max_forks_repo_head_hexsha": "d0f993750d660df38f05085ccf3b351d2ea3dd7d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sirioq/physics-PreU", "max_forks_repo_path": "12-electric-fields.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "d0f993750d660df38f05085ccf3b351d2ea3dd7d", "max_issues_repo_issues_event_max_datetime": "2021-06-24T08:14:03.000Z", "max_issues_repo_issues_event_min_datetime": "2016-12-19T16:46:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sirioq/physics-PreU", "max_issues_repo_path": "12-electric-fields.tex", "max_line_length": 704, "max_stars_count": 14, "max_stars_repo_head_hexsha": "d0f993750d660df38f05085ccf3b351d2ea3dd7d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sirioq/physics-PreU", "max_stars_repo_path": "12-electric-fields.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-08T21:47:07.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-13T19:37:54.000Z", "num_tokens": 3372, "size": 11063 }
\subsection{Acceleration Field for a Rigid Body}
\begin{frame}
The acceleration vector $\vb{a}{}=\xvec[:]{\bm{r}}$ is obtained by differentiating the velocity with respect to time $t$:
\[\vb{a}{}= \frac{d}{dt}(\vb{v}{O} + \vb{\omega}{}\times\vb{r}{}) = \xvec[.]{\bm{v}}_{\bm{O}}+\xvec[.]{\bm{\omega}}\times\vb{r}{} + \vb{\omega}{}\times\xvec[.]{\bm{r}} = \vb{a}{0}+\vb{\alpha}{}\times\vb{r}{}+\vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{})\]
where $\displaystyle \vb{\alpha}{} = \xvec[.]{\bm{\omega}} = \dot{\omega}_x\ih + \dot{\omega}_y\jh + \dot{\omega}_z\kh + \omega_x\frac{d\ih}{dt} + \omega_y\frac{d\jh}{dt}+\omega_z\frac{d\kh}{dt}$\\
$\hskip 22mm\displaystyle = \alpha_x\ih + \alpha_y\jh + \alpha_z\kh + \omega_x\vb{\omega}{}\times\ih + \omega_y\vb{\omega}{}\times\jh + \omega_z\vb{\omega}{}\times\kh$\\
$\hskip 22mm\displaystyle = \alpha_x\ih + \alpha_y\jh + \alpha_z\kh + \vb{\omega}{}\times\vb{\omega}{}$\\
$\hskip 22mm\displaystyle = \alpha_x\ih + \alpha_y\jh + \alpha_z\kh$
\begin{block}{Formula}
For any point of a rigid body, the acceleration is:
\[ \vb{a}{}=\vb{a}{0}+\vb{\alpha}{}\times\vb{r}{}+\vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{}) \]
where $\vb{\alpha}{}\times\vb{r}{}$ is the tangential acceleration\\\hskip10.5mm $\vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{})$ is the centripetal acceleration
\end{block}
\end{frame}
\begin{frame}
In matrix form, the acceleration vector $\vb{a}{M}$ of point $M$ relative to the fixed reference frame $(O_0x_0y_0z_0)$ can be written as:
\[ \vb{a}{M}= \begin{bmatrix} a_x\\a_y\\a_z \end{bmatrix}= \begin{bmatrix} a_{Ox}+(z\alpha_y-y\alpha_z)+\omega_y(y\omega_x-x\omega_y)+\omega_z(z\omega_x-x\omega_z)\\ a_{Oy}+(x\alpha_z-z\alpha_x)+\omega_z(z\omega_y-y\omega_z)+\omega_x(x\omega_y-y\omega_x)\\ a_{Oz}+(y\alpha_x-x\alpha_y)+\omega_x(x\omega_z-z\omega_x)+\omega_y(y\omega_z-z\omega_y) \end{bmatrix} \]\hskip7mm
$\displaystyle= \vb{a}{O}+\vb{\alpha}{}\times\vb{r}{MO} + \vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{MO}) = \vb{a}{O} +\vb{a}{MO}^{\bm t}+ \vb{a}{MO}^{\bm n}$\\
where
\[ \vb{a}{MO}^{\bm t}= \begin{cases} \perp \vb{r}{MO}\rightturn \vb{\alpha}{}\\\alpha r_{MO} \end{cases} \]
\[ \hskip 6mm\vb{a}{MO}^{\bm n}= \begin{cases} \uparrow\downarrow \vb{r}{MO}\\\omega^2r_{MO}=\frac{v_{MO}^2}{r_{MO}} \end{cases} \]
For planar motions: $\vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{MO})=-\vb{\omega}{}^2\vb{r}{MO}$
\end{frame}
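\begin{frame}
As a quick numerical illustration of the formula (the numbers are chosen arbitrarily), consider a planar case with $\vb{a}{0}=\bm{0}$, $\vb{\omega}{}=2\kh\,\mathrm{rad/s}$, $\vb{\alpha}{}=3\kh\,\mathrm{rad/s^2}$ and $\vb{r}{}=0.5\ih\,\mathrm{m}$:
\[ \vb{a}{}=\vb{a}{0}+\vb{\alpha}{}\times\vb{r}{}+\vb{\omega}{}\times(\vb{\omega}{}\times\vb{r}{}) = 1.5\jh - 2\ih \quad \mathrm{(m/s^2)} \]
The first term is the tangential acceleration of magnitude $\alpha r$ and the second is the centripetal acceleration of magnitude $\omega^2 r$, pointing back towards $O$.
\end{frame}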
{ "alphanum_fraction": 0.6106231702, "avg_line_length": 50.8723404255, "ext": "tex", "hexsha": "37217d26a25be7df93d3c5cc1f61a16cac3fa7c4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "HungNguyenDang/literate-meme", "max_forks_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Introduction/Acceleration_field_rigid_body.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "HungNguyenDang/literate-meme", "max_issues_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Introduction/Acceleration_field_rigid_body.tex", "max_line_length": 259, "max_stars_count": null, "max_stars_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "HungNguyenDang/literate-meme", "max_stars_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Introduction/Acceleration_field_rigid_body.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1051, "size": 2391 }
\section{The mixed effects estimator}
{ "alphanum_fraction": 0.775, "avg_line_length": 10, "ext": "tex", "hexsha": "f0c97343942e44cecdec2c2c7807114f8467c6b6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/generalLinearModels/06-00-The_mixed_effects_estimator.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/generalLinearModels/06-00-The_mixed_effects_estimator.tex", "max_line_length": 37, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/generalLinearModels/06-00-The_mixed_effects_estimator.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9, "size": 40 }
\newpage
\subsection{Shapes}
\label{sec:shapes}
This section presents an overview of the shape plugins that are released along with the renderer.

In Mitsuba, shapes define surfaces that mark transitions between different types of materials. For instance, a shape could describe a boundary between air and a solid object, such as a piece of rock. Alternatively, a shape can mark the beginning of a region of space that isn't solid at all, but rather contains a participating medium, such as smoke or steam. Finally, a shape can be used to create an object that emits light on its own.

Shapes are usually declared along with a surface scattering model (named ``BSDF'', see \secref{bsdfs} for details). This BSDF characterizes what happens \emph{at the surface}. In the XML scene description language, this might look like the following:
\begin{xml}
<scene version=$\MtsVer$>
    <shape type="... shape type ...">
        ... $\code{shape}$ parameters ...

        <bsdf type="... bsdf type ...">
            ... $\code{bsdf}$ parameters ...
        </bsdf>

        <!-- Alternatively: reference a named BSDF that
             has been declared previously

             <ref id="myBSDF"/>
        -->
    </shape>
</scene>
\end{xml}
When a shape marks the transition to a participating medium (e.g. smoke, fog, ...), it is furthermore necessary to provide information about the two media that lie at the \emph{interior} and \emph{exterior} of the shape. This informs the renderer about what happens in the region of space \emph{surrounding the surface}.
\begin{xml}
<scene version=$\MtsVer$>
    <shape type="... shape type ...">
        ... $\code{shape}$ parameters ...

        <medium name="interior" type="... medium type ...">
            ... $\code{medium}$ parameters ...
        </medium>

        <medium name="exterior" type="... medium type ...">
            ... $\code{medium}$ parameters ...
        </medium>

        <!-- Alternatively: reference named media that
             have been declared previously

             <ref name="interior" id="myMedium1"/>
             <ref name="exterior" id="myMedium2"/>
        -->
    </shape>
</scene>
\end{xml}
You may have noticed that the previous XML example did not make any mention of surface scattering models (BSDFs). In Mitsuba, such a shape declaration creates an \emph{index-matched} boundary. This means that incident illumination will pass through the surface without undergoing any kind of interaction. However, the renderer will still use the information available in the shape to correctly account for the medium change.

It is also possible to create \emph{index-mismatched} boundaries between media, where some of the light is affected by the boundary transition:
\begin{xml}
<scene version=$\MtsVer$>
    <shape type="... shape type ...">
        ... $\code{shape}$ parameters ...

        <bsdf type="... bsdf type ...">
            ... $\code{bsdf}$ parameters ...
        </bsdf>

        <medium name="interior" type="... medium type ...">
            ... $\code{medium}$ parameters ...
        </medium>

        <medium name="exterior" type="... medium type ...">
            ... $\code{medium}$ parameters ...
        </medium>

        <!-- Alternatively: reference named media and BSDF
             instances that have been declared previously

             <ref id="myBSDF"/>
             <ref name="interior" id="myMedium1"/>
             <ref name="exterior" id="myMedium2"/>
        -->
    </shape>
</scene>
\end{xml}
These are the standard ways in which a shape can be declared. The following subsections discuss the available types in greater detail.
{ "alphanum_fraction": 0.6495071194, "avg_line_length": 38.0416666667, "ext": "tex", "hexsha": "5ea219dd55ff27f0a9eb2f5bcf08be44aed08b38", "lang": "TeX", "max_forks_count": 30, "max_forks_repo_forks_event_max_datetime": "2022-03-11T06:55:34.000Z", "max_forks_repo_forks_event_min_datetime": "2017-07-21T03:56:45.000Z", "max_forks_repo_head_hexsha": "0b405d92c12d257e2581366542762c9f0c3facce", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NTForked-ML/pbrs", "max_forks_repo_path": "mitsuba-af602c6fd98a/doc/section_shapes.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "0b405d92c12d257e2581366542762c9f0c3facce", "max_issues_repo_issues_event_max_datetime": "2019-07-01T05:44:41.000Z", "max_issues_repo_issues_event_min_datetime": "2017-08-15T18:22:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NTForked-ML/pbrs", "max_issues_repo_path": "mitsuba-af602c6fd98a/doc/section_shapes.tex", "max_line_length": 119, "max_stars_count": 139, "max_stars_repo_head_hexsha": "0b405d92c12d257e2581366542762c9f0c3facce", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NTForked-ML/pbrs", "max_stars_repo_path": "mitsuba-af602c6fd98a/doc/section_shapes.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T20:33:10.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-21T00:22:34.000Z", "num_tokens": 837, "size": 3652 }
% !TeX spellcheck = <none>
\documentclass[./\jobname.tex]{subfiles}
\begin{document}
\definecolor{light-gray}{gray}{0.9}
\chapter{Task Description}
This project is about designing and building a 3-axis robot. The robot is equipped with two cameras and a red laser-pointer. The user can use a green laser-pointer to direct the robot to a different position. In order to avoid permanent sight damage, the robot should avoid shining the laser into the faces of surrounding humans. The following list contains the main working steps:
\begin{itemize}
	\item design and construct the 3-axis robot
	\item manufacture and assemble the robot parts
	\item develop the backwards kinematics
	\item implement the backwards kinematics in C and compile it to a DLL
	\item find the green laser point in the 3D stereo image with OpenCV in Python
	\item drive the red laser point of the robot to the green laser point of the human
	\item planned but optional:
	\begin{itemize}
		\item plan a path from the current position to the green laser position
		\item implement a controller to correct the deviation between red and green laser point
		\item avoid human faces along the way
	\end{itemize}
\end{itemize}
\chapter{Design}
The robot consists of 3 Dynamixel MX-64AT motors, 2 Microsoft LifeCam HD-3000 cameras and a laser module DB635-1-3-FA(14x45)-ADJ from Picotronic. Aluminium frames are attached to the motors to build the framework of the robot. The cameras and the laser-pointer are connected to the robot with aluminium brackets. The design of the robot and its components is established with SolidWorks. The whole robot is mounted on aluminium profiles to improve handling and to store the electrical hardware. Besides the cables, a power supply for the motors and laser pointer, an emergency stop button and communication components for the motors are required. The cameras are directly connected to the laptop or computer via the USB ports. This means that the laptop requires 3 USB sockets. A connection via a USB hub failed as the OpenCV library could not find the cameras. \\
The table \ref{tab:camera_before_after} shows how the cameras were modified in order to properly attach them to the robot tool. \\
The table \ref{tab:3d_rander} displays a 3D render of the completely assembled robot.\\
Finally, the finished robot can be seen in figure \ref{fig:final_assemgly}.
\begin{table}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\begin{tabular}{c c}
		\includegraphics[width=0.6\textwidth]{img/foto/camera_with_stand.jpg}
		\includegraphics[width=0.49\textwidth]{img/foto/camera_without_stand.jpg}
	\end{tabular}
	}
	\unterschrift{comparison of the cameras before and after modification}{}{}
	\label{tab:camera_before_after}
\end{table}
\begin{table}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\begin{tabular}{c c}
		\includegraphics[width=0.75\textwidth]{../img/pdf/robot_3d_side.pdf}
		\includegraphics[width=1\textwidth]{../img/pdf/robot_3d_front.pdf}
	\end{tabular}
	}
	\unterschrift{3D render of the assembled robot in SolidWorks}{}{}
	\label{tab:3d_rander}
\end{table}
\begin{figure}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\includegraphics[width=0.8\textwidth]{img/foto/final_robot_photo.jpg}
	}
	\unterschrift{final assembly of the robot}{}{}
	\label{fig:final_assemgly}
\end{figure}
\chapter{Calculation}
This chapter describes the mathematical formulations that are needed to control the robot. This includes the backwards kinematics as well as the formulas for setting $\theta_{0,1,2}$.
\section{Backwards Kinematics}
The following figure \ref{fig:robot_coord} shows the coordinate systems that are used to obtain the link parameters shown in table \ref{tab:link_params} below. The link parameters are established using the Denavit-Hartenberg convention described in \cite[p. 70--79]{Craig2018}. With the knowledge of the link parameters, the transformation matrices are generated in the next step. The full matrix can be seen in equation \ref{eq:transformationmatrix}.
\begin{figure}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\includegraphics[width=0.4\textwidth]{img/pdf/robot_axis_coord.pdf}
	}
	\unterschrift{robot axis coordinate systems used to calculate the link parameters}{}{}
	\label{fig:robot_coord}
\end{figure}
\begin{table}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\begin{tabular}{|c|c|c|c|c|}
		\hline
		$i$ & $\alpha_{i-1}$ & $a_{i-1}$ & $d_i$ & $\theta_{i-1}$\\
		\hline
		1 & 0 & 0 & 100 & $\theta_0$\\
		2 & -90 & 50 & 0 & $\theta_1 - 90$\\
		3 & 0 & 100 & 0 & $\theta_2 + 180$\\
		F & 90 & 0 & 68 & $0$ \\
		\hline
	\end{tabular}
	}
	\unterschrift{table of link parameters}{}{}
	\label{tab:link_params}
\end{table}
\begin{equation}
\label{eq:transformationmatrix}
\begin{split}
& T = \\
& \begin{bmatrix} -c0c1s2 -c0c2s1 & -s0 & c0c1c2 - c0s1s2 & a2c0 + dF (c0c1c2 - c0s1s2) + a3c0s1 \\ -c1s0s2 -c2s0s1 & c0 & c1c2s0 - s0s1s2 & a2s0 + dF (c1c2s0 - s0s1s2) + a3s0s1 \\ s1s2 - c1c2 & 0 & -c1s2 - c2s1 & d1 + a3c1 + dF(-c1s2 - c2s1) \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\\
\end{split}
\end{equation}
\subsection{Implementation}
For controlling the robot, several functions are written in C and compiled to a DLL. The functions are built on the Dynamixel library \cite{Robotis2019}. The DLL provides the following methods:
\begin{itemize}
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|int robot_start();|} \\ Sets up the communication with the robot and returns the port number. This number must be provided to every other function. Furthermore, this function sets the main parameters of every motor, such as the velocity limits.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|void robot_stop(int port_num);|} \\ Stops the robot by driving to the home position and frees the device with the given port number.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|void robot_home(int port_num);|} \\ Separate function for driving the robot to the home position.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|int robot_drive(float z, float alpha, float beta, int port_num);|} \\ Tells the robot to drive to a specific position specified by $\alpha$, $\beta$ and $z$. The motion towards this position is not planned.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|int robot_reached_target(float z, float alpha, float beta, \ \ \ \ \ int port_num);|} \\ This function checks if the robot is at the specified $\alpha$, $\beta$ and $z$ position and returns 1 if it has reached the target (otherwise 0).
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|void robot_set_theta(float theta0, float theta1, float theta2, \ \ \ \ \ int port_num);|} \\ Instead of specifying a global position, this function allows the axis angles $\theta$ to be set directly.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|float robot_get_theta0(int port_num);|} \\ Returns the current angle $\theta_0$ of the robot.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|float robot_get_theta1(int port_num);|} \\ Returns the current angle $\theta_1$ of the robot.
	\item \colorbox{light-gray}{\lstinline[basicstyle=\ttfamily\color{black}]|float robot_get_theta2(int port_num);|} \\ Returns the current angle $\theta_2$ of the robot.
\end{itemize}
The functions \textit{robot\textunderscore drive} and \textit{robot\textunderscore set\textunderscore theta} are non-blocking. This means that the function returns before the robot has reached the target. Blocking functions can easily be created outside the DLL by simply polling the functions \textit{robot\textunderscore reached\textunderscore target} or \textit{robot\textunderscore get\textunderscore theta}.
\section{Goal Robot Configuration}
\label{sec:calc_robot_config}
The laser point on the wall can be described by only 2 DOF, its vertical and horizontal coordinates. To track the laser point, the robot likewise only has to change 2 DOF. Even though the robot has 3 rotation axes, it only has to satisfy 2 DOF, because $\theta_1$ and $\theta_2$ directly depend on each other for this task. Therefore, one of these angles can be set to any arbitrary value. In this case $\theta_1$ is kept constant at 0°. With the 2 cameras and the knowledge of the transformation matrices of the robot, the image coordinates of the laser point are obtained and transformed into 3D coordinates in the robot base coordinate system. As a result, the rotation angles of the motors can be calculated. Finding $\theta_0$ is a simple geometric problem and can be easily solved with trigonometric functions (see figure \ref{fig:3d_point_base}). The equation \ref{eq:theta_2} for $\theta_2$ is slightly more involved. Additionally, the lengths and heights of the robot components have to be taken into consideration.
\begin{figure}[H]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\includegraphics[width=0.8\textwidth]{img/pdf/robot_3d_point_base.pdf}
	}
	\unterschrift{3D laser point in the robot base coordinate system}{}{}
	\label{fig:3d_point_base}
\end{figure}
\begin{equation}
\label{eq:theta_0}
\theta_0 = \arctan \left( \frac{Y_0}{X_0} \right)
\end{equation}
\begin{equation}
\label{eq:theta_2}
\theta_2 = -1 \cdot \arctan \left( \frac{ Z_0 - 100 - 100 \cos(\theta_1)}{\sqrt{X_{0}^{2} + Y_{0}^{2}} - 50 - 100 \sin(\theta_1)} \right)
\end{equation}
\chapter{Stereo Vision}
The idea of this project is to drive directly to the correct point, without using a closed loop controller of any sort. This is done by measuring the 3D point with a stereo camera and calculating back to the desired robot configuration (as shown in section \ref{sec:calc_robot_config}). In order to measure the correct point, the cameras need to be calibrated. Furthermore, a reliable method for finding the laser point is needed. All vision related tasks are done with OpenCV \cite{2014opencv}.
\section{Camera Calibration}
First, a camera calibration application is written that allows the user to take images, calculates the two single camera matrices and estimates the translation and rotation between the two cameras. The flowchart in figure \ref{fig:cam_cal_app_flowchart} shows the main outline of the camera calibration app.
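The core of such a calibration can be sketched with a few OpenCV calls (a simplified illustration; the variable names, the chessboard size and the image lists are placeholders, not the actual implementation):
\begin{lstlisting}[language=Python]
import cv2
import numpy as np

# object points of a 9x6 chessboard (placeholder size), all with z = 0
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points_l, img_points_r = [], [], []
for img_l, img_r in image_pairs:  # grayscale image pairs (placeholder)
    ok_l, corners_l = cv2.findChessboardCorners(img_l, (9, 6))
    ok_r, corners_r = cv2.findChessboardCorners(img_r, (9, 6))
    if ok_l and ok_r:
        obj_points.append(objp)
        img_points_l.append(corners_l)
        img_points_r.append(corners_r)

# single camera calibrations followed by the stereo calibration
_, K_l, d_l, _, _ = cv2.calibrateCamera(obj_points, img_points_l, img_size, None, None)
_, K_r, d_r, _, _ = cv2.calibrateCamera(obj_points, img_points_r, img_size, None, None)
_, K_l, d_l, K_r, d_r, R, T, _, _ = cv2.stereoCalibrate(
    obj_points, img_points_l, img_points_r, K_l, d_l, K_r, d_r, img_size)
\end{lstlisting}
The matrices R and T describe the rotation and translation of the right camera relative to the left one, which is what the plausibility check mentioned below inspects.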
Three quality measures are deployed to verify that the matrices are correct:
\begin{itemize}
	\item measuring a known 3D structure with the calibrated stereo camera tool
	\item using the same pictures of the chessboard in MATLAB and comparing the calculated matrices
	\item plausibility check for the translation vector and the rotation matrix between the cameras
\end{itemize}
Besides calibrating the cameras relative to each other, the main (left) camera must also be calibrated to the TCP. This is called the hand-eye calibration and is shown in equation \ref{eq:HE}. Although there are more complex methods to generate this transformation matrix, it can also be measured from the real tool. The figure \ref{fig:hand_eye_coord} shows the rotation between the TCP coordinate system and the camera coordinate system.
\begin{equation}
\label{eq:HE}
HE = \begin{bmatrix} 0.0 & 1.0 & 0.0 & -28.5 \\ -1.0 & 0.0 & 0.0 & 45.0 \\ 0.0 & 0.0 & 1.0 & 28.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \\ \end{bmatrix}
\end{equation}
\begin{figure}[h]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\includegraphics[width=0.4\textwidth]{img/png/hand_eye_coord.png}
	}
	\unterschrift{hand-eye coordinate system transformation}{}{}
	\label{fig:hand_eye_coord}
\end{figure}
\begin{figure}[h]
	\centering
	\noindent\adjustbox{max width=\linewidth}{
	\includegraphics[width=0.6\textwidth]{img/pdf/camera_calibration_app_flowchart.pdf}
	}
	\unterschrift{camera calibration app flowchart}{}{}
	\label{fig:cam_cal_app_flowchart}
\end{figure}
\section{Laser Point Detection}
The second part is about detecting the laser point in an image. To make the task simpler, the laser point may only be projected onto a white flip chart. \\
Two methods for detecting the laser point are tested.
\subsection{Method 1: HSV Threshold}
As lasers are monochromatic, one could think about looking for a certain colour. The idea is to define a threshold in HSV colour space and only look at the pixels that are within this band. By applying a dilation function to the binary image, the weakly connected pixels are morphed together and form blobs. Then the function \textit{findContours} is deployed to detect the connected areas of white pixels. Finally, the first (biggest) area contour is taken and its centre of mass is calculated. This is the 2D point in one image. \\
This method did not work very well. Some problems have been identified:
\begin{itemize}
	\item Taking the biggest area is not very sensible. More constraints could be helpful, such as a size threshold, convexity or the previous point position.
	\item As the automatic mode of the camera is not deactivated, the colour of the laser point might change with different lighting scenarios.
	\item The definition of the colour threshold is not trivial: either too many or too few pixels are taken. Especially with colourful surroundings, this trade-off is hard to control.
\end{itemize}
\subsection{Method 2: Template Matching}
The second method uses template matching to find the green laser point. The process outline is as follows: a template slides over the image and at every pixel the two are compared. The comparison yields a scalar value; these values are then stacked together to create something like a `heat map'. The larger the scalar is at an index, the better the correspondence between the image and the template. \\
OpenCV defines many different methods for comparing two images. The normalized cross-correlation shown in equation \ref{eq:template_matching} was found to work best for this sort of object detection.
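A minimal sketch of this template search with OpenCV might look as follows (the file names and the explicit choice of \lstinline|TM_CCORR_NORMED| as the comparison method are assumptions made for illustration):
\begin{lstlisting}[language=Python]
import cv2

# grayscale camera frame and laser point template (paths are placeholders)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("green_laser_template.png", cv2.IMREAD_GRAYSCALE)

# slide the template over the frame and build the correlation 'heat map'
heat_map = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)

# the location of the maximum is the best match for the laser point
_, max_val, _, max_loc = cv2.minMaxLoc(heat_map)
h, w = template.shape
center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
\end{lstlisting}
The resulting \lstinline|center| is the 2D laser point in one image; combining the detections from both cameras then yields the 3D point.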
\\ The templates are created before every run. In order to make that process as user-friendly as possible, a GUI-based configuration app is created. The templates can be seen in table \ref{tab:laser_template_comp}. \\
The main characteristics of this method are:
\begin{itemize}
\item This method works much better with different lighting scenarios than the HSV threshold.
\item Motion blurring results in an unstable detection of the laser point. Thus, fast movements with the robot and by hand should be avoided.
\item Whenever there is no green laser point in the image, the red laser is detected as green, since this is now the best match.
\item This method only works with the background (e.g.\ the white flip chart board) specified in the template.
\end{itemize}
\begin{equation}
R(x,y) = \frac{\sum_{x',y'} (T'(x',y') \cdot I'(x + x', y + y'))}{\sqrt{\sum_{x',y'} T'(x',y')^{2} \cdot \sum_{x',y'} I'(x + x', y + y')^{2}}}
\label{eq:template_matching}
\end{equation}
\begin{table}[h]
\centering
\noindent\adjustbox{max width=\linewidth}{
\begin{tabular}{c c}
\includegraphics[width=0.295\textwidth]{img/png/green_laser_template.png} &
\includegraphics[width=0.3\textwidth]{img/png/red_laser_template.png}
\end{tabular}
}
\unterschrift{laser point templates as created by the configuration app}{}{}
\label{tab:laser_template_comp}
\end{table}
\chapter{Main App}
This chapter describes the algorithmic steps taken in the main application, as seen in the flowchart in figure \ref{fig:main_app_flowchart}. The action \textit{transform 3D point to robot base} itself consists of multiple steps:
\begin{itemize}
\item Multiply the 3D point with the hand-eye calibration matrix to transform the point into the TCP frame, as seen in equation \ref{eq:HE}.
\item Transform the point to the robot base coordinate system by multiplying with the backward kinematics in equation \ref{eq:transformationmatrix}.
\end{itemize}
\begin{figure}[h]
\centering
\noindent\adjustbox{max width=\linewidth}{
\includegraphics[width=0.5\textwidth]{img/pdf/main_app_flowchart.pdf}
}
\unterschrift{main app flowchart}{}{}
\label{fig:main_app_flowchart}
\end{figure}
\chapter{Conclusion}
We find that the template matching method is better suited to detecting small objects in a constantly changing image than searching the image for a specific colour. The resting distance between the green and the red laser point, as seen in figure \ref{fig:resting_offset}, results from the offset of the laser pointer to the robot tool centre. That can easily be adjusted with a new bracket. To avoid any systematic errors, the laser pointer has to be mounted collinear with the z-axis of the robot tool. Another current problem is the high energy consumption of the laser pointer. Within half an hour, the intensity of the laser pointer dropped significantly. In the next step, a more appropriate power supply should be installed. An additional option could be to treat $\theta_1$ as a variable instead of a constant and solve a more advanced system of equations.
\begin{figure}[H]
\centering
\noindent\adjustbox{max width=\linewidth}{
\includegraphics[width=0.5\textwidth]{img/foto/resting_ofset.png}
}
\unterschrift{resting offset between laser points}{}{}
\label{fig:resting_offset}
\end{figure}
\end{document}
%----------------------------------------------------------------------------
\chapter{Model-based testing}
\label{cha:modelbasedtesting}
%----------------------------------------------------------------------------
The idea of model-based testing originates from the 1970s, and it now has an extensive literature, terminology and a commonly accepted taxonomy \cite{taxonomy}. MBT can be defined as a software testing technique where the software's behaviour is verified against a previously constructed model of its intended behaviour. This chapter introduces the concept of this variant of software testing through a concrete process (Figure~\ref{fig:mbtprocess}).
\begin{figure}[htp]
\centering
\includegraphics[scale=0.55]{figures/mbt_process.png}
\caption{Model-based testing process}
\label{fig:mbtprocess}
\end{figure}
\begin{description}
\item[1. Modelling] From informal requirements or previously defined specifications a model can be built. The model is an abstract representation of the \textit{system under test (SUT)}. It uses encapsulation for information reduction, because it should be simpler than the original system in order to be easier to modify and maintain \cite{mbttestcasegeneration}. During model-based software development the model can be used for many other tasks too, as it also serves for analysing, synthesising and documenting the SUT.
\item[2. Test planning] \textit{Test selection criteria} decide how the test cases are chosen, that is, which point of view is important for the testing. Later these selected criteria control the whole test generation process. The criteria are transformed into \textit{test case specifications}, which are their formalised versions. These two steps are often treated separately, but they form a cohesive step of test planning, thus they will be discussed together in this thesis.
\item[3. Test generation] After creating the model and the test case specifications, a set of \textit{test cases} is generated automatically from the model with regard to all the specifications. One of the biggest challenges is to create the test cases. A simple test case consists of a pair of input parameters and expected outputs. A finite set of test cases forms a \textit{test suite}. The difficulty comes from the need to satisfy the test case specifications and to create a minimised set of test cases.
\item[4. Test execution] A successfully generated test suite can be executed on the SUT. For the execution a \textit{test script} can be used, which executes the test cases. The generated test cases are strongly linked to the abstract test model, therefore an \textit{adapter} component is needed, which is often part of the test script. The adapter adapts the test inputs to the SUT. For example, if the input of a method is an XML document containing an integer value, the adapter has to transform the test case's test inputs to XML. The test script usually contains a \textit{test oracle} that checks the difference between the test output and the expected output.
\end{description}
Utting, Pretschner and Legeard investigated the currently available MBT solutions and defined a taxonomy (see Figure~\ref{fig:mbttaxonomy}), which concentrates on three major properties of model-based testing. The three dimensions of their taxonomy are the model specification, test generation and test execution, which will be followed and expanded in the presentation of each stage of the testing process.
\section{Modelling}
\label{sec:modelling}
The first step of the model-based testing process is to create a suitable model, from which a test suite can be generated. Model specification has three dimensions across the different MBT approaches.
\begin{description}
\item[Model scope] The scope of the modelling is a binary decision. The model either specifies \textit{just the test input} or \textit{the input-output pairs} for the SUT. Usually the first case is less useful, because the test script cannot check the SUT's output, which makes it difficult to create an oracle.
\item[Model characteristics] The SUT determines the main characteristics of the model. They depend on the SUT's timing properties (\textit{timed} / \textit{untimed}), determinism (\textit{deterministic} / \textit{non-deterministic}) and dynamics (\textit{discrete} / \textit{continuous} / \textit{hybrid}).
\item[Model paradigm] The third dimension is the paradigm that is used to describe the model. \textit{State-based notation} means that the model is defined by a set of variables representing the internal state of the system, together with operations that modify those variables. Usually these operations are given by preconditions and postconditions. With \textit{transition-based notation} the model focuses on the transitions between the states of the system. Finite state machines are examples of this paradigm. \textit{History-based notations} model the allowable traces of the system's behaviour over time. With \textit{functional notation} a collection of mathematical functions models the system. \textit{Operational notations} describe the model as a set of executable processes running in parallel. Petri nets are a good example of this notation. \textit{Stochastic notations} describe the model as a probabilistic model, so they are more suitable for modelling the environment than the SUT itself. Markov chains are an example of this type of model paradigm. The last paradigm is the \textit{data-flow notation}, where the main concept is the concentration on the data rather than the control flow. An example is the widely used Matlab Simulink model.
\end{description}
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5]{figures/mbt_taxonomy.png}
\caption{Model-based testing taxonomy \cite{taxonomy}}
\label{fig:mbttaxonomy}
\end{figure}
As we saw in the taxonomy, all the identified model paradigms used in model-based testing belong to some kind of behaviour modelling notation. This is not a surprise, because a data or functional model cannot be utilised as effectively for software testing. Each model paradigm concentrates on a different aspect of the behaviour. There is a plethora of technologies for modelling behaviour, and among the most frequently used are the extended finite state machine (EFSM) and all of its variations. These variations mostly use transition-based notation, but they can combine it with other modelling paradigms as well. The second most popular modelling language according to Shafique and Labiche \cite{toolsreview} is the UML state machine language, which is an enhanced version of EFSMs. Other modelling languages are used in the field of MBT too, but mostly these tools are made for specific purposes. As EFSMs, or at least their variations, serve as the basic modelling notation for most of the available model-based testing tools, we have to investigate them properly. The basic parts of the UML state machine language will be described here as well.
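Before the formal definitions, the following small Python fragment gives a concrete picture of a transition-based model: the transition table plays the role of the relation $T$ introduced below, and the states, inputs and outputs are invented purely for illustration.
\begin{verbatim}
# A tiny transition-based model. The dictionary T maps an allowed
# (state, input) pair to a (next state, output) pair.
T = {
    ("idle",    "coin"):   ("ready",   "accepted"),
    ("ready",   "button"): ("serving", "coffee"),
    ("serving", "done"):   ("idle",    "ok"),
}

def step(state, symbol):
    if (state, symbol) not in T:     # input symbol not allowed in this state
        raise ValueError("input %r not allowed in state %r" % (symbol, state))
    return T[(state, symbol)]

state = "idle"
for symbol in ["coin", "button", "done"]:
    state, output = step(state, symbol)
    print(symbol, "->", state, output)
\end{verbatim}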
\subsection{Extended finite state machines}
\label{sub:efsm}
A \textit{finite state machine} is a 6-tuple $\langle S, I, A, R, \Delta, T\rangle$, where
\begin{align*}
& S: \text{finite set of states},\\
& I \subset S: \text{set of initial states},\\
& A: \text{finite alphabet of input symbols},\\
& R: \text{set of possible outputs},\\
& \Delta \subset S \times A: \text{set of allowed state-input pairs},\\
& T: \text{the transition function}\ f: \Delta \rightarrow S \times R
\end{align*}
The semantics of this model is the following. When the state machine receives an input $a \in A$ in state $s \in S$, assuming $(s,a) \in \Delta$ and $T(s, a) = (s', r)$, the system moves to the new state $s' \in S$ and outputs $r \in R$. A pair $(s'', a') \notin \Delta$ is interpreted as an input symbol that is not allowed in that state. An \textit{extended finite state machine} differs from a simple finite state machine in how the states are defined. The states of an extended state machine have the form $S = D_0 \times D_1 \times \dots \times D_n$, where $D_0$ is the set of control states and $D_i$ ($i = 1, \dots, n$) is the domain of the state variable $x_i$ assigned to each state.
% subsection efsm (end)
\subsection{UML state machines}
\label{sub:umlstatemachine}
UML state machines or UML state charts are improved versions of the mathematical concept of finite state machines, expressed with the OMG's Unified Modeling Language \cite{omguml}. The original FSM notations suffer greatly from the state and transition explosion problem, because the complexity of these models tends to grow faster than that of the modelled system. UML state machines solve this problem by extracting the common parts of the system and sharing the common behaviour across the states. The idea behind the notation is that an entity or each of its sub-entities is always in exactly one of the possible states and there are well-defined conditional transitions between these states. There are two kinds of state machines: they can either define the behaviour of model elements or describe protocol usage.
\begin{figure}[htp]
\centering
\includegraphics[scale=0.6]{figures/statemachine_metamodel}
\caption{Metamodel of UML state machine \cite{omguml}}
\label{fig:statemachine_metamodel}
\end{figure}
UML state machines are similar to FSMs, but they also have differences. For example, UML state charts introduce new features over traditional finite state machines such as hierarchically nested regions, orthogonal regions, entry/exit actions, internal transitions and transition execution sequences. The main concepts of this notation are discussed separately.
\begin{description}
\item[States] are the phases of the system's history. For example, if the history can be separated into two phases, then there are two states.
\item[Extended states] represent the complete condition of the system. Usually this is implemented with states that are extended with system variables.
\item[Transitions] happen when the system switches from one state to another.
\item[Actions] are executed when an event is dispatched and the system responds by performing them.
\item[Events] can be anything that affects the system and causes a state change.
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5]{figures/mbt_smexample}
\caption{Example UML state machine}
\label{fig:mbt_smexample}
\end{figure}
\item[Guards] are Boolean expressions described with extended state variables and event parameters. They can affect the system's behaviour by enabling or disabling transitions.
\item[Hierarchically nested regions] mean that if a system is in a substate then it is also, at the same time, in all of the substate's superstates.
\item[Orthogonal regions] are regions which are either in an `OR' or an `AND' relation to each other.
\item[Entry/exit actions] are actions which are dispatched upon entering or exiting a state.
\item[Internal transitions] do not cause a state change; only some internal actions are executed and the current state stays the same.
\item[Transition execution sequence] describes the sequence of actions to execute upon event dispatching. First the guard of the transition is evaluated. Then the exit actions of the source state configuration are executed, followed by the actions associated with the transition. Finally the entry actions of the target state configuration are executed.
\end{description}
% subsection umlstatemachine (end)
% section modelling (end)
\section{Test planning}
\label{sec:testplanning}
Planning tests involves two steps in the model-based test generation process. First, the test selection criteria are chosen, which are then formalised into a test case specification. Test selection criteria control the test case generation. The MBT taxonomy includes the following identified criteria. \textit{Structural model coverage criteria} aim to cover a part of the model, for example the nodes and arcs of a transition-based model. The nodes of such a model represent the states of the system and the arcs represent the transitions respectively. The basic idea of \textit{data coverage criteria} is to split the data space into equivalence classes and choose values from them. \textit{Requirements-based coverage criteria} are linked to the informal requirements of the SUT and apply the coverage to the requirements. \textit{Ad-hoc test case specifications} guide the generation directly through explicitly written test case specifications. \textit{Random and stochastic criteria} are more useful for modelling the environment and are applicable with a stochastic model. \textit{Fault-based criteria} can be very efficient, because they concentrate on finding errors in the SUT.
The main goal of the test selection criteria is to guide the automatic test selection during test case generation. A good criterion fulfils the previously defined testing policy and testing strategy that were specified for the system \cite{istqb}. Testing policies give rules for testing, while strategies are high-level guidelines. The major tasks of test planning consist of
\begin{itemize}
\item determining the scope of the testing and identifying its objectives
\item determining the test approach (techniques and coverage)
\item implementing the testing policy and the strategy
\item determining the required resources
\item scheduling the testing process
\item determining exit criteria such as coverage criteria
\end{itemize}
The required output of the test selection criteria formalisation is the test case specification. This specification has to be fully formalised, so that a test generator is capable of generating test cases based on this formalisation and the software model.
% section testplanning (end)
\section{Test generation}
\label{sec:testgeneration}
One of the most important things that define the test case generation is the chosen technology, because it has a strong impact on the effectiveness of software testing \cite{testcasegen} \cite{mbttestcasegeneration}. That is why this topic is under active research and has resulted in different approaches.
The model-based testing taxonomy identifies the following popular test generation methods. The easiest one to implement is \textit{random generation}; more difficult are the \textit{search-based algorithms}, where graph algorithms and other search algorithms are used to perform a walk on the model. \textit{Model checking} can also be used for test case generation, where the model checker searches for a counterexample, which becomes a test case. \textit{Symbolic execution} means analysing the software to determine what inputs cause each part of a program to execute. This method is guided by the test case specification to reach a specific goal. \textit{Deductive theorem proving} is similar to model checking, but the model checker is replaced with a theorem prover. \textit{Constraint solving} is useful for selecting data values from complex data domains. We can see that there are a lot of possibilities to choose from when generating test cases for a given SUT. These methods all have advantages and disadvantages and we need to investigate them thoroughly to choose a suitable one for our needs.
\subsection{Adaptive random testing (ART)}
\label{sub:randomtesting}
Random testing is based on the idea that the inputs have to be spread across the domain of the input parameters in order to find failure-causing inputs. There are five methods in the field of ART:
\begin{itemize}
\item From a randomly generated input set, the next candidate is chosen by a selected criterion.
\item The next input parameter is chosen by exclusion: the randomly generated input parameter has to be outside of previously executed regions (exclusion regions).
\item Another approach uses the information about already executed input parameters to divide the input domain into partitions. The next input parameter is then chosen from a new partition.
\item The next input parameter can be chosen by dynamically adjusted test profiles.
\item Distribution metrics can also help to find the next input parameter to achieve dispersion over the input domain.
\end{itemize}
% subsection randomtesting (end)
\subsection{Search based software testing (SBST)}
\label{sub:searchbasedtestgen}
In the last few decades there has been extensive research on using graph theory in model-based testing. These techniques belong to the search-based test generation algorithms. One of the most used algorithms is based on the \textit{Chinese Postman Problem} \cite{graphtheorymbt}: given an undirected graph in which it is impossible to cross each edge exactly once during a graph walk (in other words, the graph does not have an Eulerian tour), what is the minimal amount of re-crossing we need to create a walk that uses each edge? The solution is to duplicate the edges along the shortest paths between the vertices having odd degree. This process is called ``Eulerising'' the graph.
The \textit{New York Street Sweeper Problem} is a variant of the previous graph theory problem. It applies to directed graphs. Arcs need to be duplicated so that each node has out-degree minus in-degree equal to zero. In model-based testing one can use this idea by creating a transition-based model, which can be represented as a graph. The vertices are the states of the SUT and the edges are the callable methods. A generated Eulerian tour gives full transition-based structural model coverage. The previous algorithms give full transition-based coverage, but not pair-wise coverage. The following algorithm, based on \textit{de Bruijn sequences}, creates every combination of the methods.
First create the dual graph of the original graph, then eulerise the dual graph (by duplicating arcs to balance the node polarities). Finally create an Eulerian tour, noting the names of the nodes that are passed.
Dill, Ho, Horowitz and Yang worked on the \textit{limited sub-tour problem}, where the test case sequences cannot be longer than a specified upper limit. There is no optimal solution for that problem, but there are some heuristics. For example, if the upper limit is reached, the current sub-tour has to end and a new sub-tour has to start from that node. Other approaches use a fitness function to find input parameters that maximise the achievement of test goals while minimising testing costs.
% subsection searchbasedtestgen (end)
\subsection{Traditional MBT techniques}
\label{sub:modelchecking}
These test generation technologies include three similar solutions developed especially for model-based testing purposes.
\begin{description}
\item[Model checking] is a traditional MBT test case generation technique, where a model checker is used to generate test cases. The inputs of the model checker are the model of the SUT and the formalised versions of the test criteria to check. While proving whether the test criteria are valid in the model, witness traces and counterexamples are generated. A witness trace is a path consisting of states where the criterion is satisfied, while a counterexample represents a path where the criterion is violated. The resulting paths can be used as a set of test cases. There are two main approaches in this topic, which are influenced by the chosen modelling notation (Section~\ref{sec:modelling}):
\begin{itemize}
\item \textbf{Finite state machine approaches} The model is formalised with a Mealy machine, where inputs and outputs are paired on each transition. Test case generation is driven by some test selection criteria.
\item \textbf{Labelled transition system approaches} This is a common formalism for describing the operational semantics of process algebras. There are two common techniques for generating test cases (input/output conformance and interface automata), which describe the conformance of the SUT. These techniques do not define test selection strategies; they have to be combined with coverage criteria as seen with FSMs.
\end{itemize}
\item[Theorem proving] is traditionally used to validate logical formulas. However, model-based testing can also benefit from the power of this method. Axiomatic foundations of MBT are based on some form of logic calculus. The model of the SUT is specified with logical expressions that are partitioned into equivalence classes. Each resulting class defines a specific feature of the SUT, therefore it represents a particular test case. One possible partitioning transforms the logic formula into disjunctive normal form (DNF) and solves it with a higher-order logic theorem prover. Another way is to transform the problem into solving finite state machines.
\item[Constraint solving] uses a solver to generate test cases by satisfying given constraints over a set of variables. With this method the input model of the software and the test criteria are specified using constraints. The created constraints can be solved in several ways, for example with Boolean solvers (e.g.\ SAT solvers) or with numerical analysis (e.g.\ Gaussian elimination).
\end{description}
% subsection modelchecking (end)
\subsection{Symbolic execution}
\label{ssub:symbolicexecution}
Symbolic execution is a program analysis technique that analyses a program's code to automatically generate test cases from it. It belongs to white-box testing, because the inner structure of the SUT is known during the test. Symbolic execution uses symbolic values, instead of concrete values, as program inputs. During the symbolic execution the state of the program is represented by the \textit{symbolic values} of the program variables at that point, a \textit{path constraint} built from the symbolic values and a \textit{program counter}. The path constraint is a Boolean formula that has to be satisfied to reach that point on the path. At each branch point the path constraint is updated with the constraints on the inputs. If the path constraint becomes unsatisfiable, the path cannot be continued. If the path constraint stays satisfiable, then any solution of the Boolean formula can be an input for the corresponding test case. There are numerous tools which prove the usefulness of this technique, but there are three main problems that limit the effectiveness of this method on real-world programs.
\begin{itemize}
\item \textbf{Path explosion} Most real-world programs have a huge number of computational paths. The execution of each path can mean an unacceptable overhead. Solutions for this problem can be using specifications of the parts that affect the symbolic execution, or avoiding branches which are irrelevant to the test data criteria.
\item \textbf{Path divergence} Programs are usually implemented in a mixture of different programming languages. The symbolic execution of such a complex infrastructure is almost impossible. The unavailability of these paths leads to path divergence, and some paths may not be found during the symbolic execution. A possible solution is to replace these paths with a model during the test generation.
\item \textbf{Complex constraints} Solving Boolean formulas involves using constraint solvers during the symbolic execution. There are some formulas which cannot be solved with the tools available today. These formulas can be simplified by replacing solvable subformulas with concrete values.
\end{itemize}
% subsection symbolicexecution (end)
\subsection{Combinatorial testing}
\label{sub:combinatorialtesting}
In combinatorial testing, samples of input parameters have to be chosen that cover a prescribed subset of combinations of the elements to be tested. Samples usually consist of all $t$-way combinations of the possible input parameters. This method is called \textit{combinatorial interaction testing} (CIT). The inputs can be described with a covering array:
\begin{displaymath}
CA=\langle N, t, k, v\rangle
\end{displaymath}
where $N$ represents the sample size, $t$ is called the strength, $k$ is the number of factors and $v$ is the number of possible symbols. So $CA$ is an $N \times k$ array on $v$ symbols such that every $N \times t$ sub-array contains all $t$-tuples from the $v$ symbols at least once. Finding an appropriate covering array is possible using heuristics. Combinatorial testing can be used if the domains of the input parameters are known.
% subsection combinatorialtesting (end)
% section testgeneration (end)
\section{Test execution}
\label{sec:testexecution}
Test execution includes several steps, because the abstraction level of the generated test cases differs from that of the SUT. Therefore the previously mentioned adapter component is needed, which bridges between the two.
The concrete execution is done by a component named the test script, which includes a test oracle that determines whether the tests ran successfully or not. The tasks of the execution are the following:
\begin{itemize}
\item Execute the complete test suite or individual test cases with test scripts.
\item Log the outcome of the execution and report the identities and versions of the SUT and the testing tools.
\item Compare the results with the expectations using oracles.
\item Report the differences between the actual and the expected results.
\item Repeat the execution with the same configuration to prove the correctness of a previously failed test case. When we just re-execute a test case, that is called \textit{confirmation testing}, but we also have to check that a fix does not introduce new defects (\textit{regression testing}).
\end{itemize}
The tests can run either \textit{online} or \textit{offline} on the SUT. During an online test, the test generator can respond to the SUT's actual output, for example with a different test case sequence. With offline test generation, test cases are generated strictly before the execution. The testing can be started by an automatic execution or manually, triggered directly by the user.
% section testexecution (end)
% chapter modelbasedtesting (end)
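To make the roles of the test script, the adapter and the oracle more concrete, the fragment below sketches an offline execution of a tiny test suite in Python. The SUT stub, the XML wrapping and the test case format are assumptions made only for this illustration; they do not correspond to any particular tool discussed in this chapter.
\begin{verbatim}
# Offline execution: each abstract test case is a pair of
# (abstract input, expected output). The adapter turns the abstract
# input into the concrete SUT format (here a trivial XML request),
# and the oracle compares the actual output with the expected one.
import xml.etree.ElementTree as ET

test_suite = [({"value": 2}, 4), ({"value": 3}, 9)]

def adapter(abstract_input):
    root = ET.Element("request")
    ET.SubElement(root, "value").text = str(abstract_input["value"])
    return ET.tostring(root)

def sut(xml_request):                      # stand-in for the real SUT
    value = int(ET.fromstring(xml_request).findtext("value"))
    return value * value

def oracle(actual, expected):
    return actual == expected

for i, (abstract_input, expected) in enumerate(test_suite):
    actual = sut(adapter(abstract_input))
    verdict = "pass" if oracle(actual, expected) else "fail"
    print("test case %d: %s (actual=%s, expected=%s)"
          % (i, verdict, actual, expected))
\end{verbatim}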
\documentclass[letterpaper]{twentysecondcv} % a4paper for A4
\cvname{Sarath S Pillai} % Your name
\cvjobtitle{Backend Engineer, \\ Aspiring Architect} % Job title/career
\cvlinkedin{https://www.linkedin.com/in/sarathsp}
\cvnumberphone{+919742033616} % Phone number
\cvsite{https://me.sarathsp.com/} % Personal website
\cvmail{[email protected]} % Email address
%----------------------------------------------------------------------------------------
\begin{document}
\makeprofile % Print the sidebar
%----------------------------------------------------------------------------------------
% EXPERIENCE
%----------------------------------------------------------------------------------------
\section{Experience}
\begin{twenty} % Environment for a list with descriptions
\twentyitem
{May 2018 -- \\Present}
{SDE 3}
{\href{http://exotel.com/}{Exotel Techcom Private Limited}}
{Tech Lead}
{
{\begin{itemize}
\item Responsible for the design, development and maintenance of all the REST services.
\item Researched and compiled coding standards and code-review checklists for the company.
\end{itemize}}
}
\twentyitem
{Sept 2016 - \\ May 2018}
{SDE 2}
{\href{http://exotel.com/}{Exotel Techcom Private Limited}}
{Enterprise Technology}
{
{\begin{itemize}
\item Developed and supported services for enterprise customers.
\item Developed a framework and tool to create REST services in Go from given service contracts.
\item Developed multiple internal libraries, including a \textbf{mysql} Go library built on \textbf{database/sql}.
\end{itemize}}
}
\twentyitem
{May 2014 - \\ Sept 2016}
{SDE}
{\href{http://exotel.com/}{Exotel Techcom Private Limited}}
{Full-Stack Engineer}
{
\begin{itemize}
\item Worked on the core product, adding features, debugging and fixing bugs.
\item Worked on redesigning the core system for scale, in favor of distributed, highly available systems.
\item Developed Python/Go libraries for \href{https://github.com/sarathsp06/exotel-py}{\textbf{Exotel}}.
\item Contributed to open-source libraries including \href{https://github.com/aerospike/aerospike-client-go}{\textbf{aerospike-client-go}}, \href{https://github.com/messagebird/sachet}{\textbf{sachet}}, etc.
\end{itemize}
}
\twentyitem
{Sept 2013-\\ May 2014}
{Assistant System Engineer}
{\href{https://www.tcs.com/}{TCS}}
{}
{
\begin{itemize}
\item Worked on business rules extraction from a legacy application.
\item Wrote REX scripts to automate parsing and extracting rules from COBOL files, reducing the expected project duration by 70\%.
\end{itemize}
}
\end{twenty}
%----------------------------------------------------------------------------------------
% Opensource Contributions
%----------------------------------------------------------------------------------------
\section{Opensource}
\begin{twenty}
\twentyitem
{Projects}{}{}{}
{
{\begin{itemize}
\item \href{https://github.com/sarathsp06/sublime-php-selective-format}{\textbf{sublime-php-selective-format:}} Sublime Text 3 plugin to format PHP selectively
\item \href{https://github.com/sarathsp06/py2factor}{\textbf{py2factor:}} Python two-factor authentication application for Linux using \href{https://tools.ietf.org/html/rfc6238}{\textbf{\textit{Time-based One-Time Password}}}s (experimental)
\item \href{https://github.com/sarathsp06/exotel-py}{\textbf{exotel-py:}} Python library for Exotel APIs
\item \href{https://github.com/sarathsp06/gologger}{\textbf{gologger:}} Logger package for Golang with echo middleware support
\end{itemize}
}
}
\twentyitem
{Code Contributions}{}{}{}
{
{\begin{itemize}
\item
\href{https://github.com/messagebird/sachet}{\textbf{messagebird/sachet:}} SMS alerts for Prometheus' Alertmanager
\item \href{https://github.com/aerospike/aerospike-client-go}{\textbf{aerospike-client-go:}} Aerospike Go client
\end{itemize}
}
}
\end{twenty}
%----------------------------------------------------------------------------------------
% Publications
%----------------------------------------------------------------------------------------
\section{Publications}
\begin{twenty}
\twentyitem
{May 2015}
{}
{\href{http://www.ijert.org/view-pdf/13070/image-assisted-data-security-using-key-encrypted-file}{Image Assisted Data Security Using Key Encrypted File}}
{}
{
{In this paper a new technique for image-based data hiding with a binary file is proposed. The concept of steganography is used for text encryption.}
}
\end{twenty}
%----------------------------------------------------------------------------------------
% EDUCATION
% \section{Education}
% \begin{twenty} % Environment for a list with descriptions
% \twentyitem
% {2009 - 2013}
% {B.Tech., Computer Science}
% {\href{http://tkmce.ac.in/}{TKMCE}}
% {Kollam, Kerala , India}
% {First Class with Distinction, CGPA: 8.14/10}
% \twentyitem
% {2007 - 2009}
% {12th}
% {\href{http://www.nvshq.org/}{JNV}}
% {Chennithala, Kerala, India}
% {First Class with Distinction, 92\%}
% \twentyitem
% {2007}
% {10th}
% {\href{http://www.nvshq.org/}{JNV}}
% {Chennithala, Kerala, India}
% {First Class with Distinction, 91.4\%}
% \end{twenty}
\end{document}
\section{Edge homomorphisms, transgression}
Recall the Serre spectral sequence for a fibration $F\to E\to B$ has $E^2$-page given by
$$ E^2_{s,t} = H_s(B;H_t(F)) \Rightarrow H_{s+t}(E). $$
If $B$ is path-connected, $\widetilde{H}_t(F) = 0$ for $t<q$, $\widetilde{H}_s(B) = 0$ for $s<p$, and $\pi_1(B)$ acts trivially on $H_\ast(F)$, we showed that there is a long exact sequence (the Serre exact sequence)
\begin{equation}\label{serre-exact} H_{p+q-1}(F)\xrightarrow{\bullet} H_{p+q-1}(E)\to H_{p+q-1}(B)\to H_{p+q-2}(F)\to\cdots \end{equation}
Let us attempt to describe the arrow marked by $\bullet$.
%If $t>q$, we know
Let $(E^r_{p,q},d^r)$ be any spectral sequence such that $E^r_{p,q} = 0$ if $p<0$ or $q<0$; such a spectral sequence is called a \emph{first quadrant} spectral sequence. The Serre spectral sequence is a first quadrant spectral sequence. In a first quadrant spectral sequence, the $d^2$-differential $d^2:E^2_{0,t}\to E^2_{-2,t+1}$ is zero, since $E^2_{s,t}$ vanishes for $s<0$. This means that $H_t(F) = H_0(B;H_t(F)) = E^2_{0,t}$ surjects onto $E^3_{0,t}$. Arguing similarly, this surjects onto $E^4_{0,t}$. Eventually, we find that $E^{r}_{0,t} \simeq E^{t+2}_{0,t}$ for $r\geq t+2$. In particular, $$E^{t+2}_{0,t} \simeq E^\infty_{0,t} \simeq \gr_0 H_t(E) \simeq F_0 H_t(E),$$ which sits inside $H_t(E)$. The composite $$E^2_{0,t} = H_t(F) \to E^3_{0,t}\to \cdots\to E^{t+2}_{0,t} \subseteq F_0 H_t(E)\to H_t(E)$$ is precisely the map $\bullet$! Such a map is known as an \emph{edge homomorphism}.
The map $F\to E$ is the inclusion of the fiber; it induces a map $H_t(F)\to H_t(E)$ on homology. We claim that this agrees with $\bullet$.
%We almost saw this in the construction of a sseq for a filtered complex.
Recall that $F_0H_t(E)$ is defined to be $\img(H_t(F_0 E) \to H_t(E))$. In the construction of the Serre spectral sequence, we declared that $F_0 E$ is exactly the preimage of the zero skeleton. Since $B$ is simply connected, we find that $F_0 E$ is exactly the fiber $F$. To conclude the proof of the claim, consider the following diagram:
\begin{equation*} \xymatrix{ F\ar[r]\ar[d] & F\ar[d]\\ F\ar[r]\ar[d] & E\ar[d]\\ \ast\ar@{^(->}[r] & B } \end{equation*}
The naturality of the Serre spectral sequence implies that there is an induced map of spectral sequences. Tracing through the symbols, we find that this observation proves our claim.
The long exact sequence \eqref{serre-exact} also contains a map $H_s(E)\to H_s(B)$. The group $F_s H_s(E) = H_s(E)$ maps onto $\gr_s H_s(E) \simeq E^\infty_{s,0}$. If $F$ is connected, then $H_s(B) = H_s(B;H_0(F)) = E^2_{s,0}$. Again, the $d^2$-differential $d^2:E^2_{s+2,-1}\to E^2_{s,0}$ is trivial (since the source is zero). Since $E^3 = \ker d^2$, we have an injection $E^3_{s,0} \to E^2_{s,0}$. Repeating the same argument, we get injections $$E^\infty_{s,0} = E^{s+1}_{s,0}\to \cdots\to E^3_{s,0}\to E^2_{s,0} = H_s(B).$$ Composing with the map $H_s(E)\to E^\infty_{s,0}$ gives the desired map $H_s(E) \to H_s(B)$ in the Serre exact sequence. This composite is also known as an edge homomorphism. As above, this edge homomorphism is the map induced by $E\to B$. This can be proved by looking at the induced map of spectral sequences coming from the following map of fiber sequences:
\begin{equation*} \xymatrix{ F\ar[r]\ar[d] & \ast\ar[d]\\ E\ar[r]\ar[d] & B\ar[d]\\ B\ar[r] & B } \end{equation*}
The topologically mysterious map is the boundary map $\partial:H_{p+q-1}(B)\to H_{p+q-2}(F)$. Such a map is called a \emph{transgression}.
Again, let $(E^r_{s,t},d^r)$ be a first quadrant spectral sequence. In our case, $E^2_{n,0} = H_n(B)$, at least if $F$ is connected. As above, we have injections $$i:E^n_{n,0} \to \cdots\to E^3_{n,0} \to E^2_{n,0} = H_n(B).$$ Similarly, we have surjections $$s:E^2_{0,n-1}\to E^3_{0,n-1}\to \cdots\to E^n_{0,n-1}.$$ There is a differential $d^n:E^n_{n,0}\to E^n_{0,n-1}$. The transgression is defined as the \emph{linear relation} (not a function!) $E^2_{n,0}\to E^2_{0,n-1}$ given by $$x\mapsto s^{-1} d^n i^{-1}(x).$$ However, the reader should check that in our case, the transgression is indeed a well-defined function.
Topologically, what is the origin of the transgression? There is a map $H_n(E,F)\xrightarrow{\pi_\ast} H_n(B,\ast)$, as well as a boundary map $\partial : H_n(E,F) \to H_{n-1}(F)$. We claim that: $$\img \pi_\ast = \img(E^n_{n,0}\to H_n(B) = E^2_{n,0}),\quad \partial\ker\pi_\ast = \ker(H_{n-1}(F) = E^2_{0,n-1} \to E^n_{0,n-1}).$$
\begin{proof}[Proof sketch] Let $x\in H_n(B)$. Represent it by a cycle $c\in Z_n(B)$. Lift it to a chain in the total space $E$. In general, this chain will not be a cycle (consider the Hopf fibration). The differentials record this boundary; let us recall the geometric construction of the differential. Saying that the class $x$ survives to the $E^n$-page is the same as saying that we can find a lift to a chain $\sigma$ in $E$, with $d\sigma\in S_{n-1}(F)$. Then $d^n(x)$ is represented by the class $[d\sigma]\in H_{n-1}(F)$. This is precisely the transgression.
Informally, we lift something from $H_n(B)$ to $S_n(E)$; this is well-defined up to something in $F$. In particular, we get an element in $H_n(E,F)$. We send it, via $\partial$, to an element of $H_{n-1}(F)$ --- and this is precisely the transgression. \end{proof}
\subsection{An example}
We would like to compare the Serre exact sequence \eqref{serre-exact} with the homotopy exact sequence:
$$\ast\to \pi_{p+q-1}(F)\to \pi_{p+q-1}(E)\to \pi_{p+q-1}(B)\xar{\partial} \pi_{p+q-2}(F)\to \cdots$$
There are Hurewicz maps $\pi_{p+q-1}(X)\to H_{p+q-1}(X)$. We claim that there is a map of exact sequences between these two long exact sequences.
\begin{equation*} \xymatrix{ H_{p+q-1}(E) \ar[r]^{\pi_\ast} & H_{p+q-1}(B)\ar[r]_\partial & H_{p+q-2}(F)\ar[r] & \cdots\\ \pi_{p+q-1}(E)\ar[r]_{\pi_\ast}\ar[u]_{h} & \pi_{p+q-1}(B)\ar[u]^h\ar[r] & \pi_{p+q-2}(F)\ar[r]\ar[u]^h & \cdots\\ } \end{equation*}
The leftmost square commutes by naturality of Hurewicz. The commutativity of the rightmost square is not immediately obvious. For this, let us draw in the explicit maps in the above diagram:
\begin{equation*} \xymatrix{ & & H_{p+q-1}(E,F)\ar[dl]\ar[dr] & &\\ H_{p+q-1}(E) \ar[r]^{\pi_\ast} & H_{p+q-1}(B)\ar[rr]_\partial & & H_{p+q-2}(F)\ar[r] & \cdots\\ \pi_{p+q-1}(E)\ar[r]_{\pi_\ast}\ar[dr]\ar[u]_{h} & \pi_{p+q-1}(B)\ar[u]^h\ar[rr] & & \pi_{p+q-2}(F)\ar[r]\ar[u]^h & \cdots\\ & \pi_{p+q-1}(E,F)\ar[uuur]\ar[urr]\ar[u]^\cong_{s} & & } \end{equation*}
The map marked $s$ is an isomorphism (and provides the long arrow in the above diagram, which makes the square commute), since $$ \pi_n(E,F) = \pi_{n-1}(\mathrm{hofib}(F\to E)) = \pi_{n-1}(\Omega B) = \pi_n(B). $$
Let us now specialize to the case of the fibration $$\Omega X\to PX\to X.$$ Assume that $X$ is connected, and $\ast \in X$ is a chosen basepoint. Let $p\geq 2$, and suppose that $\widetilde{H}_s(X) = 0$ for $s<p$.
Arguing as in \S \ref{loops-sn} with the Serre spectral sequence, we learn that the homology of $\Omega X$ begins in dimension $p-1$ since $PX\simeq \ast$; so $q = p-1$. Likewise, if we knew $\widetilde{H}_n(\Omega X) = 0$ for $n<p-1$, then the same argument shows that $\widetilde{H}_n(X) = 0$ for $n<p$.
\subsection*{A surprise guest: the Hurewicz theorem}
The discussion above gives a proof of the Hurewicz theorem; this argument is due to Serre.
\begin{theorem}[Hurewicz, Serre's proof] Let $p\geq 1$. Suppose $X$ is a pointed space with $\pi_i(X) = 0$ for $i<p$. Then $\widetilde{H}_i(X) = 0$ for $i<p$ and $\pi_p(X)^{ab}\to H_p(X)$ is an isomorphism. \end{theorem}
\begin{proof} Let us assume the case $p=1$. This is classical: it is Poincar\'{e}'s theorem. We will only use this result when $X$ is a loop space, in which case the fundamental group is already abelian.
Let us prove this by induction, using the loop space fibration. By assumption, $\pi_i(\Omega X) = 0$ for $i<p-1$. By our inductive hypothesis, $\widetilde{H}_i(\Omega X) = 0$ for $i<p-1$, and $\pi_{p-1}(\Omega X) \xrightarrow{\simeq} H_{p-1}(\Omega X)$. By our discussion above, we learn that $\widetilde{H}_i(X) = 0$ for $i<p$. The Hurewicz map $\pi_p(X)\xrightarrow{h}H_p(X)$ fits into a commutative diagram:
\begin{equation*} \xymatrix{ \pi_{p-1}(\Omega X)\ar[r]^\simeq & H_{p-1}(\Omega X)\\ \pi_p(X)\ar[u]^\simeq \ar[r]_h & H_p(X)\ar[u]^{\simeq}_{\text{transgression}} } \end{equation*}
It follows from the Serre exact sequence that the transgression is an isomorphism. \end{proof}
%This proof has an enormous advantage, since you can make modifications that modify all primes except for a single prime, or get rational information.
%In other words, it's amenable to localizations.
%On Monday we'll talk about Serre classes and get information about homotopy groups way beyond connectivity of the space, if you do it right.
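For example, take $X = S^n$ with $n\geq 2$: cellular approximation gives $\pi_i(S^n) = 0$ for $i<n$, so the theorem tells us that $\widetilde{H}_i(S^n) = 0$ for $i<n$ and that the Hurewicz map $\pi_n(S^n)\to H_n(S^n)$ is an isomorphism; in particular $\pi_n(S^n)$ is infinite cyclic.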
\documentclass[revision-guide.tex]{subfiles}
%% Current Author:
\setcounter{chapter}{4}
\begin{document}
\chapter{Electricity}
\spec{discuss electrical phenomena in terms of electric charge}
\spec{describe electric current as the rate of flow of charge and recall and use $I = \Delta Q / \Delta t$}
Electric current is the flow of charge. The \emph{current} is defined as the rate of flow of charge. Conventional current flows from positive to negative. This current can consist of positive charges flowing from positive to negative or, more usually, negative charges flowing from negative to positive. The total current depends on the charge carrier density, the cross-sectional area, the charge on the carrier and the drift velocity of the carriers.
\spec{understand potential difference in terms of energy transfer and recall and use VQ = W}
When a charge moves through an electric field it gains or loses potential energy. The energy change per unit charge is defined as the potential difference.
\spec{recall and use the fact that resistance is defined by R = V/I and use this to calculate resistance variation for a variety of voltage-current characteristics}
As well as measuring resistance directly, it can be found from a graph of V against I. \emph{Note: the resistance is defined as V/I at all points and is not equal to the gradient of a V/I graph except in the case that V is proportional to I}
\spec{define and use the concepts of emf and internal resistance and distinguish between emf and terminal potential difference}
\spec{derive, recall and use E = I(R + r ) and E = V + Ir}
A real cell or battery can be represented by a cell circuit symbol in series with a resistor. This resistance represents the \emph{internal resistance} of the cell and the fixed, theoretical potential difference across the cell symbol is the emf of the cell (the electromotive force provided). When a voltmeter is connected across the cell the terminal potential difference is measured.
\begin{figure}[h]
\begin{center}
\begin{circuitikz}
\draw (2,0) to[battery,l=$E$,o-] (4,0) to[R=$r$,-o] (6,0);
\end{circuitikz}
\end{center}
\caption{Terminal potential difference}
\end{figure}
In order to relate the terminal potential difference to the internal characteristics of the cell we must subtract the potential difference across the internal resistance from the emf. In symbols this gives: $$ V = E - Ir $$ which can be re-arranged to give the formula above. If our real cell is connected into a circuit with a load resistance $R$, the circuit in figure \ref{loaded-cell} is produced.
\begin{figure}[h]
\begin{center}
\begin{circuitikz}
\draw (1,0) to[short] (2,0) to[battery,l=$E$,o-] (4,0) to[R=$r$,-o] (6,0) to[short] (7,0) to[short] (7,-2) to[R=$R$] (1,-2) to[short] (1,0);
\end{circuitikz}
\end{center}
\caption{A loaded real cell}
\label{loaded-cell}
\end{figure}
Now, the terminal potential difference must be equal to the potential difference across the load resistor, $R$. $$ IR = V = E-Ir $$ which can be re-arranged to give the second equation in the specification.
\spec{derive, recall and use P = VI and W = VIt, and derive and use P = I$^2$R}
Power is defined as the energy transferred per unit of time. In the case of electrical power this is the product of current (charge per unit time) and potential difference (energy per unit charge).
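Combining this with the definition of resistance gives the third form in this specification point: substituting $V = IR$ into $P = VI$ gives
\[ P = VI = (IR)I = I^2R \]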
Given a constant voltage and current, the energy transferred (work done) is given by $W = Pt = IVt$.
\spec{recall and use $R = \rho l/A$}
This formula allows the calculation of the resistance of a regular sample of material. In this formula $\rho$ is the resistivity, $l$ is the length of the sample and $A$ its cross-sectional area. Typical resistivities for conductors are of the order of \SI{e-8}{\ohm\metre} and above \SI{e9}{\ohm\metre} for insulators. Semi-conductors lie between these values.
\spec{recall the formula for the combined resistance of two or more resistors in series and use it to solve problems $R_T = R_1 + R_2 + \ldots$}
\spec{recall the formula for the combined resistance of two or more resistors in parallel and use it to solve problems $\frac{1}{R_T} = \frac{1}{R_1} + \frac{1}{R_2} + \ldots$}
These are fairly simple to derive from Kirchhoff's Laws (see below).
\spec{recall Kirchhoff’s first and second laws and apply them to circuits containing no more than two supply components and no more than two linked loops}
\begin{description}
\item[Kirchhoff's First Law] The current that flows into any junction is equal to the current which flows out.
\item[Kirchhoff's Second Law] The sum of the emfs around any closed loop of a circuit must equal the sum of the potential differences across any components. It is important to note that direction matters and if the loop crosses emfs or components against the flow of current they must be subtracted.
\end{description}
These two laws, and the definition of resistance, are the most useful tools in circuit analysis. The important skill is to work methodically through the circuit applying the laws, rather than attempting to solve the circuit all in one go.
\begin{example}
Calculate the current through the \SI{3}{\volt} cell.
\begin{center}
\begin{circuitikz}
\draw (0,0) to[R=2.0<\ohm>] (0,2) to[battery, l=6.0<\volt>, i=$I_x$] (0,4) to (5,4) (0,0) to (2.5,0) to[R=0.5<\ohm>] (2.5,2) to[battery, l=3.0<\volt>, i=$I_y$] (2.5,4) (2.5,0) to (5,0) to[R=10.0<\ohm>] (5,4) ;\end{circuitikz}
\end{center}
\answer We can use Kirchhoff's Second Law to derive two expressions linking $I_x$ and $I_y$. The first is formed by creating a loop consisting of the two branches containing cells. $$ 6 - 3 = 2I_x - 0.5I_y $$ Note that I am using a clockwise loop, so the two components in the `Y' branch have negative signs. The second expression is now arrived at by using the outermost loop of the circuit (and using Kirchhoff's First Law to get the current through the \SI{10}{\ohm} resistor): $$ 6 = 10(I_x + I_y) + 2I_x$$ These two equations can now be used to solve for $I_y$, giving $$ I_y = \SI{-0.923}{\ampere} $$ Note that the sign of $I_y$ is negative; this means that current is flowing in the opposite direction to the arrow shown. This means that the cell is charging.
\end{example}
\spec{appreciate that Kirchhoff’s first and second laws are a consequence of the conservation of charge and energy, respectively}
The charge flowing into or out of a junction in a given time, $t$, is given by $Q = It$. Given that a junction can neither store nor create charge, Kirchhoff's First Law follows directly. Each charge carrier can only take one loop around the circuit. Once it returns to its original position its energy must be equal to the amount it had when it left. The charge carrier gains energy passing through cells and loses it passing through components.
Since the sum of these energies must be zero and $W=qV$, the sum of emfs must equal the sum of potential differences across components.
\spec{use the idea of the potential divider to calculate potential differences and resistances}
When two resistors are in series with a battery we say that the circuit is a \emph{potential divider.}
\begin{figure}[h]
\begin{center}
\begin{circuitikz}
\draw (0,0) to[battery,l=$V$] (0,5) to (3,5) to[R=$R_1$] (3,2.5) to[R=$R_2$] (3,0) to (0,0);
\end{circuitikz}
\end{center}
\caption{A potential divider}
\end{figure}
Since there are no junctions in the series circuit, we know that the current is the same in all parts of the circuit and that the total resistance is $R_1 + R_2$. The p.d. across resistor 1 is therefore given by:
\[ V_1 = IR_1 = \frac{V}{R_1 + R_2} R_1 = \frac{R_1}{R_1 + R_2} V \]
In other words, the ratio of p.d.s in the circuit is equal to the ratio of the resistances. This can also be extended to the ratios between the components as they share the same current.
\[ I_1 = I_2 \implies \frac{V_1}{R_1} = \frac{V_2}{R_2} \implies \frac{R_1}{R_2} = \frac{V_1}{V_2} \]
\begin{example}
Calculate the resistance of the bulb in the circuit below.
\tikzset{component/.style={draw,thick,circle,fill=white,minimum size =0.75 cm,inner sep=0pt}}
\begin{center}
\begin{circuitikz}
\draw (0,0) to[battery,l=\SI{9}{\volt}] (0,5) to (3,5) to[lamp] (3,2.5) to[R,l=\SI{800}{\ohm}] (3,0) to (0,0);
\draw (3,2.5) to (5,2.5) to (5,1.25) node[component]{3V} to[short] (5,0) to (0,0);
\end{circuitikz}
\end{center}
\answer The potential difference across the bulb must be \SI{6}{\volt} by Kirchhoff's Second Law. Therefore:
\[ \frac{\SI{3}{\volt}}{\SI{6}{\volt}} = \frac{\SI{800}{\ohm}}{R} \]
\[ \implies R = \SI{800}{\ohm}\times \frac{\SI{6}{\volt}}{\SI{3}{\volt}} = \SI{1600}{\ohm}\]
\end{example}
\end{document}
%sagemathcloud={"latex_command":"latexmk -pdf -f -g -bibtex -synctex=1 -interaction=nonstopmode '5-electricity.tex'"}
\documentclass{standalone} \begin{document} \subsection{Markov Random Field} A Markov Random Field (MRF) is not a segmentation method in itself, but a statistical model that is used within segmentation methods to model the spatial interaction between neighbouring pixels. It is often incorporated in clustering algorithms such as K-means as a Bayesian prior probability. This model is appropriate when most pixels belong to the same class as their neighbouring pixels; in this case, any anatomical structure that consists of only one pixel has a very low probability of occurring~\cite{ART:Pham}. A difficulty of this model is that it is very sensitive to the parameters that control the strength of the spatial interactions. Another disadvantage of MRFs is that they are computationally expensive. However, despite these disadvantages, MRFs are widely used to model segmentation classes and intensity inhomogeneities~\cite{ART:Pham}. \end{document}
\documentclass[]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \usepackage{xltxtra,xunicode} \else \usepackage{fontspec} \fi \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \newcommand{\euro}{€} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{longtable,booktabs} \ifxetex \usepackage[setpagesize=false, % page size defined by xetex unicode=false, % unicode breaks when used with xetex xetex]{hyperref} \else \usepackage[unicode=true]{hyperref} \fi \hypersetup{breaklinks=true, bookmarks=true, pdfauthor={R.H; S.Y}, pdftitle={False discovery rate procedures for discrete tests: analysis of the data from the Wellcome Trust Sanger Institute Mouse Genetics Project}, colorlinks=true, citecolor=blue, urlcolor=blue, linkcolor=magenta, pdfborder={0 0 0}} \urlstyle{same} % don't use monospace font for urls \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} \setlength{\emergencystretch}{3em} % prevent overfull lines \setcounter{secnumdepth}{0} %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} \setlength{\droptitle}{-2em} \title{False discovery rate procedures for discrete tests: analysis of the data from the Wellcome Trust Sanger Institute Mouse Genetics Project} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{R.H; S.Y} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{Mars 04, 2015} \begin{document} \maketitle \subsubsection{Notations:}\label{notations} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item Female: n..1 \end{itemize} \begin{longtable}[c]{@{}lccc@{}} \toprule & KO & WT &\tabularnewline \midrule \endhead 1 & n111 & n121 & n1.1\tabularnewline 0 & n211 & n221 & n2.1\tabularnewline & n.11 & n.21 & n..1\tabularnewline \bottomrule \end{longtable} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item Male: n..2 \end{itemize} \begin{longtable}[c]{@{}lccc@{}} \toprule & KO & WT &\tabularnewline \midrule \endhead 1 & n112 & n122 & n1.2\tabularnewline 0 & n212 & n222 & n2.2\tabularnewline & n.12 & n.22 & n..2\tabularnewline \bottomrule \end{longtable} \subsubsection{Results:}\label{results} \begin{longtable}[c]{@{}lrrrr@{}} \toprule Outcome & main\_effect\_size & main\_effect\_rej & interaction\_size & interaction\_rej\tabularnewline \midrule \endhead Number.Of.Thoracic.Vertebrae & 469 & 4 & 1 & 0\tabularnewline Number.Of.Lumbar.Vertebrae & 473 & 8 & 8 & 2\tabularnewline Number.Of.Pelvic.Vertebrae & 473 & 4 & 4 & 0\tabularnewline Number.Of.Caudal.Vertebrae & 473 & 8 & 7 & 0\tabularnewline Transitional.Vertebrae & 473 & 16 & 13 & 0\tabularnewline Shape.Of.Vertebrae & 473 & 2 & 2 & 1\tabularnewline Fusion.Of.Vertebrae & 467 & 24 & 23 & 0\tabularnewline Maxilla & 473 & 4 & 3 & 0\tabularnewline Zygomatic.Bone & 469 & 2 & 0 & 0\tabularnewline Number.Of.Cervical.Vertebrae & 469 & 2 & 0 & 0\tabularnewline 
Skull.Shape & 473 & 13 & 11 & 0\tabularnewline Number.Of.Ribs.Right & 473 & 3 & 3 & 0\tabularnewline Number.Of.Ribs.Left & 473 & 6 & 4 & 0\tabularnewline Shape.Of.Ribcage & 473 & 6 & 6 & 0\tabularnewline Shape.Of.Ribs & 466 & 3 & 0 & 0\tabularnewline Rib.Fusions & 1 & 1 & 0 & 0\tabularnewline Scapula & 2 & 2 & 0 & 0\tabularnewline Humerus & 466 & 2 & 0 & 0\tabularnewline Radius & 473 & 2 & 1 & 0\tabularnewline Ulna & 473 & 2 & 1 & 0\tabularnewline Pelvis & 473 & 7 & 6 & 0\tabularnewline Tibia & 466 & 2 & 0 & 0\tabularnewline Fibula & 3 & 3 & 0 & 0\tabularnewline Joints & 473 & 2 & 1 & 0\tabularnewline Shape.Of.Spine & 473 & 4 & 4 & 0\tabularnewline Teeth & 473 & 6 & 6 & 0\tabularnewline Mandible & 2 & 2 & 0 & 0\tabularnewline Digit.Integrity & 473 & 2 & 2 & 0\tabularnewline Kyphosis & 473 & 10 & 9 & 0\tabularnewline Lordosis & 1 & 1 & 0 & 0\tabularnewline Fusion.Processes & 473 & 27 & 20 & 0\tabularnewline Caudal.Processes & 8 & 8 & 0 & 0\tabularnewline Thoracic.Processes & 467 & 39 & 36 & 0\tabularnewline \bottomrule \end{longtable} \begin{longtable}[c]{@{}llrrrr@{}} \toprule Outcome & Group & p\_main & p\_mid\_main & p\_int & p\_mid\_int\tabularnewline \midrule \endhead Number.Of.Lumbar.Vertebrae & MDHR\_HET & 0.0001444 & 7.35e-05 & 0.0166399 & 0.0083200\tabularnewline Number.Of.Lumbar.Vertebrae & MFPQ\_HET & 0.0001444 & 7.35e-05 & 0.0166399 & 0.0083200\tabularnewline Shape.Of.Vertebrae & MUDF\_HET & 0.0001648 & 8.73e-05 & 0.0287678 & 0.0156598\tabularnewline \bottomrule \end{longtable} \begin{longtable}[c]{@{}lrrrr@{}} \toprule & Female\_KO & Female\_WT & Male\_KO & Male\_WT\tabularnewline \midrule \endhead Y & 3 & 2 & 0 & 10\tabularnewline N & 4 & 927 & 7 & 912\tabularnewline \bottomrule \end{longtable} \begin{longtable}[c]{@{}lrrrr@{}} \toprule & Female\_KO & Female\_WT & Male\_KO & Male\_WT\tabularnewline \midrule \endhead Y & 3 & 2 & 0 & 10\tabularnewline N & 4 & 927 & 7 & 912\tabularnewline \bottomrule \end{longtable} \begin{longtable}[c]{@{}lrrrr@{}} \toprule & Female\_KO & Female\_WT & Male\_KO & Male\_WT\tabularnewline \midrule \endhead Y & 0 & 31 & 5 & 42\tabularnewline N & 7 & 898 & 2 & 880\tabularnewline \bottomrule \end{longtable} \end{document}
\section{Writing Pseudo Code in LaTeX} The package \NOTE{algorithm2e} allows you to write nearly all imaginable code structures as pseudo code in LaTeX. A reference of the commands can be found here: \url{http://www.cs.toronto.edu/~frank/Useful/algorithm2e.pdf}. Do not forget to wrap the pseudo code in a figure, so that it will be included in the list of figures.\\ The following example demonstrates the package with a bubble sort algorithm. Note that the \NOTE{repeat - until} directive corresponds to the \NOTE{do-while} directive: both test their condition after the loop body, but \NOTE{repeat - until} loops until its condition becomes true, whereas \NOTE{do-while} loops while its condition remains true. \begin{figure}[H] \begin{algorithm}[H] \KwData{A as Array to sort} \KwResult{A as sorted Array} int n $\leftarrow$ A.size \tcp*[l]{cache the initial size of A} \Repeat{n $\leq$ 1}{ int newn $\leftarrow$ 1 \tcp*[l]{position of the last swap in this pass} \For{int i=0; i<n-1; i++} { \If{A[i] > A[i+1]} { A.swap(i, i+1); newn $\leftarrow$ i+1 } } n $\leftarrow$ newn } return A \caption{bubbleSort(Array A)} \end{algorithm} \caption[BubbleSort Algorithm Pseudocode]{Pseudo code of the BubbleSort algorithm, derived from \url{https://de.wikipedia.org/wiki/Bubblesort}.} \end{figure} \newpage \section{Mathematical Equations} Writing mathematical equations can be a very important part of your work. This applies especially if you want to analyze and evaluate your gathered data. The following example shows a calculation of Cohen's Kappa, a measure of the inter-coder reliability in coding qualitative data.\\ Let's assume the following results of coding a text by two different persons: \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline result XY & \multicolumn{6}{c|}{Rater A}\\\hline \multirow{6}{*}{Rater B}& & \textbf{9} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{c(a) }\\\cline{2-7} & \textbf{9}& 27 &0& 0 &0 & 0.5510204082\\\cline{2-7} &\textbf{0} &0 &6 &0 &0&0.1224489796\\\cline{2-7} &\textbf{1} &0 &1 &7 &0&0.1632653061\\\cline{2-7} &\textbf{2} &0 &1 &7 &0&0.1632653061\\\cline{2-7} &\textbf{c(r)} & 0.5510204082 & 0.1632653061 & 0.2857142857 & 0& n=49\\\hline \end{tabular} \caption[Example data for mathematical equations]{A set of example data for further use in a mathematical equation.} \end{center} \end{table} Continuing from this basis, the calculation proceeds with Cohen's Kappa, defined as \begin{center} $k=\frac{Pr(a) - Pr(e)}{1 - Pr(e)}$ \end{center} where $Pr(a)$ is the actual observed agreement of the codes and $Pr(e)$ is the estimated agreement of the codes. The actual agreement is calculated by summing the counts of codes that have been assigned identically by both raters (in the table: the diagonal from upper left to lower right). This sum is then divided by the number of responses (n), which for the data above gives \begin{center} $Pr(a) = ( 27 + 6 + 7 + 0 ) / 49 = 0.8163265306$ \end{center} The estimated agreement is calculated from the proportions of each code relative to the number of responses (n). The overall estimated agreement is obtained by multiplying these chances for the two raters and summing up the results. The term is described as \begin{center} $Pr(e) = \displaystyle\sum_{i=1}^{m} c(r)_i \times c(a)_i $ \end{center} where $m$ is the number of all occurring codes, $i$ is the current code, and $c(r)$ and $c(a)$ are the proportions of assigned codes for the two raters (see table).
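Spelled out for the four codes occurring in the table above (9, 0, 1 and 2), this sum reads \begin{center} $Pr(e) = c(r)_9 \times c(a)_9 + c(r)_0 \times c(a)_0 + c(r)_1 \times c(a)_1 + c(r)_2 \times c(a)_2$ \end{center}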
Applying this term to the values in the table leads to: \begin{center} $Pr(e) = (0.5510204082 \times 0.5510204082) + (0.1632653061 \times 0.1224489796) + (0.2857142857 \times 0.1632653061) + (0 \times 0.1632653061) = 0.3702623907$ \end{center} The calculated values for $Pr(a)$ and $Pr(e)$ can now be inserted into the kappa equation, resulting in the following kappa for item a2s1i1a: \begin{center} $k=\frac{0.8163265306 -0.3702623907}{1 - 0.3702623907} = 0.7083333333$ \end{center} \section{Including Code into Your Text} Sometimes you want to include text which contains characters that could trigger commands. In this case it is useful to wrap the text in the \NOTE{verbatim} environment. The text is left uninterpreted and listed as is. \scriptsize \begin{verbatim} 2015_5_13/23-19-33:550: ************* LOG INFO ************* 2015_5_13/23-19-33:550: name: user1 2015_5_13/23-19-33:550: [0]: 534 2015_5_13/23-19-33:550: [1]: 321 2015_5_13/23-19-33:550: [2]: 094832 2015_5_13/23-19-33:550: [3]: 3980429804 \end{verbatim} \normalsize You can also include existing code and highlight its syntax, using the \NOTE{lstlisting} environment of the \NOTE{listings} package. Look at the declarations.tex file for the listings settings. The following example highlights XML syntax based on the given keywords. \small \begin{lstlisting}[keywordstyle=\color{blue},language=XML] <!-- an XML comment --> <entry type="normal" pattern="a" /> <entry type="normal" pattern="b" /> <entry type="formula" pattern="t=a+b+c" required="1"/> \end{lstlisting} \normalsize
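You do not have to paste existing code into the document at all: the \NOTE{listings} package can also read it directly from a file with the \NOTE{lstinputlisting} command, which keeps the listing in sync with the source file. A minimal sketch (the file name settings.xml is only a placeholder for a file that has to exist next to your .tex file):
\scriptsize
\begin{verbatim}
\lstinputlisting[keywordstyle=\color{blue},language=XML]{settings.xml}
\end{verbatim}
\normalsize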
%% intro.tex
%% Copyright (C) 2014-2016 by Thomas Auzinger <[email protected]>
%% Copyright (C) 2016- by Maximilian Hoheiser <[email protected].>
%
% This work may be distributed and/or modified under the
% conditions of the LaTeX Project Public License, either version 1.3
% of this license or (at your option) any later version.
% The latest version of this license is in
% http://www.latex-project.org/lppl.txt
% and version 1.3 or later is part of all distributions of LaTeX
% version 2005/12/01 or later.
%
% This work has the LPPL maintenance status `maintained'.
%
% The Current Maintainer of this work is Maximilian Hoheiser.
%
% This work consists of the files thesisLaTeX.dtx and thesisLaTeX.ins
% and the derived file thesisLaTeX.cls.
% This work also consists of the file example.tex.
\documentclass[../example.tex]{subfiles} % so that this document can be compiled on its own
\begin{document} \newacronym{ctan}{CTAN}{Comprehensive TeX Archive Network} \newacronym{faq}{FAQ}{Frequently Asked Questions} \newacronym{pdf}{PDF}{Portable Document Format} \newacronym{svn}{SVN}{Subversion} \newacronym{wysiwyg}{WYSIWYG}{What You See Is What You Get} \newacronym{wikibooks}{WikiBooks}{Open Books online library} \newglossaryentry{texteditor} { name={editor}, description={A text editor is a type of program used for editing plain text files} } \newglossaryentry{quickcopy} { name={QuickCopy}, description={QuickCopy is a Zotero function with which you can copy the citation code of an entry to the clipboard} } \newglossaryentry{github} { name={\textsc{GitHub}}, description={a web based project repository and host site for developers, which offers commit based management} } \chapter{Introduction to \LaTeX} Since \LaTeX\ is widely used in academia and industry, there exists a plethora of freely accessible introductions to the language. Reading through the guide at \url{https://en.wikibooks.org/wiki/LaTeX} serves as a comprehensive overview for most of the functionality and is highly recommended before starting with a thesis in \LaTeX. \section{Installation} A full \LaTeX\ distribution\index{distribution} consists not only of the binaries that convert the source files to the typeset documents, but also of a wide range of packages and their documentation. Depending on the operating system, different implementations are available as shown in Table~\ref{tab:distrib}. \textbf{Due to the large number of packages that are in everyday use and due to their high interdependence, it is paramount to keep the installed distribution\index{distribution} up to date.} Otherwise, obscure errors and tedious debugging ensue. \begin{table} \centering \begin{tabular}{cccc} \toprule Distribution & Unix & Windows & MacOS \\ \midrule TeX Live & \textbf{yes} & yes & (yes) \\ MacTeX & no & no & \textbf{yes} \\ MikTeX & no & \textbf{yes} & no \\ \bottomrule \end{tabular} \caption{\TeX/\LaTeX\ distributions for different operating systems. Recommended choice in \textbf{bold}.} \label{tab:distrib} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{table} If you use a Windows PC and the recommended MikTeX distribution, it will update itself; it is also recommended to activate the option to automatically install needed packages. \section{Editors} A multitude of \TeX\ \glspl{texteditor} are available, differing in their editing models, their supported operating systems and their feature sets.
A comprehensive overview of \glspl{texteditor} can be found at the Wikipedia page \url{https://en.wikipedia.org/wiki/Comparison_of_TeX_editors}. The author recommends the editor TeXmaker: \url{http://www.xm1math.net/texmaker/} because it is a cross-platform editor with the same UI on Windows, Linux and Mac, and it can display the compiled \verb|.pdf| file in the same window beside the \LaTeX\ source code. \section{Compilation} Modern editors usually provide the compilation programs to generate \gls{pdf} documents, and for most \LaTeX\ source files this is sufficient. More advanced \LaTeX\ functionality, such as glossaries and bibliographies, needs additional compilation steps, however. It is also possible that errors in the compilation process invalidate intermediate files and force subsequent compilation runs to fail. It is advisable to delete the intermediate files (\verb|.aux|, \verb|.bbl|, etc.) if errors occur and persist. All files that are not generated by the user are automatically regenerated. To compile the current document, the steps shown in Table~\ref{tab:compile} have to be taken. \begin{table} \centering \begin{tabular}{rl} \toprule & Description \\ \midrule 1 & Scan for refs, toc/lof/lot/loa items and cites \\ 2 & Build the bibliography \\ 3 & Link refs and build the toc/lof/lot/loa \\ 4 & Link the bibliography \\ 5 & Build the glossary \\ 6 & Build the acronyms \\ 7 & Build the index \\ 8 & Link the glossary, acronyms, and the index \\ 9 & Link the bookmarks \\ \midrule & Command \\ \midrule 1 & \verb|pdflatex.exe example| \\ 2 & \verb|bibtex.exe example| \\ 3 & \verb|pdflatex.exe example| \\ 4 & \verb|pdflatex.exe example| \\ 5 & \verb|makeindex.exe -t example.glg -s example.ist| \\ & \verb| -o example.gls example.glo| \\ 6 & \verb|makeindex.exe -t example.alg -s example.ist| \\ & \verb| -o example.acr example.acn| \\ 7 & \verb|makeindex.exe -t example.ilg -o example.ind example.idx| \\ 8 & \verb|pdflatex.exe example| \\ 9 & \verb|pdflatex.exe example| \\ \bottomrule \end{tabular} \caption{Compilation steps for this document. The following abbreviations were used: table of contents (toc), list of figures (lof), list of tables (lot), list of algorithms (loa).} \label{tab:compile} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{table} \section{Installing Packages and Classes} If you use a new package in your \LaTeX\ document, the distribution has to download it from \gls{ctan} before the compiler can create the \verb|.pdf| file. Most distributions do this automatically, but you can also manually install packages, whether you download them from the official repository or create your own. The procedure differs a little depending on the distribution, but the idea is the same: you have to put the package in a folder inside the \LaTeX\ tree of your \TeX\ distribution and update the package database.
For the distributions listed in Table~\ref{tab:distrib}, the folders and the update commands are listed in Table~\ref{tab:install}. \begin{table} \centering \begin{tabular}{lcc} \toprule Distribution & Folder Path & Update\\ \midrule TeX Live & \path{/usr/local/texlive/2009/texmf/} & \texttt{tlmgr update --list}\\ MacTeX & \path{} & \\ MikTeX & \path{C:\Programs(x86)\MikTeX 2.9\tex\latex\} & Refresh FNDB \\ \bottomrule \end{tabular} \caption{Installation paths for \LaTeX\ packages} \label{tab:install} \end{table} \section{Basic Functionality} In this section, various examples are given of the fundamental building blocks used in a thesis. Many \LaTeX\ commands have a rich set of options that can be supplied as optional arguments. The documentation of each command should be consulted to get an impression of the full spectrum of its functionality. It is also recommended to read a good \LaTeX\ book where the features are explained. Good books include Der LaTeX-Begleiter and LaTeX: Basissystem, Layout, Formelsatz. Also look at \gls{ctan} and \gls{wikibooks}. \subsection{Floats} Two main categories of page elements can be differentiated in the usual \LaTeX\ workflow: \textit{(i)} the main stream of text and \textit{(ii)} floating containers that are placed at convenient positions throughout the document. In most cases, tables, plots, and images are put into such containers since they are usually positioned at the top or bottom of pages. These are realized by the two environments \verb|figure| and \verb|table|, which also provide functionality for cross-referencing (see Table~\ref{tab:intro} and Figure~\ref{fig:intro}) and the generation of corresponding entries in the list of figures and the list of tables. Note that these environments solely act as containers and can be assigned arbitrary content. \subsection{Tables} A table in \LaTeX\ is created by using a \verb|tabular| environment or any of its extensions, e.g., \verb|tabularx|. The commands \verb|\multirow| and \verb|\multicolumn| allow table elements to span multiple rows and columns. \begin{table}[h] % placement specifier
\centering \begin{tabular}{lll} \toprule \multicolumn{2}{c}{Position} \\ \cmidrule{1-2} % partial horizontal rule
Group & Abbrev & Name \\ \midrule Goalkeeper & GK & Paul Robinson \\ \midrule \multirow{4}{*}{Defenders} & LB & Lucus Radebe \\ & DC & Michael Duburry \\ & DC & Dominic Matteo \\ & RB & Didier Domi \\ \midrule \multirow{3}{*}{Midfielders} & MC & David Batty \\ & MC & Eirik Bakke \\ & MC & Jody Morris \\ \midrule Forward & FW & Jamie McMaster \\ \midrule \multirow{2}{*}{Strikers} & ST & Alan Smith \\ & ST & Mark Viduka \\ \bottomrule \end{tabular} \caption{Adapted example from \url{https://en.wikibooks.org/wiki/LaTeX/Tables}. This example uses rules specific to the \texttt{booktabs} package and employs the multi-row functionality of the \texttt{multirow} package.} \label{tab:intro} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{table} \subsection{Images} An image is added to a document via the \verb|\includegraphics| command as shown in Figure~\ref{fig:intro}. The \verb|\subcaption| command can be used to reference subfigures, such as Figure~\ref{fig:intro:full width} and~\ref{fig:intro:half width}.
\begin{figure}[h] \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=\textwidth]{logo} \subcaption{The header logo at text width} \label{fig:intro:full width} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=0.5\textwidth]{logo} \subcaption{The header logo at half the text width} \label{fig:intro:half width} \end{subfigure} \caption{The header logo at different sizes.} \label{fig:intro} % \label has to be placed AFTER \caption (or \subcaption) to produce correct cross-references.
\end{figure} It is also possible to add an array of images with the \verb|\subcaption| command, such as in Figure~\ref{fig:array}. \begin{figure}[h] \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=0.6\textwidth]{TU_Logo} \subcaption{upper left} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=0.6\textwidth]{IFP_Logo} \subcaption{upper right} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=0.6\textwidth]{INF_Logo} \subcaption{lower left} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering \includegraphics[width=0.6\textwidth]{TU_Logo} \subcaption{lower right} \end{subfigure} \caption{Four figures in an array.} \label{fig:array} \end{figure} \subsection{Mathematical Expressions} One of the original motivations for creating the \TeX\ system was the need for mathematical typesetting. To this day, \LaTeX\ is the preferred system to write math-heavy documents and a wide variety of functions aids the author in this task. A mathematical expression can be inserted inline as $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$, outside of the text stream as \[ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} \] or as a numbered equation with \begin{equation} \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}. \end{equation} Mathematical formulas and expressions can also be used in flowing text, for example: the mathematical expression $10 \cdot 100 = 10^{3}$ is simple. \subsection{Pseudo Code} The presentation of algorithms can be achieved with various packages, such as \verb|algorithmic|, \verb|algorithm2e|, \verb|algorithmicx|, or \verb|algpseudocode|. See \url{https://tex.stackexchange.com/questions/229355} for an overview. An example of the use of the \verb|algorithm2e| package is given with Algorithm~\ref{alg:gauss-seidel}. \begin{algorithm} \SetKw{BreakFor}{break for} \KwIn{A scalar~$\epsilon$, a matrix $\mathbf{A} = (a_{ij})$, a vector $\vec{b}$, and an initial vector $\vec{x}^{(0)}$} \KwOut{$\vec{x}^{(n)}$ with $\mathbf{A} \vec{x}^{(n)} \approx \vec{b}$} \For{$k\leftarrow 1$ \KwTo maximum iterations} { \For{$i\leftarrow 1$ \KwTo $n$} { $x_i^{(k)} = \frac{1}{a_{ii}} \left(b_i-\sum_{j<i} a_{ij} x_j^{(k)} - \sum_{j>i} a_{ij} x_j^{(k-1)} \right)$\; } \If{$\lvert\vec{x}^{(k)}-\vec{x}^{(k-1)}\rvert < \epsilon$} {\BreakFor\;} } \Return{$\vec{x}^{(k)}$\;} \caption{Gauss-Seidel} \label{alg:gauss-seidel} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm} \section{Bibliography} The referencing of prior work is a fundamental requirement of academic writing and is well supported by \LaTeX. The \textsc{Bib}\TeX\ reference management software is the most commonly used, but because of its more advanced features it is advised to use \textsc{Bib}\LaTeX\ for this purpose.
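As a minimal sketch of such a setup (the backend, the style and the file name references.bib are only placeholders that have to be adapted to your project), \textsc{Bib}\LaTeX\ is loaded in the preamble, pointed at the bibliography file, and the formatted bibliography is printed at the end of the document:
\begin{verbatim}
\usepackage[backend=biber,style=numeric]{biblatex}
\addbibresource{references.bib}
...
\printbibliography
\end{verbatim}
Note that if the \verb|biber| backend is chosen, the bibliography is built by running \verb|biber| instead of \verb|bibtex.exe| in the compilation sequence of Table~\ref{tab:compile}.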
With \textsc{Bib}\LaTeX\ it is also possible and advised to use dedicated bibliography management software; Table~\ref{tab:bib} gives a short overview of commonly used programs. The author uses \textsc{Zotero}; it needs a bit more fiddling before it works properly with \textsc{Bib}\LaTeX\ than e.g.\ \textsc{JabRef}, but the advanced importing features of \textsc{Zotero} make it worth the extra effort. \begin{table}[h] \centering \begin{tabular}{llllll} \toprule Name & Platform & \textsc{Bib}\LaTeX\ & \textsc{Bib}\TeX\ & Filemanager & Sync \\ \midrule \href{https://www.zotero.org/}{\textsc{Zotero}} & All & Yes & Yes & Partial & Yes \\ \href{https://www.jabref.org/}{\textsc{JabRef}} & Java & Yes & Yes & Partial & No \\ \href{https://www.mendeley.com/}{\textsc{Mendeley}} & All + Android & No & Yes & Yes & Yes, forced file sync \\ \bottomrule \end{tabular} \caption{Bibliography management software, \textit{Platform: All = Windows, Mac, Linux}} \label{tab:bib} \end{table} To get proper \textsc{Bib}\LaTeX\ support and the ability to \gls{quickcopy}, the plugin \href{https://github.com/retorquere/zotero-better-bibtex}{\textsc{Better BibTeX}}, which can be found on \gls{github}, has to be installed. For instructions on \textsc{Better BibTeX}, use the \gls{github} wiki of the project. To use the bibliography from \textsc{Zotero}, it has to be exported to a \verb|.bib| file. \textsc{Zotero} can sync a collection to a given \verb|.bib| file and keep that file updated. \par Using the \verb|\cite| command, it is possible to reference entries in a \verb|.bib| file out of the text stream, e.g., as~\cite{Turing1936}. If you are using \textsc{Zotero}, you can copy the citation key to the clipboard with the \gls{quickcopy} function; to activate this function, use the keyboard shortcut \verb|ctrl + shift + C|. The generation of the formatted bibliography needs a separate execution of \verb|bibtex.exe| (see Table~\ref{tab:compile}). \section{Table of Contents} The table of contents is automatically built by successive runs of the compilation, e.g., of \verb|pdflatex.exe|. The command \verb|\setsecnumdepth| allows the specification of the depth of the table of contents and additional entries can be added via \verb|\addcontentsline|. The starred versions of the sectioning commands, i.e., \verb|\chapter*|, \verb|\section*|, etc., remove the corresponding entry from the table of contents. \section{Acronyms / Glossary / Index} The list of acronyms, the glossary, and the index need to be built with a separate execution of \verb|makeindex| (see Table~\ref{tab:compile}). Acronyms have to be specified with \verb|\newacronym| while glossary entries use \verb|\newglossaryentry|. Both are then used in the document content with one of the variants of \verb|\gls|, such as \verb|\Gls|, \verb|\glspl|, or \verb|\Glspl|. Index items are simply generated by placing \verb|\index|\marg{entry} next to all the words that correspond to the index entry \meta{entry}. Note that many enhancements exist for these functionalities and the documentation of the \verb|makeindex| and the \verb|glossaries| packages should be consulted. \section{Tips} Since \TeX\ and its successors do not employ a \gls{wysiwyg} editing scheme, several guidelines improve the readability of the source content: \begin{itemize} \item Each sentence in the source text should start with a new line. This not only helps the user navigate through the text, but also enables revision control systems (e.g.
\gls{svn}, Git) to show the exact changes authored by different users. Paragraphs are separated by one (or more) empty lines. \item Environments, which are defined by a matching pair of \verb|\begin{name}| and \verb|\end{name}|, can be indented by whitespace to show their hierarchical structure. \item In most cases, the explicit use of whitespace (e.g. \verb|\hspace{4em}| or \verb|\vspace{1.5cm}|) violates typographic guidelines and rules. Explicit formatting should only be employed as a last resort and, most likely, better ways to achieve the desired layout can be found by a quick web search. \item The use of bold or italic text is generally not supported by typographic considerations and the semantically meaningful \verb|\emph{|\texttt{$\dots$}\verb|}| should be used. \end{itemize} The predominant application of the \LaTeX\ system is the generation of \gls{pdf} files via the \textsc{Pdf}\LaTeX\ binaries. In the current version of \textsc{Pdf}\LaTeX, it is possible that absolute file paths and user account names are embedded in the final \gls{pdf} document. While this poses only a minor security issue for all documents, it is highly problematic for double blind reviews. The process shown in Table~\ref{tab:ps2pdf} can be employed to strip all private information from the final \gls{pdf} document. \begin{table}[h] \centering \begin{tabular}{rl} \toprule & Command \\ \midrule 1 & Rename the \gls{pdf} document \verb|final.pdf| to \verb|final.ps|. \\ 2 & Execute the following command: \\ & \verb|ps2pdf -dPDFSETTINGS#/prepress ^| \\ & \verb| -dCompatibilityLevel#1.4 ^| \\ & \verb| -dAutoFilterColorImages#false ^| \\ & \verb| -dAutoFilterGrayImages#false ^| \\ & \verb| -dColorImageFilter#/FlateEncode ^| \\ & \verb| -dGrayImageFilter#/FlateEncode ^| \\ & \verb| -dMonoImageFilter#/FlateEncode ^| \\ & \verb| -dDownsampleColorImages#false ^| \\ & \verb| -dDownsampleGrayImages#false ^| \\ & \verb| final.ps final.pdf| \\ \bottomrule \end{tabular} On Unix-based systems, replace \verb|#| with \verb|=| and \verb|^| with \verb|\|. \caption{Anonymization of \gls{pdf} documents.} \label{tab:ps2pdf} \end{table} \section{Resources} \subsection{Useful Links} In the following, a listing of useful web resources is given. \begin{description} \item[\url{https://en.wikibooks.org/wiki/LaTeX}] An extensive wiki-based guide to \LaTeX. \item[\url{http://www.tex.ac.uk/faq}] A (huge) set of \gls{faq} about \TeX\ and \LaTeX. \item[\url{https://tex.stackexchange.com/}] The definitive user forum for non-trivial \LaTeX-related questions and answers. \end{description} \subsection[Comprehensive TeX Archive Network]{\gls{ctan}} \label{ch:ctan} The \gls{ctan} is the official repository for all \TeX\ related material. It can be accessed via \url{https://www.ctan.org/} and hosts (among other things) a huge variety of packages that provide extended functionality for \TeX\ and its successors. Note that most packages contain \gls{pdf} documentation that can be directly accessed via \gls{ctan}. In the following, a short, non-exhaustive list of relevant \gls{ctan}-hosted packages is given together with their relative path. \begin{description}[itemsep=0ex] \item[\href{https://www.ctan.org/pkg/algorithm2e}{algorithm2e}] Functionality for writing pseudo code. \item[\href{https://www.ctan.org/pkg/amsmath}{amsmath}] Enhanced functionality for typesetting mathematical expressions. \item[\href{https://www.ctan.org/pkg/amsfonts}{amssymb}] Provides a multitude of mathematical symbols. 
\item[\href{https://www.ctan.org/pkg/booktabs}{booktabs}] Improved typesetting of tables. \item[\href{https://www.ctan.org/pkg/enumitem}{enumitem}] User control over the layout of lists (\verb|itemize|, \verb|enumerate|, \verb|description|). \item[\href{https://www.ctan.org/pkg/fontenc}{fontenc}] Determines the font encoding of the output. \item[\href{https://www.ctan.org/pkg/glossaries}{glossaries}] Creates glossaries and lists of acronyms. \item[\href{https://www.ctan.org/pkg/graphicx}{graphicx}] Inserts images into the document. \item[\href{https://www.ctan.org/pkg/inputenc}{inputenc}] Determines the encoding of the input. \item[\href{https://www.ctan.org/pkg/l2tabu}{l2tabu}] A description of bad practices when using \LaTeX. \item[\href{https://www.ctan.org/pkg/mathtools}{mathtools}] Further extension of mathematical typesetting. \item[\href{https://www.ctan.org/pkg/memoir}{memoir}] The document class upon which the \verb|vutinfth| document class is based. \item[\href{https://www.ctan.org/pkg/multirow}{multirow}] Allows table elements to span several rows. \item[\href{https://www.ctan.org/pkg/pgfplots}{pgfplots}] Function plot drawings. \item[\href{https://www.ctan.org/pkg/pgf}{pgf/TikZ}] Creating graphics inside \LaTeX\ documents. \item[\href{https://www.ctan.org/pkg/subcaption}{subcaption}] Allows the use of subfigures and enables their referencing. \item[\href{https://www.ctan.org/tex-archive/info/symbols/comprehensive/}{symbols/comprehensive}] A listing of around 5000 symbols that can be used with \LaTeX. \item[\href{https://www.ctan.org/pkg/voss-mathmode}{voss-mathmode}] A comprehensive overview of typesetting mathematics in \LaTeX. \item[\href{https://www.ctan.org/pkg/xcolor}{xcolor}] Allows the definition and use of colors. \end{description} \end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % figures - https://www.sharelatex.com/learn/Inserting_Images % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Figures} % BASIC EXAMPLE %================ \begin{figure}[h!tb] \centering \includegraphics[width=10cm]{fig/plot1} % not necessary to give extension - now you can shift between compiling to ps or to pdf without any problems \caption[Do not end short caption with full-stop]{Each figure must be supplied with a long caption, making the figure stand-alone and ended with a full-stop.} \end{figure} \newpage % SUBFIG EXAMPLE %================ % % usage: \subfloat[][caption]{...figure code...\label{label}} The subfigures are Figures \subref{firstfigure}, \subref{secondfigure}, \subref{thirdfigure} and \subref{fourthfigure}. \begin{figure} \centering \subfloat[][First subcaption (No full-stop)]{ \includegraphics{fig/figure} \label{firstfigure} } \quad \subfloat[][Second subcaption (No full-stop)]{ \includegraphics{fig/figure} \label{secondfigure} } \\ \subfloat[][Third subcaption (No full-stop)]{ \includegraphics{fig/figure} \label{thirdfigure} } \quad \subfloat[][Fourth subcaption (No full-stop)]{ \includegraphics{fig/figure} \label{fourthfigure} } \caption[Do not end short caption with full-stop]{End the main caption with a full-stop, but not each of the sub-figure captions!} \label{thislabel} \end{figure}
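% GRAPHICS PATH (optional sketch)
%================
% If all images live in one folder (assumed here to be fig/, as in the examples above),
% the graphicx package lets you declare the search path once - usually in the preamble -
% so that the folder prefix can be dropped from every \includegraphics call:
%
% \graphicspath{{fig/}}
% \includegraphics[width=10cm]{plot1}   % now resolves to fig/plot1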
%GiG \documentclass{beamer} \usetheme{Copenhagen} \setbeamertemplate{navigation symbols}{} \setbeamertemplate{headline}{} \DeclareMathOperator*{\argmax}{arg\,max} \usepackage{hyperref} \definecolor{azure}{rgb}{0.0, 0.5, 1.0} %\newcommand{\tblue}[1]{\textcolor{blue}{#1}} \newcommand{\tblue}[1]{{\Large {\textcolor{azure}{#1}}}} \newcommand{\thblue}[1]{{\Huge {\textcolor{azure}{#1}}}} \newcommand{\hred}[1]{{\textcolor{red}{#1}}} \newcommand{\furl}[1]{{\footnote{\url{#1}}}} \title[Saravanan Thirumuruganathan] {Lecture 1: Course Introduction and Logistics} \author[CSE 5334] {Instructor: Saravanan Thirumuruganathan} \date[] \begin{document} \begin{frame} \titlepage \end{frame} %\begin{frame}{Outline} % \tableofcontents % % You might wish to add the option [pausesections] %\end{frame} \section{Outline} \begin{frame} \frametitle {Outline} \begin{enumerate} \item Data Mining/Science Basics \item Logistics \item Scientific Python \item IPython Notebook Demo \end{enumerate} \end{frame} %\begin{frame}{In-Class Quizzes} %\begin{itemize} %\item {\Large {\bf URL:}} {\LARGE \bf \url{http://m.socrative.com/}} %\item {\Large {\bf Room Name:} {\LARGE \bf 4f2bb99e}} %\end{itemize} %\end{frame} \section{Data Mining Introduction} \begin{frame}{} \begin{center} \thblue{Introduction To Data Mining/Science} \end{center} \end{frame} \begin{frame}{Big Data\footnote{\url{https://www.pinterest.com/pin/101753272804937744/}}} \begin{center} \includegraphics[scale=0.38]{bigDataStarTrek.jpg} \end{center} \end{frame} \begin{frame}{Big Data\footnote{\url{http://memegenerator.net/instance/55214797}}} \begin{center} \includegraphics[scale=0.8]{bigDataPulpFiction.jpg} \end{center} \end{frame} \begin{frame}{Big Data} \begin{center} {\Large``Between the dawn of civilization and 2003, we only created five exabytes of information; now we're creating that amount every two days.'' \\ ~\\ \qquad \qquad - Eric Schmidt, Google} \\ ~\\~\\ One Second on the Internet: \url{http://onesecond.designly.com/} \end{center} \end{frame} \begin{frame}{Smarter Devices} \begin{center} \includegraphics[scale=0.28]{smarterDevices.png} \end{center} \end{frame} \begin{frame}{Commodity Computing} \begin{center} \includegraphics[scale=0.28]{commodityComputing.png} \end{center} \end{frame} \begin{frame}{Ubiquitous Connectivity} \begin{center} \includegraphics[scale=0.28]{ubiquitousConnectivity.png} \end{center} \end{frame} \begin{frame}{Big Data - 4 V's} \begin{itemize} \item Volume \item Velocity \item Variety \item {\em Veracity} \end{itemize} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.34]{bigDataVolume.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.34]{bigDataVelocity.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.4]{bigDataVariety.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.5]{bigDataVeracity.png}\footnote{\url{http://www.ibmbigdatahub.com/}} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.5]{dataHero.jpg}\footnote{\url{https://www.behance.net/gallery/5958295/Data-Hero-Oya-Group}} \end{center} \end{frame} \begin{frame}{Data Mining} \begin{itemize} \item Process of semi‐automatically analyzing large databases to find {\bf patterns} that are \begin{itemize} \item {\bf valid}: hold on new data with some certainty \item {\bf novel}: non‐obvious to the system \item {\bf useful}: should be possible to act on the item \item {\bf understandable}: humans should be able to interpret the 
pattern \end{itemize} \end{itemize} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.5]{dataMiningVennDiagram.png} \end{center} \end{frame} \begin{frame}{Data Science} \begin{itemize} \item To gain insights into data through computation, statistics, and visualization \item ``A data scientist is someone who knows more statistics than a computer scientist and more computer science than a statistician'' - Josh Blumenstock \end{itemize} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.4]{dataScienceVennDiagram.png} \end{center} \end{frame} \begin{frame}{Google} \begin{center} \includegraphics[scale=0.5]{google.png} \end{center} \end{frame} \begin{frame}{Facebook} \begin{center} \includegraphics[scale=0.5]{facebook.png} \end{center} \end{frame} \begin{frame}{Netflix} \begin{center} \includegraphics[scale=0.5]{netflix.png} \end{center} \end{frame} \begin{frame}{eHarmony} \begin{center} \includegraphics[scale=0.5]{eHarmony.png} \end{center} \end{frame} \begin{frame}{FICO} \begin{center} \includegraphics[scale=0.5]{fico.png} \end{center} \end{frame} \begin{frame}{FlightCaster} \begin{center} \includegraphics[scale=0.5]{flightCaster.png} \end{center} \end{frame} \begin{frame}{IBM's Watson} \begin{center} \includegraphics[scale=0.5]{watson.png} \end{center} \end{frame} \begin{frame}{Handwritten postal codes} \begin{center} \includegraphics[scale=0.5]{ocr.png} \end{center} \end{frame} \begin{frame} \begin{center} \includegraphics[scale=0.3]{dmJobs.png} \end{center} \end{frame} \section{Logistics} \begin{frame}{} \begin{center} \thblue{Logistics} \end{center} \end{frame} \begin{frame}{My Background} \begin{itemize} \item Saravanan Thirumuruganathan \item Final year PhD Student working with Dr.Gautam Das \item Website: \url{http://saravananthirumuruganathan.appspot.com} \item Interests: Data Mining, Algorithms, Data Exploration, Social Networks, Machine Learning, Artificial Intelligence \end{itemize} \end{frame} \begin{frame}{Course Details} \begin{itemize} \item Lectures: TuTh 2-3:30pm, PKH 321 \item Course Website: \url{http://saravanan-thirumuruganathan.github.io/cse5334Spring2015/index.html} \item Instructor: Saravanan Thirumuruganathan \begin{itemize} \item Mail: firstname.lastname[at]mavs.uta.edu \item Office Hours: TuTh 12:30-2:00pm, Fri: 2-5pm or by appointment \end{itemize} \item TA: TBD \end{itemize} \end{frame} \begin{frame}{Piazza} \begin{itemize} \item Q\&A Platform \item ``mixture between a wiki and a forum'' \item \url{https://piazza.com/class/i551721xpki6w7} \item Please use it as much as possible for public/common questions and clarifications \end{itemize} \end{frame} \begin{frame}{Text Books\footnote{\url{http://www.santabanta.com/}}} \begin{center} \includegraphics[scale=0.5]{santaBanta.jpg} \end{center} \end{frame} \begin{frame}{Text Books} \begin{itemize} \item There is no book to cover them all \item Multiple books (free eBook links in Website) \begin{itemize} \item {\bf [MMDS]} Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman, Jeff Ullman. \item {\bf [DMA]} Data Mining and Analysis: Fundamental Concepts and Algorithms by Mohammed Zaki and Wagner Meira. \item {\bf [ISLR]} An Introduction to Statistical Learning with Applications in R. \item {\bf [IIR]} Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schutze. 
\end{itemize} \end{itemize} \end{frame} \begin{frame}{Grading} \begin{itemize} \item $5$ Programming Projects: 30\% \item Capstone Project: 10\% \item Midterm: 30\% \item Final : 30\% (non comprehensive) \item Grading will be on a curve \end{itemize} \end{frame} \begin{frame}{Programming Projects} \begin{itemize} \item Team based, 1-3 members \item Coding will be in Python \item Some of them will be intensive \item Startup code, testing code will be provided \item Capstone project \item Data Science portfolios \end{itemize} \end{frame} \begin{frame}{Programming Projects} \begin{itemize} \item Project/Dataset suggestions welcome! \begin{enumerate} \item Exploratory Data Analysis using Python, Pandas, Matplotlib and Seaborn. \item Classification Algorithms using Scikit-learn \item Clustering using Scikit-learn \item Search Engine Basics \item Recommender Systems using Scipy \item Capstone Project: Putting it all together \end{enumerate} \end{itemize} \end{frame} \begin{frame}{Late Days} \begin{itemize} \item Due at $11:59$pm \item $5$ late days per student for the semester \item No more than $2$ could be used per project \item No increments \item Late Penalty: $50$\% per day \item No point in submitting after 4 days \end{itemize} \end{frame} \begin{frame}{Assignment Advice} \begin{itemize} \item Start early \item Find good team members (Piazza support will be provided) \item First assignment will be out in $2$ weeks \item Okay to change teams per project \item Everyone in team gets same score \item Collaboration/Brainstorming is Okay! \item No plagiarism! \end{itemize} \end{frame} \begin{frame}{Data Science Process} \begin{center} \includegraphics[scale=0.32]{dataScienceProcess.png} \end{center} \end{frame} \begin{frame}{Topics Covered} \begin{itemize} \item Data collection, visualization \item Exploratory data analysis \item Classifiers and Ensembles \item Clustering \item Search engine basics \item Recommender basics \item Lot of useful tools: dimensionality reduction, feature selection, hypothesis testing, sampling etc \end{itemize} \end{frame} \begin{frame}{} \begin{center} \thblue{Scientific Python} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{python.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.35]{python2.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.7]{python4.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{scientificPython.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \thblue{IPython Notebooks} \end{center} \end{frame} \begin{frame}{Process Books} \begin{center} \includegraphics[scale=0.3]{processBooks.png} \end{center} \end{frame} \begin{frame}{IPython Notebooks} \begin{center} \includegraphics[scale=0.3]{ipythonNotebooks.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{ipythonIsGreat.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{pythonInterpreter.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{iPythonInterpreter.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{ipythonWeb.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{ipythonArch.png} \end{center} \end{frame} \begin{frame}{} \begin{center} \includegraphics[scale=0.3]{ipythonMatplotlib.png} \end{center} \end{frame} %\section{Summary} %\begin{frame}{Summary} 
%\tblue{Major Concepts:} %\begin{itemize} %\item %\end{itemize} %\end{frame} \begin{frame}{Slide Material References} \begin{itemize} \item Slides from `Introduction to Statistical Learning' by James, Witten, Hastie, and Tibshirani \item Slides from MMDS \item Slides from Harvard CS 109 (2013 and 2014) \item Slides from Dr.Ryan Tibshirani \end{itemize} \end{frame} \end{document}
\documentclass[]{spie} \usepackage[breaklinks,colorlinks,urlcolor=blue,citecolor=blue,linkcolor=blue]{hyperref} \usepackage{longtable} \usepackage{graphicx} % lsstdoc documentation: https://lsst-texmf.lsst.io/lsstdoc.html \input{meta} % Package imports go here. % Local commands go here. \date{\today} \input{authors} \title{Documentation Automation for the Verification and Validation of Rubin Observatory Software} \newcommand{\docRef}{DMTN-140} \newcommand{\docUpstreamLocation}{\url{https://github.com/lsst-dm/dmtn-140}} \begin{document} \maketitle %\hypersetup{pdftitle={\@title}, pdfauthor={\@author}, pdfkeywords={\@keywords}} \input{abstract} \input{body} \appendix % Remove this when you strart your paper \input{appendix} % Include all the relevant bib files. % https://lsst-texmf.lsst.io/lsstdoc.html#bibliographies \section{References} \label{sec:bib} %\bibliographystyle{yahapj} \bibliographystyle{spiebib} \bibliography{local} % Make sure lsst-texmf/bin/generateAcronyms.py is in your path \section{Acronyms} \label{sec:acronyms} %\input{acronyms.tex} \begin{tabular}{p{0.145\textwidth}p{0.8\textwidth}}\hline \textbf{Acronym} & \textbf{Description} \\\hline & \\\hline API & Application Programming Interface \\\hline ATM & Adaptavist Test Management \\\hline AURA & Association of Universities for Research in Astronomy \\\hline CA & Control (or Cost) Account \\\hline CI & Continuous Integration \\\hline DE & dark energy \\\hline DM & Data Management \\\hline DMSR & DM System Requirements; LSE-61 \\\hline DPAC & Data Processing and Analysis Consortium (Gaia) \\\hline HTML & HyperText Markup Language \\\hline LPM & LSST Project Management (Document Handle) \\\hline LSE & LSST Systems Engineering (Document Handle) \\\hline LSST & Legacy Survey of Space and Time (formerly Large Synoptic Survey Telescope) \\\hline LSSTC & LSST Corporation \\\hline LaTeX & (Leslie) Lamport TeX (document markup language and document preparation system) \\\hline MBSE & model-based systems engineering \\\hline NASA & National Aeronautics and Space Administration \\\hline PR & Pull Request \\\hline SLAC & SLAC National Accelerator Laboratory (formerly Stanford Linear Accelerator Center; SLAC is now no longer an acronym) \\\hline SQR & SQuARE document handle \\\hline SRD & LSST Science Requirements; LPM-17 \\\hline VCD & Verification Control Document \\\hline \end{tabular} \end{document}
%%% CLASS SETTING %%% \documentclass[letterpaper, 12pt]{article} \usepackage{natbib} \usepackage[margin=1in]{geometry} % sets page layout \bibpunct[, ]{(}{)}{,}{a}{}{,} % sets the punctuation of the bibliography entires. \usepackage{authblk} %%% Paper Information %%% \title{Local Bandwagoning and National Balancing: How Uninformed Voters Respond to the Partisan Environment} % set the title of the document \author{Gento Kato\thanks{Gento Kato is Ph.D. Student, Department of Political Science, One Shields Avenue Davis, CA 95616 ([email protected]). The previous version of this paper was presented at the 77th Annual Midwest Political Science Association Conference, Palmer House Hilton, Chicago, IL, April 6, 2019. The latest version of this paper is available at \texttt{https://github.com/gentok/UninformedChoice}.}} \affil{University of California, Davis} \date{Last Update: September 9, 2019} % Other Packages/Settings \usepackage{amsfonts, amsmath, amssymb, bm} %Math fonts and symbols \usepackage[format=hang, justification=centering]{caption} \usepackage{dcolumn, multirow} % decimal-aligned columns, multi-row cells \usepackage{booktabs} % Table formatting \usepackage{graphicx, subfigure, float} % graphics commands \usepackage[colorlinks=true, citecolor=blue]{hyperref} \usepackage{setspace}% allows toggling of double/single-spacing \doublespace % set document spacing to double %\usepackage{endnotes} %\let\footnote=\endnote % Draw figure % \usepackage{tikz} % \usetikzlibrary{calc} % Add note to Figures \newcommand{\floatnote}[1]{\vspace{\abovecaptionskip}\caption*{\textbf{Note:} #1}\vspace{-\abovecaptionskip}} % Change section font \usepackage{sectsty} \sectionfont{\fontsize{14}{14}\selectfont} \subsectionfont{\fontsize{12}{12}\selectfont\itshape} \subsubsectionfont{\fontsize{12}{12}\selectfont} % Move figures to the last %\usepackage[figuresonly]{endfloat} %nomarkers, to avoid markers %\renewcommand{\listoffigures}{} % but suppress these lists \begin{document} \begin{titlepage} \singlespace \maketitle \thispagestyle{empty} % \clearpage % \thispagestyle{empty} \begin{abstract} Scholarly debate on civic competence often assumes that political knowledge is the prerequisite for systematic and ``correct'' decision-making. Uninformed voters, then, are portrayed as unsystematic and misguided decision-makers. The current study challenges this assumption by arguing that uninformed voters may not rely on (potentially misguided) individual preferences but rather refer to the partisan environment, the partisan voting patterns in past elections, to guide their decisions. The analysis of Cooperative Congressional Election Study (CCES) and American National Election Studies (ANES) provides the supporting evidence to this claim. Uninformed voters respond to the partisan environment in two ways: First, they \textit{bandwagon} with the local partisan environment; second, they \textit{balance} against the national partisan environment. The results provide the view of uninformed voters as more systematic and effective decision-makers than previously suggested. \end{abstract} \end{titlepage} \clearpage \doublespace \par In the studies of voter competence, the non-randomness and ``correctness'' of voting decisions are believed to be strictly increasing in the level of political knowledge. Uninformed voters, under this view, are portrayed as the most unsystematic and misguided decision-makers. 
Their individual preferences are unstable \citep{Converse1964thna, Zaller1992thna}, internally inconsistent \citep{Broockman2016apto}, or misinformed \citep{Kuklinski2000mian, Fowler2014thpo}. The evidence often leads to the conclusion that uninformed voting cannot be explained systematically. Any deviation of uninformed votes from informed votes \citep{Bartels1996unvo} is attributed to biased preference formation of uninformed voters. Yet this conclusion is an assumption rather than an explanation of the nature of uninformed voting. Previous studies rarely explore the possible reasoning behind uninformed voting patterns. \par Given this gap in the literature, this paper asks the following research question: \textit{how can the behavior of uninformed voters be systematically explained?} There are two propositions. First, I argue that individual preferences \textit{do not} explain the behavior of uninformed voters. Uninformed voters know that their preferences are weakly reasoned and potentially biased, and thus have no incentive to use their own preferences to inform their decisions. Second, the partisan environment, represented by aggregated partisan voting patterns from past elections \citep{Miller1956onpo, Putnam1966poat}, explains uninformed voting. If a voter is informed, he or she has no reason to refer to partisan contexts. For uninformed voters, however, two contrasting sets of logic, \textit{bandwagon} and \textit{balance}, explain how and why contexts can be useful for their voting decisions. \par This paper consists of two studies that assess the implications of context-based uninformed voting. Both studies test the propositions through the analysis of American presidential elections. The first study assesses the role of local partisan environments through the 2008 and 2016 Cooperative Congressional Election Study (CCES), and the second study examines the role of national partisan environments through the 1972--2016 American National Election Studies (ANES). \par The empirical results show that uninformed and informed voters base their decisions on separate sets of resources. Partisan environments explain uninformed voting but not informed voting; individual perceptions of ideological proximity and valence differential explain informed voting but not uninformed voting. Furthermore, the role of partisan context changes with the level of context: uninformed voters bandwagon with local partisan environments but balance against national partisan environments. % The simulation results imply that especially when the knowledge level is highly unequal across partisan groups, the conditional strategy of context-based uninformed voting produces more favorable democratic outcomes than alternative strategies. \par The inquiry in the current paper sheds new light on the study of voting behavior. First, it helps to understand the decision-making process of uninformed voters. Previous empirical studies often emphasize the inconsistent and misguided nature of uninformed preferences, but this paper offers a picture of uninformed voters who can (at least partially) cope with this disadvantage. Second, it opens up a new way to assess voter competence. Instead of asking if voters are informed enough to acquire ``correct'' preferences, we can ask whether the behavioral rules of uninformed voters contribute to individual or social welfare. Having uncertain and ``incorrect'' individual preferences does not necessarily undermine the quality of democratic decision-making.
\section*{Information Effects Unexplained} \par The capacity of citizens to make competent decisions is one of the central questions in the study of democratic voting behavior. Scholars repeatedly suggest that ordinary voters rarely hold a sufficient level of political knowledge to form consistent political preferences. American voters are uninformed about a wide range of political facts \citep{Dellicarpini1996wham}, and those ill-informed voters cannot hold a political ideology that is consistent across time and issues \citep{Converse1964thna, Zaller1992thna, Broockman2016apto}. While some argue that voters need only necessary signals, not the full set of factual knowledge, to form ``informed'' preferences \citep[e.g.,][]{Lupia1994shve}, it has been shown that signaled preferences can be biased and misleading \citep{Kuklinski2000mian, Fowler2014thpo, Boudreau2015loin}. \par In the search for implications of this widely documented ignorance, scholars have undertaken empirical assessments of ``information effects.'' They are interested in how uninformed voting patterns deviate from their informed counterparts. In his canonical study, \cite{Bartels1996unvo} analyzes the presidential vote choice in the American National Election Study and demonstrates that informed and uninformed voters have different tendencies in how demographic characteristics influence voting decisions. He further shows that those differential voting patterns have a sizable influence on aggregated electoral outcomes. Similarly, \cite{Arnold2012thel} analyzes the Comparative Study of Electoral Systems (CSES) and shows that hypothetical fully informed voting may change the electoral outcome across a wide range of democracies. \par The substantive implications of these empirical findings, however, are not always clear and generalizable. The interpretations of the differences are mostly descriptive and rarely offer a systematic explanation. As a result, we have little understanding as to why there are differences between uninformed and informed voting and how they influence the democratic outcome. The next section introduces theories of voting that can offer those missing explanations. %to move the scholarly discussion forward. \section*{Uninformed Voting and the Partisan Environment} \par The accumulation of voting research offers several candidate explanations regarding how and why uninformed voting patterns differ from informed voting patterns. The most straightforward explanation is that uninformed voters make decisions based on a perception of the state of the world that deviates from the informed perception. This explanation implies that individual preferences (i.e., ideology and valence evaluation) predict uninformed and informed voting in the same manner; differences appear because less informed voters receive preference signals that are potentially more biased and less stable. \par Suppose that two candidates from two parties, $B$ and $R$, are running in a plurality election.
The following probabilistic voting function can describe the above voting calculus: \begin{align} Pr(\text{Vote R})_i &= \Lambda\left( b_1\left( \left(\widehat{\delta_i-\Delta_{R}}\right)^2 - \left(\widehat{\delta_i-\Delta_{B}}\right)^2 \right) + b_2\left(\widehat{\Theta_{R} - \Theta_{B}}\right) + \gamma_i + \epsilon_i \right) \label{eq1} \\ \widehat{\delta_i-\Delta_{R}} &= \delta_i-\Delta_{R} + \eta_{\Delta_R}(k_i) \label{eq2} \\ \widehat{\delta_i-\Delta_{B}} &= \delta_i-\Delta_{B} + \eta_{\Delta_B}(k_i) \label{eq3} \\ \widehat{\Theta_{R} - \Theta_{B}} &= \Theta_{R} - \Theta_{B} + \eta_{\Theta}(k_i) \label{eq4} \end{align} \noindent In \autoref{eq1}, the probability of voter $i$ choosing party $R$ is the inverse logit function $\Lambda$ of the following components: the perceived proximity of $i$'s ideology $\delta_i$ to party $R$'s ideology $\Delta_R$ relative to party $B$'s ideology $\Delta_B$ \citep{Downs1957anec}; the perceived valence or quality advantage (or disadvantage) of party $R$'s candidate over party $B$'s candidate, $\Theta_{R} - \Theta_{B}$ \citep{Adams2011whca}; utility from group identity $\gamma_i$ such as partisanship and demographic groups \citep{Campbell1964tham,Tajfel1986thso}; and other random disturbances $\epsilon_i$. $b_1, b_2 \geq 0$ are coefficients attached to relative ideological proximity and the valence differential. \autoref{eq2}, \autoref{eq3}, and \autoref{eq4} show that the perception of ideological proximity and valence evaluation consists of the true proximity or evaluation and an error $\eta(k_i)$ (unobservable to $i$) drawn from some distribution $f_\eta(k_i)$. Both the expected value (i.e., bias) and the variance (i.e., instability) of $f_\eta(k_i)$ are shrinking in political knowledge $k_i \in [0,1]$ and converge to $0$ when $k_i=1$. In this model, uninformed voters (i.e., low $k_i$) make different choices than informed voters (i.e., high $k_i$) only because random errors are contaminating their perception of ideology and valence. \par However, there are reasons to expect that uninformed voters rely less on their perception of ideology and valence than informed voters do. Extending the voting model suggested in \cite{Downs1957anec}, \cite{Matsusaka1995exvo} argues that uninformed voters are less certain than informed voters about their perception of electoral preferences. While his primary interest is in turnout decisions, his theory implies that uninformed voters, compared to informed voters, may discount their preferences when making vote choices. Empirical evidence supports this claim. For example, \cite{Dellicarpini1996wham} find that the predictive power of ideology in voting is weaker among less-informed voters. Therefore, the first hypothesis is constructed as follows: \begin{verse} H1: The less informed voters are, the less strongly the perception of ideology and valence explains their voting decisions. \end{verse} \par If uninformed voters do not rely on their individual preferences, are their voting decisions inherently based on identity and random disturbances? The long history of research on social context influence on voting suggests that this is not necessarily the case. Even when voters do not possess sufficient information to form their own preferences with certainty, they may still learn and utilize the distribution of others' partisan preferences in society (called the \textit{partisan environment}). Voters obtain this knowledge from past election results and the preferences of others in their social network.
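\par Before turning to this contextual channel, a short numerical sketch may help make the baseline calculus of \autoref{eq1}--\autoref{eq4} concrete. The following Python snippet is purely illustrative: the parameter values are invented rather than estimated from any data, the perception error is given a zero mean for brevity (so only the instability component of $\eta(k_i)$ is represented), and the term signs follow \autoref{eq1} exactly.
\begin{verbatim}
import numpy as np

def vote_prob_R(delta_i, Delta_R, Delta_B, theta_R_minus_B, gamma_i, k_i,
                b1=1.0, b2=1.0, noise=2.0, rng=np.random.default_rng(0)):
    """Pr(Vote R) for one voter, following eq. (1)-(4); values illustrative."""
    # Perception error: zero mean here, spread shrinking in knowledge k_i.
    eta = lambda: rng.normal(0.0, noise * (1.0 - k_i))
    prox_R = (delta_i - Delta_R + eta()) ** 2   # perceived (distance to R)^2
    prox_B = (delta_i - Delta_B + eta()) ** 2   # perceived (distance to B)^2
    valence = theta_R_minus_B + eta()           # perceived valence differential
    u = b1 * (prox_R - prox_B) + b2 * valence + gamma_i + rng.normal(0.0, 1.0)
    return 1.0 / (1.0 + np.exp(-u))             # inverse logit

print(vote_prob_R(0.5, 1.0, -1.0, 0.3, 0.0, k_i=0.1))  # poorly informed voter
print(vote_prob_R(0.5, 1.0, -1.0, 0.3, 0.0, k_i=1.0))  # fully informed voter
\end{verbatim}
\noindent In this baseline sketch, no contextual information enters the choice; the hypotheses below ask whether the partisan environment fills exactly that role for uninformed voters.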
The second hypothesis states that the differences between informed and uninformed voting originate from the use of this contextual information: \begin{verse} H2: The less informed voters are, the more strongly the partisan environment explains their voting decisions. \end{verse} \noindent Voters may respond to the partisan environment in either of two ways: \textit{bandwagoning} and \textit{balancing}. Both patterns have theoretical and empirical support to validate their underlying logic. The following subsections explain why uninformed voters, and only uninformed voters, have a reason to bandwagon with or balance against the partisan environment. \subsection*{Bandwagoning} \par Bandwagoning refers to the pattern of voting in line with the majority in a partisan environment. This pattern of context-based voting is supported by ample empirical evidence. Early inquiries into local context and voting suggest that voters living under a highly skewed partisan environment, captured by partisan voting patterns in past election results, have a strong tendency to vote in line with the majority party \citep{Miller1956onpo, Putnam1966poat}. The collection of social network studies also finds that the majority preference in one's political discussion network predicts voting decisions \citep{Huckfeldt1987nein, Huckfeldt1995cipo, Huckfeldt2014nobi}. In addition, experimental studies confirm bandwagoning behavior both in the lab \citep{Bischoff2013soin, Morton2015whmo, Tyran2016exev} and in online surveys \citep{Roy2015anex,vanderMeer2016ofth, Dahlgaard2017hoel}. In those experiments, participants are randomly assigned to receive, or not receive, a signal about the majority preference (or the candidate/party winning the election). The results show that receivers of the contextual signal are more likely to act in line with the majority than those who do not receive the signal. \par There are a number of theoretical rationales as to why bandwagoning should occur \citep{Hardmeier2008thef}. While psychological explanations emphasize the natural instinct or ``feels good'' aspect of joining the winner's side, voters may have a logical incentive to use the majority preference as a heuristic to find ``correct'' choices in elections \citep{Lupia1994shve, Lau2001adan}.\footnote{A separate set of theoretical studies focuses on the role of bandwagoning in maximizing impersonal utility \citep{Coate2004grru, Feddersen2006thca, Feddersen2006thof}, but this theory does not speak to the difference between informed and uninformed voters.} It is reasonable to expect that heuristic-based bandwagoning is weaker among informed than among uninformed voters because those with abundant resources to support their own preferences tend to be resistant to the reception of additional signals \citep{Zaller1992thna}. Empirical evidence is consistent with this claim. For example, the mock election experiment conducted by \cite{Huckfeldt2014nobi} shows that the voting decisions of participants who are able to receive more private information about their preferences are less affected by the preference signals obtained through communication with other participants. Similarly, the survey experiment conducted by \cite{Roy2015anex} indicates that the depth of obtained candidate information moderates bandwagoning behavior. \cite{Roy2015anex} design a mock election in which respondents can search for information about candidates.
After the search, a random subset of participants receives the result of a pre-election poll that contains information about the leading candidate in the election. The experimental results show that the bandwagoning effect of the pre-election polling treatment is weaker for those who searched for more information about candidates (i.e., informed voters) than for those who searched for less (i.e., uninformed voters). \par The heuristic explanation of bandwagoning implicitly assumes that uninformed voters share a common preference with other voters in the same environment. In turn, the ideological homogeneity of the society would strengthen the relationship between a partisan environment and bandwagoning behavior. In the context of American presidential elections, ideological preferences are highly heterogeneous at the national level. On the other hand, the patterns of racial and partisan geographic sorting (\citealt{Charles2003thdy,TamCho2013vomi}; but see \citealt{Mummolo2017whpa}) suggest that local distributions of ideological preferences are more homogeneous than the national distribution. Local partisan environments often have a weak connection to winning or losing in the national election, but they can be an effective heuristic for inferring how others with a similar ideology vote. \par In sum, the third hypothesis suggests that bandwagoning occurs primarily in response to the local partisan environment and among uninformed voters: \begin{verse} H3: The more skewed the local partisan environment, the more likely uninformed voters (and not informed voters) are to vote with the majority party in the local partisan environment. \end{verse} \subsection*{Balancing} \par In the context of the American presidential system, scholars frequently discuss balancing in terms of vote switching in mid-term elections \citep{Alesina1995papo}. More generally, scholars understand balancing as the strategy of ideologically moderate voters to prevent policy outcomes from skewing overly toward one direction. While balancing tends to require the cognitively demanding task of understanding complex electoral and policy-making institutions, empirical studies show that many voters do engage in balancing behavior in various contexts \citep{Kedar2005whmo, Kedar2006hovo, Folke2012gumi}. \par Within the scope of the current research, balancing is the pattern of voting against the majority in the partisan environment. The voting model presented in \cite{Feddersen1996thsw} explains why uninformed voters, and not informed voters, have an incentive to conduct such balancing. Their model categorizes voters into partisans, informed independents, and uninformed independents and explains the behavior of each type of voter. Through the equilibrium analysis, they show that informed independents and partisans vote according to their individual preferences, but uninformed independents vote to ``maximize the probability that the informed independent agents determine the winner'' \citep[][p.414]{Feddersen1996thsw}. To achieve this purpose, when they vote, uninformed independents have an incentive to balance out the partisan imbalance in society. This balancing increases the likelihood of informed independents determining the outcome, which favors the interest of uninformed independents. \par To get an intuitive sense of balancing, return to \autoref{eq1}.
$B$ and $R$ partisans (i.e., those with a large difference between $\left(\widehat{\delta_i-\Delta_{R}}\right)^2$ and $\left(\widehat{\delta_i-\Delta_{B}}\right)^2$) have a strong ideological preference toward one of the parties, so they almost always vote for their own party. Here, ideologically moderate voters benefit more from electing the \textit{better} party in terms of common quality (i.e., the $\Theta_{R} - \Theta_{B}$ differential). Informed voters know, while uninformed voters do not, which party has the better quality. The balancing logic suggests that moderate uninformed voters are less likely to vote for the party with the larger number of partisans. This balancing behavior increases the likelihood of a moderate informed voter, whose decision should benefit moderate uninformed voters, being the median pivotal voter in the election. A series of lab experiments with university student subjects provides empirical support for this mechanism \citep{Battaglini2008inag, Battaglini2010thsw}. \par Balancing concerns which type of voter determines the electoral outcome. In American presidential elections, the outcome of interest is at the national level.\footnote{Technically, the American presidential election is a two-step process. The national-level vote share does not always determine the electoral outcome, but it plays an important role in it.} It thus makes sense for uninformed voters to balance their votes against the national rather than the local partisan environment. Therefore, the fourth and last hypothesis of this paper is constructed as follows: \begin{verse} H4: The more skewed the national partisan environment, the more likely uninformed voters (and not informed voters) are to vote against the majority party in the national partisan environment. \end{verse} \subsection*{The Model of Context-based Uninformed Voting} \par If the implications of H1, H2, H3, and H4 hold, \autoref{eq1} can be modified as follows: \begin{align} Pr(\text{Vote R})_i =& \Lambda( k_i \times \left( b_1\left(\widehat{(\delta_i-\Delta_{R})}^2 - \widehat{(\delta_i-\Delta_{B})}^2 \right) + b_2\left(\widehat{\Theta_{R} - \Theta_{B}}\right) \right) \notag \\ &+ (1-k_i) \times \left( b_3\pi_R(Local) - b_4\pi_R(National)\right) + \gamma_i + \epsilon_i ) \label{eq5} \end{align} \noindent The modified part of \autoref{eq5} is expressed as a weighted average of preference perceptions and partisan environments. While the role of ideology and valence perceptions is strengthening in political knowledge $k_i$ (H1), the role of partisan environments is strengthening in the lack of knowledge $1-k_i$ (H2). In the second line, $\pi_R \in [0,1]$ represents the proportion of party $R$ supporters in a given partisan environment and $b_3, b_4 \geq 0$ are coefficients attached to each partisan environment. Then, the probability of an $R$ vote is increasing in the local partisan environment $\pi_R(Local)$ (H3) and decreasing in the national partisan environment $\pi_R(National)$ (H4). \section*{Study 1: Local Partisan Environment and Uninformed Voting} \par This section examines the relationship between the local partisan environment and vote choice in American presidential elections. Specifically, I use the Cooperative Congressional Election Study (CCES) conducted in 2008 and 2016. Each respondent is matched with past presidential election outcomes aggregated at the level of their local environment (i.e., state and county of residence). The election data are obtained from CQ Press Voting and Election Collections.\footnote{\url{http://library.cqpress.com/elections}.
Accessed through subscription at the library of the University of California, Davis.} \par The advantage of using CCES for the analysis of the local partisan environment is that the dataset includes a large number of respondents from different locations across the United States, providing sufficient variation and data points at both the county and state levels. While CCES recruits respondents online, and the sample is thus not nationally representative, validation studies show that most actual election results fall within the 95\% confidence intervals of the weighted estimates from CCES samples \citep{Ansolabehere2011guto, Ansolabehere2017guto}. The following analysis applies the weights provided by the CCES organizers to correct for the potential bias in sampling.\footnote{Since the analysis includes the post-election variable of vote choice, post-election weights are used for CCES2016. General weights are used in CCES2008 because post-election weights are not provided.} \subsection*{Variables} \par The primary dependent variable for this study is the vote choice in presidential elections. To simplify the analysis, I focus on the binary choice between Republican and Democratic candidates and thus drop reported third-candidate choosers and abstainers from the analysis.\footnote{Turnout decisions and third-candidate voting deserve a separate set of discussions, but that is not the central focus of this paper.} To explain the dichotomous vote choice, four sets of predictors are relevant. First, the measure of \textit{political knowledge} is constructed from eight factual test questions in CCES concerning the majority party in legislatures and the party affiliation of incumbent politicians. The correct answers are aggregated to construct the information scale (Cronbach's alpha is 0.84 in 2008 and 0.87 in 2016). The final score is normalized between 0 and 1. \par Second, the measures of the \textit{local partisan environment} are constructed by linking the results of past elections with each respondent in CCES. Past electoral outcomes may not be a direct representation of the partisan composition of society, but evident patterns in past elections give a strong clue as to how partisan preferences are distributed within society. From CQ Press Voting and Election Collections, I collected county and state level outcomes of presidential elections. Using the two-party Republican vote share in each local unit, adapted versions of the \textit{Cook Political Report} Partisan Voter Index (PVI)\footnote{\url{https://www.cookpolitical.com/pvi}} are calculated. The state level PVI uses the following formula for each state $i$ and election cycle $t$: \begin{align*} \textit{(State PVI)}_{t,i} = \{ & (\textit{(State Republican Share)}_{t-1, i} - \textit{(National Republican Share)}_{t-1}) + \\ &(\textit{(State Republican Share)}_{t-2, i} - \textit{(National Republican Share)}_{t-2}) \}/2 \end{align*} \noindent In the above formula, $t$ represents the current election, and $t-1$ and $t-2$ represent the previous two elections. $\textit{(State Republican Share)}_i$ indicates the two-party vote share of the Republican party against the Democratic party in state $i$. Thus, \textit{State PVI} represents the average advantage of the Republican party's vote share over the Democratic party's, relative to the national tendency, in the previous two elections. Two elections are averaged to determine the long-term trend in a partisan environment.
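\par As a purely hypothetical illustration (the vote shares below are invented, not taken from the data), a state with two-party Republican shares of 55\% and 52\% in the two preceding elections, against national two-party Republican shares of 51\% and 50\%, would obtain \begin{align*} \textit{(State PVI)}_{t,i} = \{ (55 - 51) + (52 - 50) \}/2 = +3, \end{align*} \noindent that is, a three-percentage-point Republican lean relative to the national tendency.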
The positive scores indicate the advantage in Republican vote share in percentage points, and the negative scores indicate the advantage in Democratic vote share in percentage points. In addition, county level PVI measures are calculated using the following formula: \begin{align*} \textit{(County}& \textit{PVI)}_{t,i,j} = \\ \{ &(\textit{(County Republican Share)}_{t-1, j} - \textit{(State Republican Share)}_{t-1,i}) + \\ &(\textit{(County Republican Share)}_{t-2, j} - \textit{(State Republican Share)}_{t-2, i}) \}/2 \end{align*} \noindent Notice that the above formula adjusts the PVI for county $j$, not by national-level vote shares but by state-level vote shares. Therefore, the measure captures the partisan deviation of each county from the state average. Consequently, the correlations between state and county or district PVI are very low (ranging from $-0.01$ to $0.14$) and the inclusion of both variables in one model makes sense. % (-0.086 for 08 county, -0.01108314 for 08 district -0.04256889 for 16 county -0.1421199 for 16 district) \par The third set of predictors is \textit{individual preferences}: relative ideological proximity (based on the squared ideological distance from each candidate; ranges from $-36$ to $36$) and the retrospective economic evaluation (ranges from $-2$ to $2$). All variables are scaled so that a higher score indicates a preference toward Republican party candidates (see Online Appendix A for detailed constructs). All preference variables are based on subjective perceptions. In contrast to objective measures that aim to capture the ``true'' preferences of voters, subjective preferences, even when potentially biased, are the ones most likely to influence voting decisions. \par The last sets of predictors are \textit{partisanship} and \textit{demography}, including party identification (ranges from $-3$ to $3$), gender, age, race, income, education, and religiosity. The theory has no specific expectations regarding how the impact of partisanship and demographic variables interacts with political knowledge, but those variables are included in the analysis to control for the baseline tendency in voting behavior. \subsection*{Model} \par The predictive model is intended to identify the differential behavioral patterns of uninformed and informed voters. This paper follows the approach taken by \cite{Bartels1996unvo} and includes the complete set of linear interactions between the predictors and political knowledge. %The final model appears as follows: % \begin{align*} % Pr&(\text{Vote Republican})_i = \Lambda\{\\ % &\alpha + \beta (\text{Political Knowledge})_i + \\ % &\delta_{1-2} (\text{Local Partisan Environments})_i + \gamma_{1-2} (\text{Local Partisan Environments})_i \times \text{Knowledge}_i + \\ % &\delta_{3-5} (\text{Individual Preferences})_i + \gamma_{3-5} (\text{Individual Preferences})_i \times \text{Knowledge}_i + \\ % &\delta_{6-10} (\text{Demographics})_i + \gamma_{6-10} (\text{Demographics})_i \times \text{Knowledge}_i \} % \end{align*} % \noindent Assuming that a linear relationship exists between political knowledge and the impact of each predictor, this method allows estimation of the conditional coefficients of each predictor at different levels of political knowledge. After the estimation, I generate the conditional coefficient of each predictor for fully informed (political knowledge $= 1$) and uninformed (political knowledge $=0$) voters. \par I use logistic regression to estimate the impact of predictors on vote choice (Democrat = 0, Republican = 1).
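\par A minimal sketch of the interacted specification just described, written in Python with the \texttt{statsmodels} formula interface, may clarify how the conditional coefficients are recovered. Everything in the sketch is hypothetical: the variable names, the synthetic data, and the coefficient values are illustrative stand-ins rather than the actual replication code, and the clustering of standard errors and the multiple imputation described next are omitted.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (purely illustrative; not the CCES).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "pvi_state":  rng.normal(0, 10, n),    # state PVI, percentage points
    "pvi_county": rng.normal(0, 10, n),    # county PVI, percentage points
    "ideology":   rng.normal(0, 10, n),    # relative ideological proximity
    "retro_econ": rng.normal(0, 1, n),     # retrospective economic evaluation
    "pid":        rng.integers(-3, 4, n),  # party identification
    "knowledge":  rng.uniform(0, 1, n),    # political knowledge in [0, 1]
})
xb = (0.05 * (1 - df.knowledge) * df.pvi_state  # uninformed react to context
      + 0.10 * df.knowledge * df.ideology       # informed react to ideology
      + 0.50 * df.pid)
df["vote_rep"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

# Vote choice regressed on predictors, each fully interacted with knowledge.
formula = ("vote_rep ~ (pvi_state + pvi_county + ideology + retro_econ + pid)"
           " * knowledge")
fit = smf.logit(formula, data=df).fit(disp=False)

def conditional_odds_ratio(x, k):
    # exp(beta_x + k * beta_{x:knowledge})
    return np.exp(fit.params[x] + k * fit.params[x + ":knowledge"])

print(conditional_odds_ratio("pvi_state", 0))  # uninformed voters
print(conditional_odds_ratio("pvi_state", 1))  # fully informed voters
\end{verbatim}
\noindent The last two lines correspond to the conditional odds ratio $\exp(b_x + k \cdot b_{x \times \text{knowledge}})$ of a predictor $x$ evaluated at political knowledge $k = 0$ (uninformed) and $k = 1$ (fully informed).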
Standard errors are clustered by states and counties to incorporate common behavioral tendencies within those areas.\footnote{The results are robust to the alternative specification using mixed effects logistic regression with state and county random effects. See Online Appendix.} In addition, the multiple imputation procedure \citep{King2001anin} is used to handle the missing responses in the dataset. The presented results are averaged over the analyses of five imputed datasets. \subsection*{Results} \begin{figure}[t!] \caption{The impact of local partisan environments and individual preferences on informed and uninformed presidential vote choice in CCES (2008, 2016)} \label{fig:ccescoefplot} \includegraphics[width=\linewidth]{../outputs/ccescoefplot.png} \end{figure} \par The summary of the presidential vote choice model is presented in \autoref{fig:ccescoefplot}. The figure shows conditional odds ratios for fully informed voters (political knowledge $= 1$) and uninformed voters (political knowledge $= 0$) with 95\% confidence intervals. The figure omits demographic variables and intercepts (see Online Appendix A). The first and second rows of the plot show the impact of the local partisan environment on voting behavior. Consistent with H2, the odds ratios of state and county PVI are larger for uninformed voters than for informed voters. All PVI coefficients for uninformed voters are statistically significant ($p<0.05$), while three out of four PVI coefficients for informed voters are not statistically significant ($p>0.05$). The result also gives consistent support for H3 (bandwagoning): for every 10-percentage-point movement in local PVI toward the Republican party, uninformed voters are 1.5 to 2 times (state PVI) and 1.2 times (county PVI) more likely to vote for the Republican rather than the Democratic candidate. \par For the individual preference variables, the result generally supports H1. The ideology and retrospective economic evaluation variables have significantly larger odds ratios for informed than for uninformed voters. The partisanship variable, on the other hand, has a similar level of power to explain vote choice among informed and uninformed voters. Uninformed voters do not rely on ideological preferences and retrospective evaluations, while they use party identity in a way similar to that of informed voters. \par To gain a deeper understanding of the results regarding the contextual variables, I simulate the predicted probability of the Republican vote through a Monte Carlo method. Since the estimated model includes complex sets of interacted variables, the simulation takes two steps. First, I estimate the predicted probability of vote choice for each respondent in the surveys by manipulating only the values of the local partisan environment (from the 5th to the 95th percentile) and political knowledge ($0$ and $1$). Each prediction is made by drawing 1,000 random coefficient vectors from a multivariate normal distribution whose mean is the point estimate and whose covariance is based on the clustered standard errors. Second, I average the predicted probabilities according to the population weights provided in CCES. This procedure gives an approximation of the average two-party Republican vote probability under the given local partisan environment and knowledge level. \begin{figure}[t!]
\caption{Local partisan environments and average predicted probability of two-party Republican vote in CCES (2008, 2016)} \label{fig:ccespredplot} \includegraphics[width=\linewidth]{../outputs/ccespredplot.png} \end{figure} \par \autoref{fig:ccespredplot} plots the simulation result. The shaded areas represent the 95\% confidence intervals of the predictions. Here, uninformed voters respond strongly to the local partisan environment, while informed voters do not. Especially in 2016, the local partisan environment significantly raises the predicted vote probability of Trump among uninformed voters. Movements from the 5th to the 95th percentile in state PVI and in county PVI each correspond to a 10 to 13\% increase in the Republican vote share. The impact of the local partisan environment in the 2008 election is somewhat weaker, but the predicted Republican vote probability for uninformed voters increases from $41.0$\% to $52.7$\% when the state PVI moves from an 11.7\% Democratic advantage (5th percentile) to an 11.4\% Republican advantage (95th percentile). \par The current study highlights the importance of the local partisan environment in explaining uninformed voting (but not informed voting). Uninformed voters show a clear tendency toward bandwagoning: they vote in line with the majority in the local environment. Informed voters, on the other hand, rely mostly on their individual preferences to make voting decisions and are unresponsive to the local partisan environment. \section*{Study 2: National Partisan Environment and Uninformed Voting} \par This section examines the relationship between the national partisan environment and vote choice in American presidential elections. I use the collection of data from the American National Election Studies (ANES) conducted during the presidential elections that occurred between 1972 and 2016.\footnote{I limit the analysis to respondents interviewed in face-to-face or phone mode. Online survey samples are dropped from the analysis due to comparability issues.} This dataset covers 12 presidential elections over a 44-year period, which provides sufficient variation to assess the impact of the national partisan environment on uninformed voting. As in Study 1, the national partisan environment at each election is captured by the data obtained from CQ Press Voting and Election Collections. \subsection*{Variables} \par The primary dependent variable for this study is the vote choice in presidential elections. As in Study 1, I focus on the binary choice between Republican and Democratic candidates and drop reported third-candidate choosers and abstainers from the analysis. The interviewer's rating of knowledge is used as the measure of \textit{political knowledge}. While this measure does not directly reflect the factual knowledge of respondents, it has the favorable quality of ensuring comparability across surveys over time. The resultant measure is normalized to the range between 0 and 1, following the procedure taken by \cite{Bartels1996unvo}. A robustness check using a knowledge measure based on factual test questions yields similar results (see Online Appendix B). \par The measures of the \textit{national partisan environment} are constructed in a way similar to those for the local partisan environment.
Specifically, the national PVI uses the following formula for each election cycle $t$: \begin{align*} \textit{[National PVI]}_{t} = \{ & \textit{[National Republican Share]}_{t-1} + \\ &\textit{[National Republican Share]}_{t-2} \}/2 \end{align*} \noindent \textit{National PVI} represents the national advantage of the Republican party's vote share over the Democratic party's. The measure averages the two previous elections to adjust for any short-term noise imposed during a single election cycle. \par In addition, \textit{individual preference} predictors (i.e., relative ideological proximity based on the squared distance from each candidate, partisanship, and the retrospective economic evaluation) and \textit{demographic} controls (i.e., gender, age, race, income, education, and religion) are collected from each survey. All variables are scaled in the same way as in Study 1 (see Online Appendix). \subsection*{Model} \par In conducting the analysis, pooling all election years into one dataset over-complicates the model, because the distribution and the predictive power of voter characteristics may differ across elections. To cope with this heterogeneity, I follow a three-step procedure. First, the model of vote choice is constructed and estimated for each election year. As in Study 1 and \cite{Bartels1996unvo}, each model includes the complete set of interactions between political knowledge and all other covariates and is estimated by logistic regression with the population weights provided for each survey. \par In the second step, following the procedure in Study 1, I simulate the weighted average of the predicted probability of the Republican vote for fully informed (political knowledge $=1$) and uninformed voters (political knowledge $=0$). To control for the change in voter characteristics across years, the prediction is made by fixing the voter profile to a specific election year. By fixing the voter profile, the simulation produces a prediction of how informed and uninformed voters would have voted in each election had the distribution of individual preferences and demographic characteristics stayed constant. \par Lastly, the simulated Republican vote probabilities are regressed on the national partisan environment using the Estimated Dependent Variable (EDV) model \citep{Lewis2005esre, Kasara2015whdo}. The EDV model incorporates the measurement uncertainty in the dependent variable (in the current case, the vote probability estimate) when running the OLS regression. This procedure allows both flexible modeling of vote choice in each election and control for the change in the distributions of voter preferences and characteristics over the years. \subsection*{Results} \begin{figure}[t!] \caption{The impact of individual preferences on presidential vote choice in ANES (1972--2016)} \label{fig:anescoefplot} \includegraphics[width=\linewidth]{../outputs/m1sq_anescoefplot.png} \end{figure} \par \autoref{fig:anescoefplot} shows the conditional coefficients of individual preferences extracted from the vote choice models estimated in each election year. The results are consistent with the findings from Study 1, supporting H1 except for partisanship. The left panel indicates that the coefficient of ideological alignment is larger for informed voters than for uninformed voters in 11 of the 12 elections, the exception being the year 2000. Similarly, the coefficient of retrospective economic evaluation (the right panel) is larger for informed voters than for uninformed voters in 10 of the 12 elections, the exceptions being 1980 and 1996.
On the other hand, the results on the partisanship variables (the central panel) are mixed. Informed voters have larger coefficients in only seven elections, while uninformed voters have larger coefficients in the other five. \par Following the described modeling procedure, \autoref{fig:anespredplot} presents the average predicted probabilities of the Republican vote for fully informed (political knowledge $=1$) and uninformed (political knowledge $=0$) voters in each election year based on the 1992 voter profile. Using the voter profile from any other year yields similar results (see Online Appendix). \begin{figure}[t!] \caption{National partisan environment and average predicted probability of two-party Republican vote in ANES (1972--2016)} \label{fig:anespredplot} \includegraphics[width=\linewidth]{../outputs/m1sq_1992_anespredplot.png} \end{figure} \par The left panel of \autoref{fig:anespredplot} implies that the national-level pattern of uninformed voting is a function of two factors: the national partisan environment and the incumbent party. Uninformed voters favor the incumbent party but balance against an overly skewed national partisan environment. The left panel shows that, after controlling for the incumbent party, the stronger the partisan skew toward the Republican party (the higher the national PVI score), the less likely uninformed voters are to vote for Republican candidates. The fitted lines with 95\% confidence intervals from the EDV regression show that the two factors produce an almost perfect fit of the variation in the average uninformed Republican vote probabilities. The right panel indicates that this pattern does not exist for informed voters: the average predicted probability of an informed Republican vote is mostly unresponsive to changes in the incumbent party and the national partisan environment. \section*{Discussion} \par Overall, the analysis suggests that local and national partisan environments play a crucial role in explaining the behavior of uninformed voters. While informed voting is dominated by individual preferences, uninformed voters bandwagon with the local partisan environment (Study 1) and balance against the national partisan environment (Study 2). The result of this study provides consistent evidence that informed and uninformed voters apply different logic in making vote choices. In particular, the evidence of balancing implies that uninformed voters have both an incentive and an ability to vote strategically. \par This study offers clues to answer the question left open in the empirical studies of information effects: \textit{Why} does information make a systematic difference in the outcome of voting choices? The partisan environment is one of the key variables for understanding why informed and uninformed voters act in different ways. Using this result, the scholarly debate on civic competence can move forward: instead of focusing on how illogical and unpredictable uninformed voters are, the conversation can shift to the exploration of how the logic of uninformed voting relates to the quality of democratic outcomes. \par Three caveats remain. First, it is not clear if the findings can be generalized outside of the context of presidential elections under a two-party system. Further exploration of voting patterns under parliamentary and multi-party systems is needed. Second, the measurement of the partisan environment raises an issue of visibility. The objective measure used in this study does not ensure that this information is directly observed by the respondent.
Further research should consider how the knowledge of the partisan environment is communicated to uninformed voters (e.g., through social networks and public opinion polls). Third, this study ignores the participation incentives of uninformed voters. Many studies suggest that political knowledge plays an important role in turnout decisions \citep{Matsusaka1995exvo, Dellicarpini1996wham, Feddersen1996thsw, Lassen2005thef, Larcinese2007dopo, Gemenis2014voad}. To fully grasp the picture of uninformed voting, one needs to take abstention incentives into account. \par Finally, one promising direction in which to extend the current analysis is the exploration of the interactive nature of context-based uninformed voting. Recent developments in game-theoretic and simulation research suggest that, after accounting for the dynamic interaction process among voters or between voters and a policymaker, the lack of voter information does not necessarily harm, and may even improve, the quality of democratic decisions \citep{Ashworth2014isvo, Couzin2011unin}. Exploring the implications of those interactions for context-based uninformed voting would be an important contribution to this line of inquiry. %\clearpage % \theendnotes \singlespacing %\nocite{*} \bibliographystyle{apsr} % \bibliography{C:/GoogleDrive/Reference/list_jabref.bib} %\bibliography{/home/gentok/GoogleDrive/Reference/list_jabref.bib} \bibliography{uninformedchoice.bib} \input{Kato2019loba_v14_appendix.tex} \end{document}
{ "alphanum_fraction": 0.7948696115, "avg_line_length": 170.1992753623, "ext": "tex", "hexsha": "2af3a4df5a1d6b6b397a621731398ff189086a44", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1b0615afa02805fa31e24c604ef5a2db78fd8e40", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gentok/UninformedChoice", "max_forks_repo_path": "papers/Kato2019loba_v14.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1b0615afa02805fa31e24c604ef5a2db78fd8e40", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gentok/UninformedChoice", "max_issues_repo_path": "papers/Kato2019loba_v14.tex", "max_line_length": 1963, "max_stars_count": null, "max_stars_repo_head_hexsha": "1b0615afa02805fa31e24c604ef5a2db78fd8e40", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gentok/UninformedChoice", "max_stars_repo_path": "papers/Kato2019loba_v14.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10115, "size": 46975 }
\documentclass[11pt, titlepage ]{article} \usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry} \usepackage{natbib} \usepackage{titlesec} \usepackage{graphicx} \usepackage{float} \usepackage{titling} \usepackage{hyperref} \usepackage{pgffor} \usepackage{wrapfig} \usepackage{scrextend} \usepackage{listings} \usepackage{listingsutf8} \usepackage{color} \usepackage[final]{pdfpages} \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} \lstset{ backgroundcolor=\color{white}, % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}; should come as last argument basicstyle=\footnotesize, % the size of the fonts that are used for the code breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace breaklines=true, % sets automatic line breaking captionpos=b, % sets the caption-position to bottom commentstyle=\color{mygreen}, % comment style deletekeywords={...}, % if you want to delete keywords from the given language escapeinside={\%*}{*)}, % if you want to add LaTeX within your code extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8 firstnumber=1, % start line enumeration with line 1 frame=single, % adds a frame around the code keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible) keywordstyle=\color{blue}, % keyword style language=Octave, % the language of the code morekeywords={*,...}, % if you want to add more keywords to the set numbers=left, % where to put the line-numbers; possible values are (none, left, right) numbersep=5pt, % how far the line-numbers are from the code numberstyle=\tiny\color{mygray}, % the style that is used for the line-numbers rulecolor=\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here)) showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces' showstringspaces=false, % underline spaces within strings only showtabs=false, % show tabs within strings adding particular underscores stepnumber=1, % the step between two line-numbers.
If it's 1, each line will be numbered stringstyle=\color{mymauve}, % string literal style tabsize=2, % sets default tabsize to 2 spaces title=\lstname % show the filename of files included with \lstinputlisting; also try caption instead of title } \title{ Term Project\\ Cool Running} \author{ Dylan Duan \\ Eric Yoo \\ Andreas Rebsamen \\ Fall-2019-CNIT-35500-001\\ Software Development For Mobile Computers\\ Purdue Polytechnic\\ Professor: Byung-Cheol Min } \date{\today} \bibliographystyle{unsrtnat} \begin{document} \begin{titlepage} \maketitle \end{titlepage} \tableofcontents \clearpage \section{Readme} The formatted Markdown readme is included at the end of the document, in chapter \ref{readmeFormatted}, beginning on page \pageref{readmeFormatted}. \lstinputlisting[language=TeX]{../README.md} \clearpage \section{java} \subsection{Activities} \subsubsection{MainActivity.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/MainActivity.java} \subsubsection{ActivityRunning.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/ActivityRunning.java} \subsubsection{ActivityResults.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/ActivityResults.java} \subsubsection{MapsActivity.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/MapsActivity.java} \clearpage \subsection{Services} \subsubsection{SpeedService.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/SpeedService.java} \subsubsection{SpeedMonitorService.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/SpeedMonitorService.java} \clearpage \subsection{Other classes} \subsubsection{CoolRunningCom.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/CoolRunningCom.java} \subsubsection{GPXGenerator.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/GPXGenerator.java} \subsubsection{SpeedMonitor.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/SpeedMonitor.java} \subsubsection{TargetSpeedUpdater.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/TargetSpeedUpdater.java} \clearpage \subsection{Enums} \subsubsection{RunningError.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/RunningError.java} \subsubsection{RunningMode.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/RunningMode.java} \subsubsection{State.java} \lstinputlisting[language=java]{../CoolRunning/app/src/main/java/ch/arebsame/coolrunning/State.java} \clearpage \section{AndroidManifest.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/AndroidManifest.xml} \section{Gradle} \subsection{build.gradle CoolRunning} \lstinputlisting[language=java]{../CoolRunning/build.gradle} \subsection{build.gradle app} \lstinputlisting[language=java]{../CoolRunning/app/build.gradle} \clearpage \section{res} \subsection{layout} \subsubsection{activity\_main.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/layout/activity_main.xml} \subsubsection{activity\_maps.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/layout/activity_maps.xml} \subsubsection{activity\_running.xml}
\lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/layout/activity_running.xml} \subsubsection{activity\_results.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/layout/activity_results.xml} \clearpage \subsection{values} \subsubsection{colors.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/values/colors.xml} \subsubsection{strings.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/values/strings.xml} \subsubsection{styles.xml} \lstinputlisting[language=xml]{../CoolRunning/app/src/main/res/values/styles.xml} \section{Appendix} \subsection{Readme formatted} \label{readmeFormatted} \includepdf[pages=-,fitpaper=true]{README.pdf} \end{document}
{ "alphanum_fraction": 0.7467513716, "avg_line_length": 41.7228915663, "ext": "tex", "hexsha": "86d4013206ec559109e4004626d4bd5fd69a8e60", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3b50c0e84044165dba55193d45c16a43b5291b21", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Andi-Sail/CoolRunning", "max_forks_repo_path": "doc/CNIT355_CoolRunning_doc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3b50c0e84044165dba55193d45c16a43b5291b21", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Andi-Sail/CoolRunning", "max_issues_repo_path": "doc/CNIT355_CoolRunning_doc.tex", "max_line_length": 150, "max_stars_count": null, "max_stars_repo_head_hexsha": "3b50c0e84044165dba55193d45c16a43b5291b21", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Andi-Sail/CoolRunning", "max_stars_repo_path": "doc/CNIT355_CoolRunning_doc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1780, "size": 6926 }
\chapter{State of the art} \label{chap:sota} \resetallacronyms \begin{shaded} This chapter presents, through a review of the available literature, the different ways to improve heat pump systems. Multistage, oil-free, and variable-speed technologies show promising perspectives in the heat pump application fields and make it possible to build heat pump circuits that are more efficient, more compact, and more silent, use less raw material, and require a lower refrigerant charge. \end{shaded} As explained in \cref{sec:intro-slow-energy}, there are different types of heat pumps. This chapter focuses on electrically-driven vapor compression domestic heat pumps. \section{Types of refrigerant compressors} \subsection{Dynamic versus volumetric} \label{sec:sota-dyn-vs-vol} Refrigeration systems equipped with dynamic compressors are expected to achieve better seasonal performance than those equipped with volumetric compressors. Indeed, soon after the release of the scroll compressor technology, \citet{Purvis-1987a} described the capacity response of scroll-based heat pump systems to the residential heating demand. Heat pumps using volumetric compression devices, like scroll compressors (the other types of volumetric compressors are even more affected by this trend according to \citet{ASHRAE-HVACeq-2008a-Compressor}), are characterized by a decrease of their capacity as the outside temperature decreases, while the residential heating demand evolves in the opposite direction. This behavior explains why the heat pumps used for space or water heating in residences must be designed to cover the heating demand at the lowest outside temperature expected in the geographic area. It implies that a heat pump meant to heat a house without any auxiliary device (like a boiler or an electric resistor that supplements the heat pump, or replaces it totally, when the outside temperature becomes too low) will be significantly over-sized during most of the heating period. As explained by \citet[p.\,1922]{Schiffmann-Favrat-2009a}, variable-speed dynamic compressors do not behave this way and follow the residential demand curve. They are also expected to reach better isentropic efficiencies\footnotep{The isentropic efficiency of a compressor is defined in \cpref{sec:methodo-indicators}.}. Consequently, using dynamic compressors in domestic heat pump circuits would result in a much better use of energy and, potentially, in a maximization of the efficiency, since it becomes possible to choose the exact compressor speed that maximizes the compressor efficiency for the required mass flow rate and pressure ratio. Additionally, the compression unit would not need to be over-sized, which would allow a more rational use of raw materials and a smaller compression unit, thereby increasing the compactness potential. \subsection{Volumetric compressors used in domestic heat pumps} \label{sec:sota-vol-cp} The main volumetric compressor (also called positive-displacement compressor) technologies used in refrigeration circuits are: \begin{description} \item[Reciprocating compressors:] Piston compressors are also known as reciprocating compressors. Linear stroke compressors are also reciprocating compressors. Historically, this compression technology was the first one to power vapor compression domestic heat pumps, from sometime between the two World Wars \citep[p.\,23]{zogg-2008a} until the 1980s, when scroll compressors were introduced.
From that point in time, reciprocating compressors started to be replaced by scroll compressors in the domestic heat pump application. Scroll compressors were cheaper, more reliable, needed less maintenance, and were less noisy. By the 1990s, most vapor compression domestic heat pumps were equipped with scroll compressors instead of reciprocating compressors. Reciprocating compressors have to be lubricated to work properly and avoid failure. With the increased accuracy of manufacturing methods and the development of new designs, linear stroke compressors without lubrication started to be produced. They first targeted the domestic refrigerator application, but are also used in some other applications, like in the study by \citet{Marcinichen-Michel-2014a}. For instance, \citet{Marcinichen-Michel-2014a} used an oil-free 125W \citep[Tab.\,1, p.\,183]{Marcinichen-Michel-2014a} linear stroke mini-compressor capable of modulating its volumetric displacement on a share of the stroke \citep[p.\,183]{Marcinichen-Michel-2014a}, and an oil-free magnetically driven liquid gear pump to perform two-phase chip cooling. The compressor selected in the paper by \citet{Marcinichen-Michel-2014a} would not fit domestic heating applications, as its power is very low. Generally speaking, totally oil-free linear stroke compressors are limited in power range and are usually dedicated to household refrigerator applications, where they greatly contributed to increasing the system performance \citep{Bansal-Abdelaziz-2011a}. However, \citet[p.\,186]{Marcinichen-Michel-2014a} consider the efficiency of such a compressor to still be low compared with conventional domestic heating heat pump compressors. \item[Scroll compressors:]\label{sec:sota-scroll}A scroll compressor consists of an involute profile, mounted on a rotor, that rolls onto another involute profile, which is usually fixed and slightly offset. The involutes are drawn in a way that reduces the volume of the compression space further and further during the rotation of the shaft. The geometry of the scroll compressor was invented by \citet{creux-1905a} at the beginning of the 20\th{} century and has not really changed ever since. The involute profile is described mathematically by two Archimedean spirals with the same generating circle, separated by a constant offset. They can also be generated from hybrid curves. The scroll compressor was not commercialized until the early 1980s, as it had to wait for effective high-accuracy, high-volume manufacturing techniques to be developed, because of the complexity of the shapes involved \citep[p.\,16]{zogg-2008a}. Between the 1980s and 1995, the performance of scroll compressors was optimized and increased; then, after the introduction of the last generation of scroll compressors between 1992 and 1995, the increase in performance stopped. The data collected by \citet{Eschmann-2009a}\footnotep{\citet{Eschmann-2009a} performed a monitoring study on Brine/Water and Air/Water domestic heat pumps whose performance had been measured at the Swiss heat pump certification center between 1992 and 2007.} and summarized in his 2008 report highlight the stagnation of heat pump performance since the introduction of this last generation of scroll compressors. Nowadays, the development of this technology takes new paths, like the one explored by \citet{Iglesias-favrat-2014a}.
They presented a prototype of an oil-free scroll air compressor with two mobile involutes working in synchronized co-rotation relative to one another. The prototype can also work as a turbine, as the compressor is reversible. This concept could theoretically be applied to refrigeration compression, even if the technical challenges are significant. For instance, injection of liquid refrigerant into the volutes during the compression could also be done. \citet{Zehnder-2004a} documented this kind of humid vapor injection in his work \citep{zehnder-favrat-2010a}, but in his case one of the involutes was fixed, as he was working with an orbital scroll, which made the process easier. Using co-rotating scrolls greatly limits the loads on the bearings and results in a balanced arrangement \citep[Fig.\,2 p.\,567]{Iglesias-favrat-2014a}, which makes it an interesting concept for oil-free applications. Nonetheless, it seems that the development of scroll compressors has reached technological limits and that their performance will not increase significantly in the current state of the technology.
\item[Rotary vane compressors:]Rotary vane compressors are made of a rotor with blades inserted in radial slots in the rotor. When the rotor turns, the blades slide in and out of the slots, keeping contact with the outer wall of the compressor housing. As the rotor is not at the center of the housing, the gas is compressed by the reduction of the volume between the blades. These compression devices have recently been introduced in the domestic heat pump sector as a second compression stage on top of a scroll compressor \citep{Kondo-Kimata-2010a,Mitsubishi-2011a,Sato-Kobayashi-2012a}. A rotary vane compressor can be multistage and is a lot quieter than the reciprocating technology, at equivalent power.
\end{description}
\subsection{Dynamic compressors used in domestic heat pumps}
\label{sec:sota-dyn-cp}
No dynamic compressor technology is currently used in the domestic heat pump sector, but one is on its way thanks to the developments made since the beginning of the $21^{\text{st}}$ century, notably the work of \citet{schiffmann-2008a}.
\paragraph{Radial compressors}
The first radial compressors were manufactured at the beginning of the $20^{\text{th}}$ century. They were originally developed by steam turbine manufacturers and were widely used for ventilation purposes in deep mining. At that time, the possibilities of producing an impeller were rather limited by the manufacturing technology available. Decades later, manufacturing technologies had evolved and started to allow the production of highly efficient radial compressors. Carrier was the first to work seriously on radial compressors for refrigeration applications, from 1911 (at the time, for air conditioning). Around 1919, he first tried a German radial compressor with di-chloroethylene, and then a compressor made by Eastman Kodak in the United States with dichloromethane \citep[p.\,16]{zogg-2008a}. Radial compressors have been used in industrial refrigeration circuits from the beginning of the 20\th{} century to the present day, and they can now be used in domestic heat pump applications thanks to the recent development of the manufacturing processes and of small-scale gas bearing sets specially designed for this application, with an integrated-design approach \citep{schiffmann-2008a}.
\subsection{About the compression units used in this thesis work}
\label{sec:sota-jurg-cp}
In 2009, \citet{Schiffmann-Favrat-2009a} presented very encouraging experimental results from the testing of a single-stage compression unit. They also presented promising preliminary simulation results for a twin-stage heat pump based on the radial compressor unit that \citet{schiffmann-2008a} was developing. That compression unit and its successors, the twin-stage compression units used in this thesis work, are at the cutting edge of what is technically achievable nowadays, as shown in \cref{fig:zwyssig}. Those compression units are several times lighter and smaller than their scroll-technology equivalents, as illustrated in \cref{fig:cp-unit-volume-comparison}, and are expected to demonstrate great potential in many processes where gas compression is needed. A similar design has been used by \citet{demierre-wegele-2014a} with a compressor-turbine unit, using a radial compressor and a radial turbine mounted at the two ends of the same shaft\footnotep{Details about this specific design can be found in the thesis work of \citet{demierre-2012a}.}. The twin-stage compression units powering the \AWP{} and the \BWP{} were developed between 2002, when the feasibility study of the twin-stage unit was presented by \citet{schiffmann-godat-2002a}, and 2012, when the first compression unit prototype was able to power a heat pump cycle in an experimental setup. The results of those first experiments are presented in \cpref{chap:awp}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{zwyssig-round-2009a-fig1p565-augmented}
\caption[Trends for high speed electrical drives and turbomachineries.] {Emerging application areas and trends for high speed electrical drives and turbomachineries, based on the work of \citet[Fig.\,1, p.\,565]{zwyssig-round-2009a}.}
\label{fig:zwyssig}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[Compression unit beside a 1.5-liter water bottle] {\label{fig:cp-unit-1L}\includegraphics[width=0.45\linewidth]{20121219T094101-00066bis}}
\hspace{1em}
\subfloat[The compression unit is equivalent to 2 scroll compressors] {\label{fig:cp-unit-scrolls}\includegraphics[width=0.45\linewidth]{20121218T525614-0049+0050}}
\caption[Volume of the twin-stage compression unit]{The volume and the weight of the twin-stage compression unit are several times lower than those of a single-stage scroll compressor. The 6 kW twin-stage compression unit is roughly equivalent to 2 single-stage 3 kW scroll compressors.}
\label{fig:cp-unit-volume-comparison}
\end{figure}
\section{Noise reduction in refrigeration circuits}
Being very accurately balanced, high-rotation-speed devices, the compression unit developed by \citet{schiffmann-2008a} and its successors produce no vibrations and can be made very silent. This is in contrast with typical scroll compressors, which produce a lot of vibrations and noise (about 65 dBA on average, according to \citet{ARI-270-94} compliant measurements at the nominal compressor speed of 50 Hz \citep[Fig.\,43, p.\,37.26]{ASHRAE-HVACeq-2008a-Compressor}), spread widely over the audible sound spectrum. Such noise is difficult to insulate against, contrary to that of the radial compression units, which produce high-frequency noise due to their high rotational speed and the frictionless gas bearing technology.
Since those high frequencies are easy to attenuate with basic sound insulation, radial compression units rotating on gas bearings can be made very silent. Furthermore, regular Air/Water domestic heat pumps available in the \citet{Eurovent-2010a} database release between 53 and 83 dBA (with fans), according to ISO\,9614 \citep{EN-ISO-9614-1} and ISO\,3744 \citep{ISO-3744-2010a} compliant measurements. Consequently, compressor noise constitutes a significant part of the heat pump noise. Thus, switching to radial compression units rotating on gas bearings, together with silent fans, opens the way to very quiet heating machinery.
\section{Variable-speed in refrigeration circuits}
Variable-speed capacity control has been proven to increase heat pump efficiency \citep{Karlsson-2003a,Karlsson-Fahlen-2008a}. There are different ways to obtain this capacity control, which is performed by reducing the compressor capacity. In domestic heat pumps, most of the compressors used are scroll compressors. These devices currently use one of the three technologies detailed below to control their capacity.
\paragraph{Variable displacement based capacity control:} This mechanism is dedicated to scroll compressors and consists of ports incorporated in the fixed scroll. The control consists in connecting the compression chamber to the suction side or not, by opening or closing the ports respectively. When the ports are all closed, the compressor runs at its full capacity. To provide only a share of the full capacity, some ports are opened. The number of different capacities and the extent of the capacity reduction available are governed by the locations of the ports.
\paragraph{Pulse Width Modulation based capacity control:} This mechanism is also dedicated only to scroll compressors and consists in a device that modulates the axial pressure maintaining the sealing contact between the scroll tips and their base. The control is done by cycling the loading and unloading of the fixed scroll without changing the motor speed. The cycle is controlled by an electrical device which adapts the loading and unloading phases to make the compressor deliver the exact capacity required.
\paragraph{Variable speed based capacity control:} The compressor is driven by an inverter that converts the 50 Hz fixed-frequency alternating current coming from the power network into an adjustable voltage and frequency signal. This signal is then used to control the speed of the motor, which is correlated with the mass flow rate of the refrigerant through an equation or a compression map. This capacity control strategy is used on some scroll compressors and is the solution selected for the control of the radial compression units used in this thesis work.
\section{Multistage refrigeration circuits}
\label{sec:sota-multistage}
Two-stage compression cycles have been proven to reach higher performance than single-stage cycles in various studies \citep{Favrat-Courtin-1997a,Zehnder-2004a}. Moreover, this statement is particularly true for high temperature differences between the hot source and the cold source. \citet{Zehnder-2004a} presented several twin-stage heat-pump configurations and tested some of them \citep{Zehnder-Favrat-1998a,Zehnder-Perevozchikow-2002a}. The most promising cycles were the following:
\begin{description}
\item[Solution \#1:] Addition of a separate single-stage heat-pump cycle to a main single-stage heat-pump cycle.
The main cycle is dedicated to the heating of the house, while the additional cycle uses the subcooled liquid at the outlet of the main condenser as a cold source to heat tap water.
\item[Solution \#2:] Superposition of two single-stage heat-pump cycles coupled through a shared heat exchanger acting as the condenser for the bottoming cycle and as the evaporator for the topping cycle.
\item[Solution \#3:] A single-stage cycle using a single-stage compressor with intermediate vapor-injection.
\begin{description}
\item[\#3.1:] The vapor injected during the compression process is produced by the expansion of subcooled liquid removed at the outlet of the condenser. Before being injected, the wet vapor is heated up by going through an intermediate heat exchanger, exchanging heat between the main subcooled liquid line and the wet vapor previously expanded \citep{Zehnder-Favrat-2002a,Beeton-Pham-2003a}. When the vapor exchanges heat in the intermediate heat exchanger, its vapor quality increases. Often, the vapor injected into the compressor is just saturated. Wet vapor injection is only needed if the outlet temperatures become too high.
\item[\#3.2:] The subcooled liquid coming from the condenser is expanded to an intermediate pressure and enters a flash tank where vapor and liquid are separated. The liquid is expanded further and enters the evaporator, while the vapor is injected into the compressor during the compression process.
\end{description}
\item[Solution \#4:] A twin-stage heat-pump using a twin-stage compressor and an intermediate heat exchanger or a flash tank in order to inject vapor between the two compression stages, as in the two versions of the previous solution.
\end{description}
\citet{Schiffmann-Favrat-2005a} analyzed those different concepts in order to design a domestic, high temperature lift, air-water heat pump. They summed up those concepts in a later article \citep{Schiffmann-Favrat-2009a} and concluded notably that solution \#4, with a flash tank acting as an economizer, is the most interesting one when taking into account the characteristics and limitations of radial compressors. Moreover, they pointed out that this solution is an elegant one in terms of number of components and control, and a promising one in terms of \COP{} \citep{schiffmann-2008a,Favrat-Courtin-1997a}. They also indicated that, with this cycle configuration, inverting the cycle in order to defrost the evaporator could allow the economizer to be used as an internal energy source. Since the final goal is the development of a twin-stage oil-free air-water heat-pump using a twin-stage radial compressor rotating on gas bearings, configuration \#4 is favored. This configuration is referred to as a twin-stage compression cycle with flash cooling, as it has been named by the ASHRAE \citep[Fig.\,49, p.\,37.29]{ASHRAE-HVACeq-2008a-Compressor}.
\section{Lubrication in refrigeration circuits}
\citet{YoubiIdrissi-Bonjour-2008a} highlight that, currently, almost every refrigeration vapor compression system needs a lubrication agent, which is generally a mineral or synthetic lubricant oil, depending on the refrigerant used in the system. The functions of the oil are (1) to protect the moving mechanical elements against wear with a thin lubricant film, (2) to act as a sealing element, (3) to limit the noise made by the mechanisms, (4) to help evacuate chemical impurities or deposits which may be present in the circuits, and (5) in many systems, to act as a heat transfer medium for cooling the compressor.
Those favorable or even vital functions make clear that oil in refrigeration compression systems is generally compulsory and useful. However, oil also brings severe drawbacks to the refrigeration circuits. Most of the time, it reduces the heat transfer coefficient in heat exchangers, it changes the flow configurations, it increases the pressure drops, and it modifies the thermodynamic equilibrium and the thermodynamic properties of the refrigerant \citep{YoubiIdrissi-Bonjour-2008a}.
\subsubsection{Migration of the oil inside the heat pump loops}
\label{sec:migration}
\citet{Zehnder-2004a} studied the migration of the oil in a twin-stage heat pump loop. He concluded that the oil migrates from the topping-stage compressor to the bottoming-stage compressor and could not identify a stable situation where this statement was false. Without a lubricant oil recovery circuit, the topping compressor, a scroll compressor, runs dry and is doomed to failure. Moreover, \citet{Navarro-Corberan-2005a} studied the oil circulation ratio and its return to the compressor with R290/\POE{} and R407C/\POE{} on a reciprocating compressor installation, and compared their results with mineral oil experiments. They found that the oil returns more easily to the compressor if it is a \POE{} than if it is a mineral oil. They also concluded that there is no difference in behavior, from the oil point of view, between R290 and R407C, but this conclusion was deduced from a single-stage heat pump loop. \citet{Winandy-Cuevas-2003a} studied the oil level in two scroll compressors in parallel in a refrigeration installation. They demonstrated that the oil did not return equally to the two compressors, especially when working under part load. They also linked the two compressor housings with a straight pipe, welded at the normal height of the oil levels, allowing an oil-level adjustment between the two compressors, and concluded that the oil migration phenomenon has to be taken seriously, even more so when running under part load. Unfortunately, this solution is not applicable to twin-stage heat pumps based on 2 scroll compressors, since the pressure level is not the same for the two compression devices, in contrast with the two parallel scroll compressors presented by \citet{Winandy-Cuevas-2003a}. The direct consequence of those studies is that it should be a lot easier to build multistage heat pump devices if the return of the lubricant to the compressors does not have to be considered. Such heat pumps are also likely to be more reliable, as they will not fail because of lubrication issues. Additionally, using oil-free circuits frees the design from the rules governing circuits with oil. For instance, the pipe diameters can be increased in the suction lines in order to decrease the pressure drop on the vapor line, which generates exergy losses\footnotep{Exergy and exergy efficiency are defined in \cpref{sec:methodo-indicators}.}. Those pipe diameters are limited in circuits with oil because the oil has to be recovered in the compressor housing \citep{kesim-ileri-2000a,Guo-Shen-2011a}. This problem is even more acute with the variable flow capacities linked to variable-speed compressors.
\subsection{Impact of the lubricant on the expansion process}
\label{sec:oil-dv}
\subsubsection{Electronic expansion valves}
\citet{Liang-Zhijiu-2009a} established models of electronic expansion valves for R22, R407C, and R410a, based on the Bernoulli equation, which give accurate results.
The models they propose differ from some conventional models using the two-phase outlet pressure and a corrected flow coefficient, since they consider the metastable phenomena caused by rapid depressurization and employ the throat pressure of the electronic expansion valves and the single-phase incompressible flow coefficient. \citet{Park-Kim-2007a} used a different approach, with a model based on a set of parameters and variables including the valve geometry, its inlet and outlet conditions, and the refrigerant thermodynamic properties to describe the valve behavior. Both of those studies deal with pure refrigerant or neglect the presence of oil. For now, the influence of oil does not seem to be documented, or is considered negligible, but this could be a problem for oil-free circuits.
\subsubsection{Capillary tubes}
There are two groups of studies dealing with the effect of oil in capillary tubes. The first deals with oil-rich mixtures, typically with more than 5\% oil by weight, while the second treats the oil as a contaminant, present in quantities lower than 5\% by weight. Studies where large quantities of oil are mixed with the refrigerant are quite rare, but they are useful for understanding and interpreting the foaming phenomenon occurring in compressors, as the oil concentration is the highest in those devices. For instance, \citet{Poiate-Gasche-2006a} studied the foaming phenomenon inside small tubes. The second case is more usual and several studies have been published. Most of them aim to improve the knowledge available on the phenomena which occur in capillary tubes, like two-phase flow pressure drop or metastable flow\footnotep{A metastable flow remains liquid over a distance longer than the one predicted by conventional pressure drop models.}. \citet{Motta-Braga-2002a} performed visual experiments to determine the position of the vaporisation point of an R404a-oil mixture inside a capillary tube and quantified the effect of a given percentage of oil on the capillary tube behavior. Some of the studies observed a reduction in the mass flow rate with an increase of the oil concentration \citep{Motta-Parise-2001a,Fukuta-Ogi-2003a}.
\subsection{Impact of the lubricant on the evaporation process}
\label{sec:oil-ev}
The evaporation process has been known for decades to be the process of the heat pump cycle that is the most sensitive to the presence of compressor lubrication oil in the refrigerant. \citet{McMullan-Murphy-1992a} have shown that the viscosity of the lubrication oil has a negative effect on the evaporator performance for a fully miscible oil-refrigerant mixture. For shell-and-tube evaporators, they concluded that the addition of oil produces a change in the refrigerant two-phase flow regimes and a decrease of the overall evaporator performance. Previous studies \citep{McMullan-Hughes-1988b,McMullan-Hughes-1988a,Hughes-Morgan-1984a,Hughes-Morgan-1982a,Hughes-Sutcliffe-1980a,Hughes-Morgan-1984b} observed the influence of the lubricant oil on the heat pump performance and concluded that the presence of oil in the evaporator was responsible for a significant decrease in performance. They also concluded that the accumulation of oil at the end of the evaporator (the refrigerant vaporizes, while the oil just flows) contributes significantly to the observed decrease of the heat transfer coefficient.
They estimated that, in principle, an evaporator working with no oil at all would allow an increase of the overall evaporator heat exchange coefficient of about 40\% \citep[p.\,123]{McMullan-Morgan-1983a}. Indeed, the more refrigerant evaporates, the more the remaining liquid refrigerant is enriched in oil, and the more difficult it becomes for it to evaporate. The potential improvement of 40\% is certainly quite optimistic, but the fact remains that improvements of the heat exchange coefficient are observed as the oil mass fraction decreases. If oil cannot be removed, \citet{McMullan-Morgan-1983a} added that, from a heat exchange point of view, a low-viscosity oil leads to better results for low amounts of oil in the refrigerant, while a high-viscosity oil would be the better choice for larger amounts of oil. They also observed that the presence of oil increases the pressure drop in the evaporator. \citet{Spindler-Hahne-2009a} studied the influence of oil on nucleate pool boiling heat transfer with enhanced surface tubes. They concluded that, except under very specific conditions, oil always decreases the heat transfer coefficient \citep[Fig.\,19 p.\,990]{Spindler-Hahne-2009a}; see also \citet{Moller-1998a}. The more oil there is, the less efficient the evaporation is. \citet{Nidegger-Thome-1997a} and \citet{Zurcher-Favrat-1998a} studied the in-tube flow boiling of R-407C and R-407C-oil mixtures \citep{Zurcher-Favrat-1998a,Zurcher-Favrat-1998b}, and of R134a and R134a-oil mixtures \citep{Zurcher-Favrat-1997a,Nidegger-Thome-1997a}, both on plain tubes and on microfin tubes, and confirmed the observations of \citet{McMullan-Morgan-1983a}: with increasing oil concentration, the heat transfer coefficient drops. Some authors, like \citet{Cawte-Poland-1996a}, observed, on the contrary, an increase of the heat transfer when the oil concentration lies within a certain range (2 to 10\% in the case of the study published by \citet{Cawte-Poland-1996a}). \citet{BandarraFilho-Thome-2009a} reviewed a great number of papers involving refrigerant-oil mixtures under different test conditions and found that, unfortunately, they often present conflicting results. Even so, it is still possible to affirm that some thermodynamic properties of refrigerant/oil mixtures, such as density, viscosity, surface tension, and miscibility, can specifically modify the heat transfer and the pressure drop, and thus directly affect the \COP{} of the system \citep[p.\,186]{BandarraFilho-Thome-2009a}. \citet{Cawte-1992a} performed similar heat transfer studies on condensation processes with refrigerant/oil mixtures and observed a large non-linear decrease of the heat transfer coefficient with increasing oil concentration. However, he concluded that this change of heat transfer coefficient has little impact on the overall heat pump performance. Of course, it is important to consider that most of those studies were made on plain tubes or conventional surfaces.
\subsection{Oil-free heat exchange technologies}
\label{sec:sota-oilfree-hx}
Using oil-free compression devices opens the way to the use of existing or yet-to-be-developed heat exchangers which are more efficient or which only become usable with oil-free compression technologies.
Those heat exchangers include micro-channel heat exchangers, direct ground evaporators, and heat exchangers using enhanced surfaces, like enhanced-tube heat exchangers \citep{Ribatski-Jacobi-2005a,Habert-2009a,vanrooyen-2011a} or enhanced-plate heat exchangers \citep{Furberg-2006a}, which would result in a reduction of the heat exchange surfaces and, consequently, in a reduction of the pressure drops inside the circuits and of the overall heat pump size. Furthermore, some of those heat exchange technologies open the way to heat exchangers with reduced temperature pinches between the refrigerant and the heat source, and would contribute to decreasing the exergy losses coming from the heat exchanges. \citet{Furberg-2006a} and \citet{Li-2008a} developed plate heat exchangers with enhanced, micro-patterned surfaces \citep{Furberg-Muhammed-2009a}. Enhanced surfaces with micro-patterns get filled with oil if used with refrigerant-oil mixtures \citep[Fig.\,10\,\&\,11 p.\,985--986]{Spindler-Hahne-2009a} and thereby become less efficient. Consequently, the enhanced plate heat exchangers developed by \citet{Furberg-Muhammed-2009a} mainly target oil-free applications. Oil-free heat exchange technologies have been under heavy development for the last 20 years \citep[Fig.\,1 p.\,186]{BandarraFilho-Thome-2009a}, as new market applications emerge. Their application in oil-free compact domestic heat pumps promises to increase even further the potential of the oil-free compression technologies.
\subsection{Impact of the lubricant oil on the compression process}
\label{sec:oil-cp}
One of the main effects of oil in the compression process is the foaming phenomenon, which is due to the interactions between the oil and the refrigerant induced by the blade rotation or by the vapor blow. The foaming phenomenon has been studied in a hermetic casing simulating a hermetic rotary compressor by \citet{Yanagisawa-Fukuta-1991a}. They observed that the foaming increases and becomes massive for a high compressor blade speed combined with a high mass flow rate. Another effect of the oil on the compression process is the modification of the compressor performance. Indeed, because of the mutual solubility of the oil and the refrigerant\footnotep{The solubility is proven to increase with pressure \citep{Wahlstrom-Vamling-1997a}.}, the refrigerant-oil mixture enthalpy may be substantially different from the pure refrigerant enthalpy \citep[Fig.\,2--4, p.\,288--289]{YoubiIdrissi-Meunier-2003a}. As a consequence, the energy balance performed on the compressor may be inaccurate and, as shown by the diagrams of \citet[Fig.\,2--4, p.\,288--289]{YoubiIdrissi-Meunier-2003a}, it leads to wrong estimations of the refrigerant mass flow rate. Indeed, considering pure refrigerant instead of the oil-refrigerant mixture that really flows out of the compressor leads to errors in the enthalpies at the inlet and the outlet of the compressor, which are reflected in the energy balance, and finally in the mass flow rates. As some refrigerant remains dissolved in the oil, some liquid desorbs from the oil during the compression process, inducing a wet compression process. This phenomenon may considerably affect the compressor isentropic efficiency while it has no effect on the volumetric efficiency \citep{Wang-Dickson-2006a}.
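To make the propagation of this error explicit, the refrigerant mass flow rate is often inferred from a steady-state energy balance on the compressor; the relation below is a simplified sketch (the notation and the heat-loss term are illustrative and not taken from the cited works):
\[
\dot{m} = \frac{\dot{W}_{\mathrm{el}} - \dot{Q}_{\mathrm{loss}}}{h_{\mathrm{out}} - h_{\mathrm{in}}} .
\]
If $h_{\mathrm{in}}$ and $h_{\mathrm{out}}$ are evaluated with pure-refrigerant properties while the fluid is actually an oil-refrigerant mixture, the enthalpy difference in the denominator is biased, and the inferred mass flow rate is biased accordingly.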
\section{Refrigerant charge reduction in refrigeration circuits}
Because of the environmental impact of refrigerants, the European regulations concerning refrigerating systems have become more and more severe and impose increasing constraints on the refrigerant charge of the installations. As a consequence, many studies aiming at minimizing the charge in refrigeration circuits have been carried out. Studying the behavior of the refrigerant charge in refrigeration circuits and components aims at understanding it and reducing the charge to its minimal amount. \citet{Poggi-Bontemps-2008a} made a review of the studies aiming at reducing the refrigerant charge. They conclude that the optimal charge for each installation can be determined and that a reduction of the overall charge can be achieved by reducing the internal volume of exchangers, receivers, and liquid lines. In particular, exchangers with a small internal volume should be used; compact exchangers (for instance based on the small channel technology) allow a considerable benefit without performance decline \citep[p.\,367]{Poggi-Bontemps-2008a}. \citet{Poggi-Bontemps-2008a} also explain that the use of electronic expansion valves allows the charge to be decreased. The use of secondary circuits, when possible, also helps. \citet{Palm-2007a} had previously arrived at the same conclusions as \citet{Poggi-Bontemps-2008a}, but also added that, in indirect systems, the amount of refrigerant dissolved in the compressor oil may be comparable to the amount in the (compact) heat exchangers. A possible solution to reduce this amount is consequently to use compressors with less oil. \citet{Palm-2007a} also suggested that, instead of a high pressure receiver and a thermostatic expansion valve, which is a common heat pump circuit design, a capillary tube may be used in combination with a minimal low pressure receiver. This statement partially goes against the proposal of \citet{Poggi-Bontemps-2008a}, who suggest that using an electronic expansion valve, more sophisticated than a thermostatic valve, would help; \citet{Palm-2007a}, on the contrary, suggested using a capillary tube instead. Both approaches may give good results, as they use the whole set of circuit components and the circuit topology to handle the charge behavior. Obviously, it would be interesting to test them out within the same experimental setup. This is the kind of test that the \BWP{} has been designed for: testing different layouts, topologies, and components in a domestic heat pump prototype. The specifications of the \BWP{} and its design are detailed in \cpref{chap:bwp} and \cpref{chap:bwp-components}.
\subsection{Importance of the control strategy in heat pumps with low refrigerant charge}
\label{sec:sota-control}
Increasing the compactness and decreasing the refrigerant charge require a better control of the thermodynamic cycle, in order to prevent system failures and to reach the best performance. While several studies show that an optimized control of refrigeration systems allows a significant amount of energy to be saved \citep{jakobsen-rasmussen-1998a,abdelghaniidrissi-richalet-2001a,yao-zhou-2004a,leducq-trystram-2006a}, \citet{Fallahsohi-LinShi-2010a} demonstrate the importance of dynamic modeling in the optimization of the control strategies of thermodynamic systems, as the transient phases are of great importance, especially if low superheat values are favored \citep{TamainotTelto-1993a,TamainotTelto-Lallemand-1996a,lin-yeh-2007a,nanayakkara-uehara-2002a}.
\section{Defrosting strategies}
\label{sec:defrosting-art}
Defrosting strategies are needed in the case of Air/Water or Air/Air heat pump circuits. Indeed, when refrigerant colder than 0°C goes through the evaporator coil, the water contained in the air freezes and accumulates on the coil. The ice blocks the air flow and acts as an insulator, decreasing the coil performance \citep[p.\,169]{dincer-kanoglu-2010a}. Consequently, to maintain appropriate performance, the coil needs to be defrosted periodically. \citet[p.\,4]{bertsch-hubacher-2002a} state that the investigation of alternative defrosting strategies and of the effects of natural defrosting, in addition to hot gas and reversed-cycle strategies, shows that there is a large potential for improving the defrosting of evaporators. Many defrosting strategies are available:
\begin{itemize}
\item Cycle defrost using a 4-way valve. This technique is commonly used in domestic heat pump devices.
\item Electric heater rods inserted into formed holes through the aluminum fins (a common solution in small commercial systems that are not reversible).
\item If the evaporator can be insulated from the cold air (in a ducted system, for example), the ice can be melted by warm air coming from the house itself.
\item Running hot water over the coil. In that case, a careful design of the water lines around the evaporator is needed to avoid freezing of the water used for the defrosting \citep[p.\,169]{dincer-kanoglu-2010a}. This technique is usually reserved for large systems.
\item Hot gas from the compressor discharge. This technique is common in large systems, like the hot water solution.
\end{itemize}
The heating capacity of Air/Water and Air/Air heat pumps decreases when frost forms on the evaporator surfaces in humid climates.
\FloatBarrier
\bibliographystyle{plainnat}
\bibliography{main}
\label{sec:art-refs}
\section*{Credits}
\label{sec:art-credits}
\addcontentsline{toc}{section}{Credits}
\phantomsection
\begin{description}
\item[\figref{fig:zwyssig}] \ccbyjb{2013}. This figure is based on an original figure from \citet[Fig. 1, p. 565]{zwyssig-round-2009a}.
\end{description}
{ "alphanum_fraction": 0.8076312395, "avg_line_length": 55.4442916094, "ext": "tex", "hexsha": "4fbf59f6f4ebcf3f537a978efdc0b189a9e3ccf5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d3cea3a238c7cccc309400b5686b6ef8ad1d72af", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "speredenn/epfl-leni-oilfree-radial-cp-hp", "max_forks_repo_path": "tex/sota.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d3cea3a238c7cccc309400b5686b6ef8ad1d72af", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "speredenn/epfl-leni-oilfree-radial-cp-hp", "max_issues_repo_path": "tex/sota.tex", "max_line_length": 135, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d3cea3a238c7cccc309400b5686b6ef8ad1d72af", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "speredenn/epfl-leni-oilfree-radial-cp-hp", "max_stars_repo_path": "tex/sota.tex", "max_stars_repo_stars_event_max_datetime": "2017-09-01T13:30:55.000Z", "max_stars_repo_stars_event_min_datetime": "2017-09-01T13:30:55.000Z", "num_tokens": 9863, "size": 40308 }
\section{Quotation Cont}
\subsection{Quotation module}
\begin{ocamlcode}
type 'a expand_fun = Loc.t -> string option -> string -> 'a
(** the second argument is the location name; for example, with
    <:lam@_loc< >>, camlp4 will pass Some "_loc" as the second argument *)
\end{ocamlcode}
In the old camlp4, Quotation provided a string-to-string transformation, and by default it used \verb|Syntax.expr| or \verb|Syntax.patt| to parse the returned string. This approach has the following drawbacks:
\begin{itemize}
\item it needs one \textbf{more} parsing phase
\item the resulting string may be syntactically incorrect, which is difficult to \textbf{debug}
\end{itemize}
Without antiquotations, a parser is enough; the other steps are quite mechanical.

A comprehensive example: suppose we have already defined an AST and written the parser and the meta part (\ref{transform}). The parser part is simple, as follows:
\inputminted[fontsize=\scriptsize,firstline=41,lastline=62]{ocaml}{camlp4/code/jake/json.ml}
Now we do a mechanical installation to get a quotation expander. All we need is as follows:
\inputminted[fontsize=\scriptsize,firstline=63,lastline=83]{ocaml}{camlp4/code/jake/json.ml}
You could also refactor your code as follows:
\inputminted[fontsize=\scriptsize,firstline=84,lastline=96]{ocaml}{camlp4/code/jake/json.ml}
\section{Antiquotation Expander}
The meta filter treats any other constructor \textbf{ending in Ant} specially. Instead of handling it this way:
\begin{ocamlcode}
|Jq_Ant(loc,s) -> <:expr< Jq_Ant ($meta_loc loc$, $meta_string s$) >>
\end{ocamlcode}
it does this:
\begin{ocamlcode}
|Jq_Ant(loc,s) -> ExAnt(loc,s)
\end{ocamlcode}
That is, it translates the constructor directly to \textit{ExAnt} or \textit{PaAnt}.
\begin{ocamlcode}
let try /(_* Lazy as x) ":" (_* as rest ) / = "ghsoghosghsog ghsohgo" in (x,rest)
with Match_failure _ -> ("","");;
\end{ocamlcode}
Notice that \verb|Syntax.AntiquotSyntax.(parse_expr,parse_patt)| and \verb|Syntax.(parse_implem, parse_interf)| provide the parsers of the host language. Note also that \verb|Syntax.AntiquotSyntax| only provides \verb|parse_expr| and \verb|parse_patt|, corresponding to the two positions where antiquotations happen. The normal part is as follows:
\inputminted[fontsize=\scriptsize,lastline=30]{ocaml}{camlp4/code/jake/json_ant.ml}
Here we define the AST in a special way for the convenience of inserting code. The parser is modified:
\inputminted[fontsize=\scriptsize,firstline=32,lastline=57]{ocaml}{camlp4/code/jake/json_ant.ml}
\inputminted[fontsize=\scriptsize,firstline=57,lastline=125]{ocaml}{camlp4/code/jake/json_ant.ml}
The procedure is as follows:
\begin{ocamlcode}
<< $ << 1 >> $>>           (* parsing (my parser) *)
Jq_Ant(_loc, "<< 1 >> ")   (* lifting (mechanical) *)
Ex_Ant(_loc, "<< 1 >>")    (* parsing (the host parser) *)
<:expr< Jq_number 1. >>    (* antiquot_expand (my anti-expander) *)
<:expr< Jq_number 1. >>
\end{ocamlcode}
% \subsection{Part 10 Lexer }
% This part is deprecated. Camlp4 is not vanilla, it's inappropriate
% for not ocaml-oriented programming, since you have to do too much by
% hand. Just follow the signature of module type Lexer is enough.
% generally you have to provide module Loc, Token, Filter, Error, and
% mk mk is essential
% \begin{ocamlcode}
% val mk : unit -> Loc.t -> char Stream.t -> (Token.t * Loc.t ) Stream.t
% \end{ocamlcode}
% the verbose part lies in that you have to use the Camlp4.Sig.Loc,
% usually you have to maintain a mutable context, so when you lex a
% token, you can query the context to get Loc.t.
you can refer Jake's jq\_lexer.ml % for more details. How about using lexer, parser all by myself? % The work need to be done lies in you have to supply a plugin of type % expand\_fun, which is \\ % \verb|type 'a expand_fun = Ast.loc -> string option -> string -> 'a| % so if you dont use ocamllexer, why bother the grammar module, just % use lex yacc will make life easier, and you code will run faster . % \begin{ocamlcode} % type pos = { % line : int; % bol : int; % off : int % }; % type t = { % file_name : string; % start : pos; % stop : pos; % ghost : bool % }; % open Camlp4.PreCast % module Loc = Camlp4.PreCast.Loc % module Error : sig % type t % exception E of t % val to_string : t -> string % val print : Format.formatter -> t -> unit % end = struct % type t = string % exception E of string % let print = Format.pp_print_string (* weird, need flush *) % let to_string x = x % end % let _ = % let module M = Camlp4.ErrorHandler.Register (Error) in () % let (|> ) x f = f x % module Token : sig % module Loc : Camlp4.Sig.Loc % type t % val to_string : t -> string % val print : Format.formatter -> t -> unit % val match_keyword : string -> t -> bool % val extract_string : t -> string % module Filter : sig % (* here t refers to the Token.t *) % type token_filter = (t,Loc.t) Camlp4.Sig.stream_filter % type t % val mk : (string->bool)-> t % val define_filter : t -> (token_filter -> token_filter) -> unit % val filter : t -> token_filter % val keyword_added : t -> string -> bool -> unit % val keyword_removed : t -> string -> unit % end % module Error : Camlp4.Sig.Error % end = struct % (** the token need not to be a variant with arms with KEYWORD % EOI, etc, although conventional % *) % type t = % | KEYWORD of string % | NUMBER of string % | STRING of string % | ANTIQUOT of string * string % | EOI % let to_string t = % let p = Printf.sprintf in % match t with % |KEYWORD s -> p "KEYWORD %S" s % |NUMBER s -> p "NUMBER %S" s % |STRING s -> p "STRING %S" s % |ANTIQUOT (n,s) -> p "ANTIQUOT %S: %S" n s % |EOI -> p "EOI" % let print fmt x = x |> to_string |> Format.pp_print_string fmt % let match_keyword kwd = function % |KEYWORD k when kwd = k -> true % |_ -> false % let extract_string = function % |KEYWORD s | NUMBER s | STRING s -> s % |tok -> invalid_arg ("can not extract a string from this token : " % ^ to_string tok) % module Loc = Camlp4.PreCast.Loc % module Error = Error % module Filter = struct % type token_filter = (t * Loc.t ) Stream.t -> (t * Loc.t) Stream.t % (** stub out *) % (** interesting *) % type t = unit % (** the argument to mk is a function indicating whether % a string should be treated as a keyword, and the default % lexer uses it to filter the token stream to convert identifiers % into keywords. if we want our parser to be extensible, we should % take this into account % *) % let mk _ = () % let filter _ x = x % let define_filter _ _ = () % let keyword_added _ _ _ = () % let keyword_removed _ _ = () % end % end % module L = Ulexing % INCLUDE "/Users/bob/predefine_ulex.ml" % (* let rec token c = lexer *) % (* | eof -> EOI *) % (* | newline -> token *) % (** TOKEN ERROR LOC % mk : unit -> Loc.t -> char Stream.t -> (Token.t * Loc.t) Stream.t % Loc.of_tuple : % string * int * int * int * int * int * int * bool -> % Loc.t % *) % \end{ocamlcode} %%% Local Variables: %%% mode: latex %%% TeX-master: "../master" %%% End:
{ "alphanum_fraction": 0.649347153, "avg_line_length": 29.5976095618, "ext": "tex", "hexsha": "622308820ed5ffbd1b3381eb5d5d7dc152af2f5c", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-06-21T06:57:32.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-10T18:12:15.000Z", "max_forks_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mgttlinger/ocaml-book", "max_forks_repo_path": "camlp4/jake_blog.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_issues_repo_issues_event_max_datetime": "2018-12-03T04:15:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-09T13:53:43.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mgttlinger/ocaml-book", "max_issues_repo_path": "camlp4/jake_blog.tex", "max_line_length": 120, "max_stars_count": 142, "max_stars_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mgttlinger/ocaml-book", "max_stars_repo_path": "camlp4/jake_blog.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-15T00:47:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-12T16:45:40.000Z", "num_tokens": 2205, "size": 7429 }
\section{Introduction}
\label{sec:intro}
Grammar rules apply not to individual words (e.g. dog, eat) but to syntactic categories of words (e.g. noun, verb). Thus constructing syntactic categories (also known as lexical or part-of-speech categories) is one of the fundamental problems in language acquisition. Syntactic categories represent groups of words that can be substituted for one another without altering the grammaticality of a sentence. Linguists identify syntactic categories based on semantic, syntactic, and morphological properties of words. There is also evidence that children use prosodic and phonological features to bootstrap syntactic category acquisition \cite{ambridge2011child}. However, there is as yet no satisfactory computational model that can match human performance. Thus identifying the best set of features and the best learning algorithms for syntactic category acquisition is still an open problem.

\begin{figure}[b]
\centering
\includegraphics[width=50mm]{paradigmatic.png}
% \input{paradigmatic.tex}
\caption{Syntagmatic vs. paradigmatic axes for words in a simple sentence \cite{chandler2007semiotics}.}
\label{fig:paradigmatic}
\end{figure}

Relationships between linguistic units can be classified into two types: syntagmatic (concerning positioning) and paradigmatic (concerning substitution). Syntagmatic relations determine which units can combine to create larger groups and paradigmatic relations determine which units can be substituted for one another. Figure~\ref{fig:paradigmatic} illustrates the paradigmatic vs.\ syntagmatic axes for words in a simple sentence and their possible substitutes.

In this study, we represent the paradigmatic axis directly by building {\em substitute vectors} for each word position in the text. The dimensions of a substitute vector represent words in the vocabulary, and the magnitudes represent the probability of occurrence in the given position. Note that the substitute vector for a word position (e.g. the second word in Fig.~\ref{fig:paradigmatic}) is a function of the context only (i.e. ``the \_\_\_ cried''), and does not depend on the word that actually appears there (i.e. ``man''). Thus substitute vectors represent {\em individual word contexts}, not word types. We refer to the use of features based on substitute vectors as {\em paradigmatic representations of word context}.

Our preliminary experiments indicated that using context information alone without the identity or the features of the target word (e.g. using dimensionality reduction and clustering on substitute vectors) has limited success, and that modeling the co-occurrence of word and context types is essential for inducing syntactic categories. In the models presented in this paper, we combine paradigmatic representations of word context with features of co-occurring words within the co-occurrence data embedding (CODE) framework \cite{globerson2007euclidean,maron2010sphere}. The resulting embeddings for word types are split into 45 clusters using k-means and the clusters are compared to the 45 gold tags in the 1M word Penn Treebank Wall Street Journal corpus \cite{treebank3}. We obtain many-to-one accuracies up to .7680 using only distributional information (the identity of the word and a representation of its context) and .8023 using morphological and orthographic features of words, improving the state of the art in unsupervised part-of-speech tagging performance.
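As a minimal formal sketch (the actual construction is described in Section~\ref{sec:lm}; the notation here is only illustrative), the substitute vector for a word position $t$ can be written as
\[
\mathbf{s}_t = \big[\, P(w_1 \mid c_t),\; P(w_2 \mid c_t),\; \ldots,\; P(w_{|V|} \mid c_t) \,\big],
\]
where $c_t$ denotes the context surrounding position $t$ (with the target word itself left out) and $V$ is the vocabulary.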
% example substitute vectors (both syntactic and semantic) The high probability substitutes reflect both semantic and syntactic properties of the context as seen in the example below (the numbers in parentheses give substitute probabilities): \begin{quotation} \noindent {\em ``Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov.~29.''}\\ \noindent {\bf the:} its (.9011), the (.0981), a (.0006), $\ldots$\\ {\bf board:} board (.4288), company (.2584), firm (.2024), bank (.0731), $\ldots$ \end{quotation} Top substitutes for the word ``the'' consist of words that can act as determiners. Top substitutes for ``board'' are not only nouns, but specifically nouns compatible with the semantic context. This example illustrates two concerns inherent in all distributional methods: (i) words that are generally substitutable like ``the'' and ``its'' are placed in separate categories ({\sc dt} and {\sc prp\$}) by the gold standard, (ii) words that are generally not substitutable like ``do'' and ``put'' are placed in the same category ({\sc vb}). Freudenthal et al. \shortcite{freudenthal2005resolution} point out that categories with unsubstitutable words fail the standard linguistic definition of a syntactic category and children do not seem to make errors of substituting such words in utterances (e.g. {\em``What do you want?''} vs. {\em *``What put you want?''}). Whether gold standard part-of-speech tags or distributional categories are better suited to applications like parsing or machine translation can be best decided using extrinsic evaluation. However in this study we follow previous work and evaluate our results by comparing them to gold standard part-of-speech tags. Section~\ref{sec:related} gives a detailed review of related work. Section~\ref{sec:lm} describes the dataset and the construction of the substitute vectors. Section~\ref{sec:code} describes co-occurrence data embedding, the learning algorithm used in our experiments. Section~\ref{sec:exp} describes our experiments and compares our results with previous work. Section~\ref{sec:discuss} gives a brief error analysis and Section~\ref{sec:contrib} summarizes our contributions. All the data and the code to replicate the results given in this paper is available from the authors' website at \mbox{\url{http://goo.gl/RoqEh}}. %% Computational models of syntactic category acquisition rely mainly on %% distributional analysis: Words that share the same distribution %% (i.e. that occur in the same context) are grouped into the same %% category. The definition of ``the same context'' vary across studies. %% Algorithms based on the Hidden Markov Model use class based n-grams to %% specify context \cite{Brown:1992:CNG:176313.176316}, others use a %% frame of neighboring words around the target word %% \cite{Schutze:1995:DPT:976973.976994}. %% Our hypothesis is that potential substitutes of a word are directly %% indicative of its syntactic category and should be useful in acquiring %% syntactic categories in general. %% Our main contribution in this study %% is to introduce paradigmatic features, i.e. features based on %% potential substitutes of the target word, to represent word context. %% Both syntagmatic and paradigmatic relations of a word can be used to %% represent its context. In the syntagmatic case the context is %% represented by a selection of neighboring words, in the paradigmatic %% case it is represented by a set of possible substitutes. 
In previous %% studies of syntactic category learning the context representation has %% been primarily syntagmatic, either implicit in the class based n-grams %% of the standard Hidden Markov Model, or explicit in the construction %% and clustering of left and right neighbors. %% In this study we explore a paradigmatic representation of the context %% of a word in syntactic category acquisition. Specifically, the %% context of a word is represented by a list of its possible substitutes %% and their probabilities, which we call the {\em substitute vector}. %% Note that the substitute vector is a function of the context only, not %% the target word. Thus in effect we are clustering contexts, not %% words. When word contexts are clustered based on their substitute %% vectors they reveal a grouping that largely match the traditional part %% of speech boundaries (\bestResult many-to-one score using a %% 45-tag 24K word test corpus). %% % standard HMM-EM gives 42\% on the same data. %% Section~\ref{sec:related} gives a detailed review of related work. %% The construction of the substitute vectors is described in %% Section~\ref{sec:lm}. To find out how to best make use of this new %% paradigmatic representation, we explore different distance metrics %% (Section~\ref{sec:dist}), dimensionality reduction methods %% (Section~\ref{sec:dimreduce}), and clustering algorithms %% (Section~\ref{sec:clustering}) for substitute vectors. We note that %% close to 95\% of the word occurrences in human labeled data are tagged %% with their most frequent part of speech %% \cite{Lee:2010:STU:1870658.1870741}, making one-tag-per-word a fairly %% good first approximation. Even ambicategory words generally have %% fairly skewed part of speech distributions. %% Section~\ref{sec:sparsity} looks at ways to increase the sparsity of %% our solutions and demonstrates significant improvements using the %% one-tag-per-word assumption and similarity metrics that introduce %% sparsity. Section~\ref{sec:discussion} discusses the results and %% Section~\ref{sec:contrib} summarizes our contributions.
{ "alphanum_fraction": 0.7885569397, "avg_line_length": 53.674556213, "ext": "tex", "hexsha": "7cb9539ba2b0445368ba2b7e6cc300ce92e86227", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ai-ku/upos", "max_forks_repo_path": "papers/cl2012/emnlp12/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ai-ku/upos", "max_issues_repo_path": "papers/cl2012/emnlp12/introduction.tex", "max_line_length": 106, "max_stars_count": 4, "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_path": "papers/cl2012/emnlp12/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "num_tokens": 2107, "size": 9071 }
%% SECTION HEADER ///////////////////////////////////////////////////////////////////////////////////// \section{Determination of the MADIF} \label{sec:determination} %% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////
{ "alphanum_fraction": 0.2867647059, "avg_line_length": 45.3333333333, "ext": "tex", "hexsha": "cba7681888a059e91847246d296fbe0c21050f72", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "pfiborek/model-hc", "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter8/sec:determination.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "pfiborek/model-hc", "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter8/sec:determination.tex", "max_line_length": 103, "max_stars_count": null, "max_stars_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "pfiborek/model-hc", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter8/sec:determination.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 29, "size": 272 }
\SetAPI{J-C}
\section{ambeth.root.database.user}
\label{configuration:AmbethRootDatabaseUser}
\ClearAPI
\TODO%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\ \endhead
\hline \type{com.koch.ambeth.persistence.maria.MariaTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.maria.MariaTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.mssql.MSSqlTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.mssql.MSSqlTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.oracle.Oracle10gTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.oracle.Oracle10gTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.pg.PostgresTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.pg.PostgresTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.sqlite.SQLiteTestDialect} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.sqlite.SQLiteTestDialect} & \prettyref{module:Persistence} \\
\hline \end{longtable}
%% GENERATED USAGE REFERENCE END
\begin{lstlisting}[style=Props,caption={Usage example for \textit{ambeth.root.database.user}}]
ambeth.root.database.user=sa
\end{lstlisting}
{ "alphanum_fraction": 0.7604512276, "avg_line_length": 35.0465116279, "ext": "tex", "hexsha": "e6cc6c70e8835ec45a5fc0d8e85f71a1ab40b07e", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/configuration/AmbethRootDatabaseUser.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/configuration/AmbethRootDatabaseUser.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/configuration/AmbethRootDatabaseUser.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 502, "size": 1507 }
\documentclass[11pt]{article} \usepackage[margin=1in]{geometry} \usepackage{setspace} \onehalfspacing \usepackage{graphicx} \graphicspath{report_images/} \usepackage{appendix} \usepackage{listings} \usepackage{float} \usepackage{multirow} \usepackage{amsthm} % The next three lines make the table and figure numbers also include section number \usepackage{chngcntr} \counterwithin{table}{section} \counterwithin{figure}{section} % Needed to make titling page without a page number \usepackage{titling} % DOCUMENT INFORMATION ================================================= \font\titleFont=cmr12 at 11pt \title {{\titleFont ECEN 429: Introduction to Digital Systems Design Laboratory \\ North Carolina Agricultural and Technical State University \\ Department of Electrical and Computer Engineering}} % Declare Title \author{\titleFont Reporter: Chris Cannon\\ \titleFont Partner: Nikiyah Beulah} % Declare authors \date{\titleFont March 1, 2018} % ====================================================================== \begin{document} \begin{titlingpage} \maketitle \begin{center} Lab 7 \end{center} \end{titlingpage} \section{Introduction} This lab tests our ability to integrate multiple, relatively complex synchronous devices into a single unit. When complete, this lab will produce a finite state machine that will conduct arithmetic operations in each state. \theoremstyle{definition} \newtheorem{definition}{Definition} \begin{definition} Synchronous Device: an electronic circuit that performs operations in time with a clock. \label{def:synchronous_device} \end{definition} \section{Background, Design Solution, and Results} \subsection{Problem 1 } \subsubsection{Background} For Problem 1, we were to complete an ALU that would perform operations on 2 2-bit operands. We were instructed to select 4 operations for our ALU to complete, and we chose $AND, OR, LOGICAL SHIFT LEFT,$ and $LOGICAL SHIFT RIGHT$. \subsubsection{Design Solution} We decided to design individual entities for each arithmetic operation that we wished to complete. The thought process behind this design is that it will reduce the complexity of our overall code as well as run faster because we are simulating separate hardware for each operation to be completed. Because the inputs are 2-bits, the output will be 2-bits only. All overflow will be ignored, and negative numbers will be displayed as "00". Inputs for this system is assigned in Table ~\ref{tab:alu_input_Ports} and outputs are assigned in Table ~\ref{tab:alu_output_Ports}. A different operation will be performed for each possible value of the select input SEL, and those operations are defined in Table ~\ref{tab:alu_sel_ops}. 
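As a quick cross-check of these definitions, the intended behaviour can also be sketched in a few lines of software. The short Python model below is an illustration only (it is not part of the submitted VHDL, and the function name \texttt{alu} is our own); it mirrors Table ~\ref{tab:alu_sel_ops}, truncating every result to 2 bits.
\begin{lstlisting}[language=Python]
# Software sketch of the intended 2-bit ALU (illustration only).
# Operands and results are 2-bit values (0..3); overflow bits are discarded.
def alu(in1, in2, sel):
    if sel == 0b00:        # AND
        result = in1 & in2
    elif sel == 0b01:      # OR
        result = in1 | in2
    elif sel == 0b10:      # logical shift right by in2 bits
        result = in1 >> in2
    else:                  # "11": logical shift left by in2 bits
        result = in1 << in2
    return result & 0b11   # keep only the low 2 bits

# Example: 10 shifted left by 01 gives 00 after truncation.
print(format(alu(0b10, 0b01, 0b11), "02b"))
\end{lstlisting}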
\begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline in1[0] & Switch 5 & V15 \\ \hline in1[1] & Switch 4 & w15 \\ \hline in2[0] & Switch 3 & W16 \\ \hline in2[1] & Switch 2 & W17 \\ \hline sel[0] & Switch 1 & U19 \\ \hline sel[1] & Switch 0 & V16 \\ \hline \end{tabular} \caption{\label{tab:alu_input_Ports}Input port assignments for the arithmetic logic unit.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline output[0] & LED 0 & U16 \\ \hline output[1] & LED 1 & E19 \\ \hline \end{tabular} \caption{\label{tab:alu_output_Ports}Output port assignments for arithmetic logic unit.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l |} \hline SEL & Operation \\ \hline "00" & $output = in1 AND in2$ \\ \hline "01" & $output = in1 OR in2$ \\ \hline "10" & $output = in1 SHIFT_RIGHT_BY in2 BITS$ \\ \hline "11" & $output = in1 SHIFT_LEFT_BY in2 BITS$ \\ \hline \end{tabular} \caption{\label{tab:alu_sel_ops}Operation performed for a given select input SEL.} \end{center} \end{table} \subsubsection{Results} Our arithmetic logic unit operated as expected and results are shown in the figures below. \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_0148.jpg} \caption{\label{fig:alu_res1}Operation shown: 01 OR 01 = 01} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_0475.jpg} \caption{\label{fig:alu_res2}Operation shown: 10 OR 01 = 11} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_0525.jpg} \caption{\label{fig:alu_res3}Operation shown: 01 OR 01 = 01} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_1899.jpg} \caption{\label{fig:alu_res4}Operation shown: 10 Logical Shift Left by 01 = 00} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_2036.jpg} \caption{\label{fig:alu_res5}Operation shown: 01 Logical Shift Right by 01 = 00} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_2997.jpg} \caption{\label{fig:alu_res6}Operation shown: 01 Logical Shift Left by 01 = 10} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_5830.jpg} \caption{\label{fig:alu_res7}Operation shown: 10 AND 01 = 00} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p1/IMG_7938.jpg} \caption{\label{fig:alu_res8}Operation shown: 10 Logical Shift Right by 01 = 01} \end{center} \end{figure} \subsection{Problem 2 } \subsubsection{Background} For problem 2, we are instructed to create a simple Finite State Machine that will issue the four different commands needed to operate the arithmetic logic unit created in Problem 1. \subsubsection{Design Solution} Because there are four possible commands, and our select input from Problem 1 has 2-bits, we know that this state machine requires 2 bits and four possible states. We also utilized our clock divider from a previous lab to slow down the clock signal we are using the allow this machine to be tested and observed on the Basys3. The input and output ports for this design are shown in Tables ~\ref{tab:fsm_input_Ports} and ~\ref{tab:fsm_output_Ports}, respectively. 
The state diagram is displayed in Figure ~\ref{fig:fsm_state_diagram}. \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline clk & Clock & W5 \\ \hline enable & Switch 0 & V17 \\ \hline reset & Button Right & T17 \\ \hline \end{tabular} \caption{\label{tab:fsm_input_Ports}Input port assignments for the finite state machine.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline output\_sel[0] & LED 0 & U16 \\ \hline output\_sel[1] & LED 1 & E19 \\ \hline \end{tabular} \caption{\label{tab:fsm_output_Ports}Output port assignments for the finite state machine.} \end{center} \end{table} \begin{center} \begin{figure}[H] \includegraphics[width=\textwidth]{./images/fsmStateDiagram.png} \caption{\label{fig:fsm_state_diagram}State diagram based on the ``enable'' input.} \end{figure} \end{center} \subsubsection{Results} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p2/IMG_2172.jpg} \caption{\label{fig:fsm_res1}Finite state machine is staying in state 0 while receiving input 0.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p2/IMG_9245.jpg} \caption{\label{fig:fsm_res2}FSM is in state 1.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p2/IMG_1509.jpg} \caption{\label{fig:fsm_res3}FSM is in state 2.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p2/IMG_1522.jpg} \caption{\label{fig:fsm_res4}FSM is in state 3.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p2/IMG_1246.jpg} \caption{\label{fig:fsm_res5}FSM has returned to state 0.} \end{center} \end{figure} \subsection{Problem 3} \subsubsection{Background} Problem 3 integrates the two units developed in the previous problems. We are to use the finite state machine to control the arithmetic logic unit, cycling through each operation on the given inputs. \subsubsection{Design Solution} For our solution, we tied the output signal of the finite state machine to the select input for the arithmetic logic unit. This allows the unit to cycle through all possible operations for the given input. We also chose to show the current operation in our LEDs to help with debugging. The inputs and outputs for this design are shown in Tables ~\ref{tab:integration_input_Ports} and ~\ref{tab:integration_output_Ports}, respectively. The truth table for these operations is shown in Tables ~\ref{tab:integrated_truth_table1} and ~\ref{tab:integrated_truth_table2}.
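Those entries can be reproduced mechanically by driving the \texttt{alu} sketch from Problem 1 through the four select values. Again, this is an illustration only: the helper below assumes the hypothetical \texttt{alu} function defined earlier and is not part of the VHDL design.
\begin{lstlisting}[language=Python]
# Illustration only: cycle one operand pair through all four FSM states.
def cycle_states(in1, in2):
    return [(sel, alu(in1, in2, sel)) for sel in (0b00, 0b01, 0b10, 0b11)]

for sel, out in cycle_states(0b10, 0b01):
    print(format(sel, "02b"), format(out, "02b"))
\end{lstlisting}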
\begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline in1[0] & Switch 5 & V15 \\ \hline in1[1] & Switch 4 & w15 \\ \hline in2[0] & Switch 3 & W16 \\ \hline in2[1] & Switch 2 & W17 \\ \hline clk & Clock & W5 \\ \hline enable & Switch 0 & V17 \\ \hline reset & Button Right & T17 \\ \hline \end{tabular} \caption{\label{tab:integration_input_Ports}Input port assignments for the integrated circuit.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l |} \hline Bit & Label & Port \\ \hline output[0] & LED 0 & U16 \\ \hline output[1] & LED 1 & E19 \\ \hline operation[0] & & U19 \\ \hline operation[1] & & V19 \\ \hline \end{tabular} \caption{\label{tab:integration_output_Ports}Output port assignments for the integrated circuit.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l | l |} \hline in1 & in2 & state(sel) & output \\ \hline 00 & 00 & 00 & 00 \\ \hline 00 & 00 & 01 & 00 \\ \hline 00 & 00 & 10 & 00 \\ \hline 00 & 00 & 11 & 00 \\ \hline 00 & 01 & 00 & 00 \\ \hline 00 & 01 & 01 & 01 \\ \hline 00 & 01 & 10 & 00 \\ \hline 00 & 01 & 11 & 00 \\ \hline 00 & 10 & 00 & 00 \\ \hline 00 & 10 & 01 & 10 \\ \hline 00 & 10 & 10 & 00 \\ \hline 00 & 10 & 11 & 00 \\ \hline 00 & 11 & 00 & 00 \\ \hline 00 & 11 & 01 & 11 \\ \hline 00 & 11 & 10 & 00 \\ \hline 00 & 11 & 11 & 00 \\ \hline 01 & 00 & 00 & 00 \\ \hline 01 & 00 & 01 & 01 \\ \hline 01 & 00 & 10 & 01 \\ \hline 01 & 00 & 11 & 01 \\ \hline 01 & 01 & 00 & 01 \\ \hline 01 & 01 & 01 & 01 \\ \hline 01 & 01 & 10 & 00 \\ \hline 01 & 01 & 11 & 10 \\ \hline 01 & 10 & 00 & 00 \\ \hline 01 & 10 & 01 & 11 \\ \hline 01 & 10 & 10 & 00 \\ \hline 01 & 10 & 11 & 00 \\ \hline 01 & 11 & 00 & 01 \\ \hline 01 & 11 & 01 & 11 \\ \hline 01 & 11 & 10 & 00 \\ \hline 01 & 11 & 11 & 00 \\ \hline \end{tabular} \caption{\label{tab:integrated_truth_table1}Truth table for each set of given inputs in each possible state part 1.} \end{center} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{| l | l | l | l |} \hline in1 & in2 & state(sel) & output \\ \hline 10 & 00 & 00 & 00 \\ \hline 10 & 00 & 01 & 10 \\ \hline 10 & 00 & 10 & 10 \\ \hline 10 & 00 & 11 & 10 \\ \hline 10 & 01 & 00 & 00 \\ \hline 10 & 01 & 01 & 11 \\ \hline 10 & 01 & 10 & 01 \\ \hline 10 & 01 & 11 & 00 \\ \hline 10 & 10 & 00 & 10 \\ \hline 10 & 10 & 01 & 10 \\ \hline 10 & 10 & 10 & 00 \\ \hline 10 & 10 & 11 & 00 \\ \hline 10 & 11 & 00 & 10 \\ \hline 10 & 11 & 01 & 11 \\ \hline 10 & 11 & 10 & 00 \\ \hline 10 & 11 & 11 & 00 \\ \hline 11 & 00 & 00 & 00 \\ \hline 11 & 00 & 01 & 11 \\ \hline 11 & 00 & 10 & 11 \\ \hline 11 & 00 & 11 & 11 \\ \hline 11 & 01 & 00 & 01 \\ \hline 11 & 01 & 01 & 11 \\ \hline 11 & 01 & 10 & 01 \\ \hline 11 & 01 & 11 & 10 \\ \hline 11 & 10 & 00 & 10 \\ \hline 11 & 10 & 01 & 11 \\ \hline 11 & 10 & 10 & 00 \\ \hline 11 & 10 & 11 & 00 \\ \hline 11 & 11 & 00 & 11 \\ \hline 11 & 11 & 01 & 11 \\ \hline 11 & 11 & 10 & 00 \\ \hline 11 & 11 & 11 & 00 \\ \hline \end{tabular} \caption{\label{tab:integrated_truth_table2}Truth table for each set of given inputs in each possible state part 2.} \end{center} \end{table} \subsubsection{Results} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_1555.jpg} \caption{\label{fig:int_res1}FSM is in state 0 with input 0, 10 AND 01 = 00.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_9868.jpg} \caption{\label{fig:int_res2}FSM is in state 1 with output 11, 
10 OR 01 = 11.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_4430.jpg} \caption{\label{fig:int_res3}FSM is in state 2. 10 shift right by 01 =01.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_0945.jpg} \caption{\label{fig:int_res4}FSM is in state 3. 10 shift left by 01 = 00.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_3247.jpg} \caption{\label{fig:int_res5}FSM is in state 0. 01 AND 01 = 01.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_0490.jpg} \caption{\label{fig:int_res6}FSM is in state 1. 01 OR 01 = 01.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_7881.jpg} \caption{\label{fig:int_res7}FSM is in state 2. 01 shift right by 01 = 00.} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{./images/p3/IMG_1796.jpg} \caption{\label{fig:int_res8}FSM is in state 3. 01 shift left by 01 = 10.} \end{center} \end{figure} \section{Conclusion} This lab gave us experience using a state machine to control a logical unit, and integral part of microprocessor design. The most challenging technical part of this lab was understanding how the different units would fit together, and which inputs or outputs needed to be tied to ports on the board and which could be used as signals. \pagebreak \textbf{Appendices} \begin{appendices} \section{Problem 1 VHDL Code} \begin{lstlisting}[language=VHDL] library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity bitwise_and is port(a, b : in STD_LOGIC_VECTOR(1 downto 0); v: out STD_LOGIC_VECTOR(1 downto 0)); end entity bitwise_and; architecture and_arch of bitwise_and is begin v <= a and b; end architecture and_arch; library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity bitwise_or is port(c, d : in STD_LOGIC_VECTOR(1 downto 0); x: out STD_LOGIC_VECTOR(1 downto 0)); end entity bitwise_or; architecture or_arch of bitwise_or is begin x <= c or d; end architecture or_arch; library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity bitwise_lsr is port(e, f : in STD_LOGIC_VECTOR(1 downto 0); y : out STD_LOGIC_VECTOR(1 downto 0)); end bitwise_lsr; architecture lsr_arch of bitwise_lsr is begin process(e, f) begin case f is when "00" => y <= e; when "01" => if(e = "00") then y <= "00"; elsif(e = "01") then y <= "00"; else y <= "01"; end if; when others => y <= "00"; end case; end process; end lsr_arch; library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity bitwise_lsl is port(g, h : in STD_LOGIC_VECTOR(1 DOWNTO 0); z : out STD_LOGIC_VECTOR(1 downto 0)); end bitwise_lsl; architecture lsl_arch of bitwise_lsl is begin process(g, h) begin case h is when "00" => z <= g; when "01" => if(g = "11") then z <= "10"; elsif(g = "01") then z <="10"; else z <= "00"; end if; when others => z <= "00"; end case; end process; end lsl_arch; library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity ArithmeticLogicUnit is Port ( in1 : in STD_LOGIC_VECTOR(1 downto 0); in2 : in STD_LOGIC_VECTOR(1 downto 0); sel : in STD_LOGIC_VECTOR(1 downto 0); output : out STD_LOGIC_VECTOR(1 downto 0)); end ArithmeticLogicUnit; architecture Behavioral of ArithmeticLogicUnit is component bitwise_and is port(a, b : in STD_LOGIC_VECTOR(1 downto 0); v : out STD_LOGIC_VECTOR(1 downto 0)); end component bitwise_and; component bitwise_or is port(c, d : in STD_LOGIC_VECTOR(1 
downto 0); x : out STD_LOGIC_VECTOR(1 downto 0)); end component bitwise_or; component bitwise_lsr is port(e, f : in STD_LOGIC_VECTOR(1 downto 0); y : out STD_LOGIC_VECTOR(1 downto 0)); end component bitwise_lsr; component bitwise_lsl is port(g, h : in STD_LOGIC_VECTOR(1 DOWNTO 0); z : out STD_LOGIC_VECTOR(1 downto 0)); end component bitwise_lsl; signal and_output : STD_LOGIC_VECTOR(1 downto 0); signal or_output : STD_LOGIC_VECTOR(1 downto 0); signal lsr_output : STD_LOGIC_VECTOR(1 downto 0); signal lsl_output : STD_LOGIC_VECTOR(1 downto 0); begin and_comp : bitwise_and port map(in1, in2, and_output); or_comp : bitwise_or port map(in1, in2, or_output); lsr_comp : bitwise_lsr port map(in1, in2, lsr_output); lsl_comp : bitwise_lsl port map(in1, in2, lsl_output); process(sel) begin case sel is when "00" => output <= and_output; when "01" => output <= or_output; when "10" => output <= lsr_output; when "11" => output <= lsl_output; end case; end process; end Behavioral; \end{lstlisting} \section{Problem 1 Constraints File} \begin{center} \begin{figure}[H] \includegraphics[scale=1]{./images/Lab7Part1Const.png} \caption{\label{fig:Prob1Const}Constraints file for Problem 1.} \end{figure} \end{center} \section{Problem 2 VHDL Code} \begin{lstlisting}[language=VHDL] library IEEE; use IEEE.STD_LOGIC_1164.ALL; use IEEE.NUMERIC_STD.ALL; entity Clockdivider is port(clk : in std_logic; start_timer : in std_logic; FastClock,MediumClock,SlowClock, led0 : out std_logic); end Clockdivider; architecture clockdivider_arch of Clockdivider is signal slowClock_sig : STD_LOGIC; begin process variable cnt : std_logic_vector(26 downto 0):= "000000000000000000000000000"; begin wait until ((clk'EVENT) AND (clk = '1')); if (start_timer = '1') then cnt := "000000000000000000000000000"; else cnt := STD_LOGIC_VECTOR(unsigned(cnt) + 1); end if; FastClock <= cnt(22); MediumClock <= cnt(24); SlowClock <= cnt(26); slowClock_sig <= cnt(26); if (slowClock_sig = '1') then led0 <= '1'; else led0 <= '0'; end if; end process; end clockdivider_arch; library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity FSM is Port ( clk : in STD_LOGIC; enable : in STD_LOGIC; reset : in STD_LOGIC; output_sel : out STD_LOGIC_VECTOR(1 downto 0); clock_led : out STD_LOGIC); end FSM; architecture Behavioral of FSM is component Clockdivider is Port (clk : in std_logic; start_timer : in std_logic; FastClock,MediumClock,SlowClock, led0 : out std_logic); end component Clockdivider; signal fastclocksig :STD_LOGIC; signal medclocksig :STD_LOGIC; signal slowclocksig :STD_LOGIC; signal current_state : STD_LOGIC_VECTOR(1 downto 0) := "00"; begin clockDiv : Clockdivider port map(clk, reset, fastclocksig, medclocksig, slowclocksig, clock_led); process(slowclocksig, reset) begin if(reset = '1') then current_state <= "00"; output_sel <= "00"; end if; if(slowclocksig'event and (slowclocksig = '1')) then if(enable = '1') then case current_state is when "00" => current_state <= "01"; output_sel <= "01"; when "01" => current_state <= "10"; output_sel <= "10"; when "10" => current_state <= "11"; output_sel <= "11"; when "11" => current_state <= "00"; output_sel <= "00"; end case; else current_state <= "00"; output_sel <= "00"; end if; end if; end process; output_sel <= current_state; end Behavioral; \end{lstlisting} \section{Problem 2 Constraints File} \begin{center} \begin{figure}[H] \includegraphics[scale=1]{./images/Lab7Part2Const.png} \caption{\label{fig:Prob1Const}Constraints file for Problem 2.} \end{figure} \end{center} \section{Problem 3 VHDL Code} 
\begin{lstlisting}[language=VHDL] library IEEE; use IEEE.STD_LOGIC_1164.ALL; entity TopLevelDesign is Port ( input1 : in STD_LOGIC_VECTOR(1 downto 0); input2 : in STD_LOGIC_VECTOR(1 downto 0); clk : in STD_LOGIC; enable : in STD_LOGIC; reset : in STD_LOGIC; output : out STD_LOGIC_VECTOR(1 downto 0); clock_led : out STD_LOGIC; operation : out STD_LOGIC_VECTOR(1 downto 0)); end TopLevelDesign; architecture Behavioral of TopLevelDesign is component FSM is Port ( clk : in STD_LOGIC; enable : in STD_LOGIC; reset : in STD_LOGIC; output_sel : out STD_LOGIC_VECTOR(1 downto 0); clock_led : out STD_LOGIC); end component FSM; component ArithmeticLogicUnit is Port ( in1 : in STD_LOGIC_VECTOR(1 downto 0); in2 : in STD_LOGIC_VECTOR(1 downto 0); sel : in STD_LOGIC_VECTOR(1 downto 0); output : out STD_LOGIC_VECTOR(1 downto 0)); end component ArithmeticLogicUnit; signal select_signal : STD_LOGIC_VECTOR(1 downto 0); begin stateMachine : FSM port map(clk, enable, reset, select_signal, clock_led); logicUnit : ArithmeticLogicUnit port map(input1, input2, select_signal, output); operation <= select_signal; end Behavioral; \end{lstlisting} \section{Problem 3 Constraints File} \begin{center} \begin{figure}[H] \includegraphics[scale=1]{./images/Lab7Part3Const.png} \caption{\label{fig:Prob1Const}Constraints file for Problem 3.} \end{figure} \end{center} \end{appendices} \end{document}
{ "alphanum_fraction": 0.6695092161, "avg_line_length": 30.969738652, "ext": "tex", "hexsha": "53b69841a713cb12f81bd6339e92e53d5db8c94f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ccannon94/ncat-ecen429-repository", "max_forks_repo_path": "Lab7/LabReport/Lab7Report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ccannon94/ncat-ecen429-repository", "max_issues_repo_path": "Lab7/LabReport/Lab7Report.tex", "max_line_length": 727, "max_stars_count": null, "max_stars_repo_head_hexsha": "7a7be7becb73d0f2ec8db52213b7dd8961a32e5b", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ccannon94/ncat-ecen429-repository", "max_stars_repo_path": "Lab7/LabReport/Lab7Report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7195, "size": 22515 }
\section{Individual Workload} The workload of this assignment was divided as follows. Davide Pedranz: \begin{itemize} \item Generation of the datasets. \item Implementation of the basic Rosenblatt algorithm. \item Experiments with different numbers of iterations and different values of $N$. \end{itemize} Samuel Giacomelli: \begin{itemize} \item Extension of the Rosenblatt perceptron training algorithm to different values of $c$ and to inhomogeneous hyperplanes. \item Experiments with different values of $c$ and with inhomogeneous hyperplanes. \end{itemize} We worked together on the report: each of us focused mainly on the part that he implemented, and we then improved and refined the whole report together.
{ "alphanum_fraction": 0.7865013774, "avg_line_length": 38.2105263158, "ext": "tex", "hexsha": "700a8113e749b2bd4507ea5e5adb435d1c0758eb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "davidepedranz/neural_networks_assignments", "max_forks_repo_path": "assignment_1/report/06_work.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "davidepedranz/neural_networks_assignments", "max_issues_repo_path": "assignment_1/report/06_work.tex", "max_line_length": 122, "max_stars_count": null, "max_stars_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "davidepedranz/neural_networks_assignments", "max_stars_repo_path": "assignment_1/report/06_work.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 167, "size": 726 }
% === [ Introduction ] ========================================================= \begin{quote} \textit{``Standing on the shoulders of dragons.''} \\ --- Anonymous \end{quote} \section{Introduction} \label{sec:introduction} % === [ Subsections ] ========================================================== \input{sections/1_introduction/1_project_aim_and_objectives} \input{sections/1_introduction/2_deliverables}
{ "alphanum_fraction": 0.5323741007, "avg_line_length": 29.7857142857, "ext": "tex", "hexsha": "8a743e2f2d26fc4da65351b34006f2009961e055", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2020-05-27T02:01:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-24T18:52:18.000Z", "max_forks_repo_head_hexsha": "5b0a9c42b013c0f2f0c922f0ea454dabe1ea05bf", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "mewmew/uc", "max_forks_repo_path": "report/sections/1_introduction.tex", "max_issues_count": 80, "max_issues_repo_head_hexsha": "5b0a9c42b013c0f2f0c922f0ea454dabe1ea05bf", "max_issues_repo_issues_event_max_datetime": "2019-06-05T09:47:38.000Z", "max_issues_repo_issues_event_min_datetime": "2016-02-03T13:41:15.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "mewmew/uc", "max_issues_repo_path": "report/sections/1_introduction.tex", "max_line_length": 80, "max_stars_count": 43, "max_stars_repo_head_hexsha": "5b0a9c42b013c0f2f0c922f0ea454dabe1ea05bf", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "mewmew/uc", "max_stars_repo_path": "report/sections/1_introduction.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-04T10:49:40.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-04T14:03:39.000Z", "num_tokens": 88, "size": 417 }
\documentclass{article} \usepackage{amsmath} \setlength\parindent{0pt} \title{Learning Notes on Statistical Learning Theory} \author{a1trl9} \date{} \begin{document} \maketitle \section{Lecture 3} We have achieved: \begin{equation} L(\hat{\theta})-L(\theta^{*}) \approx \frac{p}{2n} + o(\frac{1}{n}) \end{equation} The limitation of the result above is that we assume the data are precisely distributed according to a particular ground-truth parameter \(\theta^{*}\). Also, we ignore the dependence of the higher-order terms on other hyperparameters. \subsection{Uniform Convergence Framework} \textbf{Definition:} uniform convergence is a property of a hypothesis class \(\mathcal{H}\) of the following form: \begin{equation} \mathrm{Pr}[\forall h \in \mathcal{H},\; |\hat{L}(h)-L(h)|\leq \epsilon] \geq 1 - \delta \end{equation} \textbf{Why does uniform convergence imply generalization?} \vspace{2mm} \begin{equation} \begin{aligned} &L(\hat{h})-L(h^{*})\\ &=L(\hat{h}) - \hat{L}(\hat{h})+\hat{L}(\hat{h})-\hat{L}(h^{*})+\hat{L}(h^{*})-L(h^{*}) \end{aligned} \end{equation} Since \(\hat{h}\) minimizes the empirical risk on the training sample, \(\hat{L}(\hat{h})-\hat{L}(h^{*})\leq 0\), so we get: \begin{equation} \begin{aligned} L(\hat{h})-L(h^{*})&\leq L(\hat{h}) - \hat{L}(\hat{h})+0+\hat{L}(h^{*})-L(h^{*})\\ &\leq |L(\hat{h})-\hat{L}(\hat{h})| + |\hat{L}(h^*)-L(h^*)|\\ &\leq 2\sup_{h\in\mathcal{H}}|L(h)-\hat{L}(h)| \end{aligned} \end{equation} Therefore, when uniform convergence holds: \begin{equation} \mathrm{Pr}[L(\hat{h})-L(h^{*})\leq 2\epsilon]\geq\mathrm{Pr}[\forall h \in \mathcal{H},\; |\hat{L}(h)-L(h)|\leq \epsilon]\geq 1-\delta \end{equation} \subsection{Finite Hypothesis Classes} If \(\mathcal{H}\) is finite and \(l((x, y), h)\in [0, 1]\), we have the following statements (based on Hoeffding's inequality and, for the second one, a union bound over \(\mathcal{H}\)): \begin{enumerate} \item For any fixed \(h\in \mathcal{H}\) and \(\epsilon > 0\): \begin{equation*} \mathrm{Pr}[|\hat{L}(h)-L(h)|\leq \epsilon] \geq 1 - 2e^{-2n\epsilon^2} \end{equation*} \item For any \(\epsilon > 0\): \begin{equation*} \mathrm{Pr}[\forall h\in \mathcal{H},\; |\hat{L}(h) - L(h)| \leq \epsilon]\geq 1 - 2|\mathcal{H}|e^{-2n\epsilon^2} \end{equation*} \end{enumerate} \end{document}
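\textbf{Remark:} a standard way to use the second statement is to choose \(\epsilon\) such that \(2|\mathcal{H}|e^{-2n\epsilon^2} = \delta\), i.e. \begin{equation*} \epsilon = \sqrt{\frac{\log(2|\mathcal{H}|/\delta)}{2n}}. \end{equation*} Combined with the uniform convergence argument above, this shows that with probability at least \(1-\delta\), \begin{equation*} L(\hat{h})-L(h^{*}) \leq 2\epsilon = \sqrt{\frac{2\log(2|\mathcal{H}|/\delta)}{n}}. \end{equation*}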
{ "alphanum_fraction": 0.6581158779, "avg_line_length": 33.25, "ext": "tex", "hexsha": "8279c8bf510916adb086dd90788c1c03fb666ca8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f6d58624bb9d7dabed13cf6e2ed01de598ef7426", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "a1trl9/stle", "max_forks_repo_path": "slt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f6d58624bb9d7dabed13cf6e2ed01de598ef7426", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "a1trl9/stle", "max_issues_repo_path": "slt.tex", "max_line_length": 143, "max_stars_count": null, "max_stars_repo_head_hexsha": "f6d58624bb9d7dabed13cf6e2ed01de598ef7426", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "a1trl9/stle", "max_stars_repo_path": "slt.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 860, "size": 2261 }
\chapter{Appendix} % \section{To the proof of the theorem of Peano} % \begin{definition} % A function sequence $f^n$ is called \textbf{equicontinuous}, if it holds: % \begin{gather*} % \forall \,\epsilon > 0 % \;\exists \, \delta > 0 % \;\forall \, n \in \mathbb{N} % \;\forall |y-x| < \delta % \quad:\quad |f^n(y)-f^n(x)| < \epsilon % \end{gather*} % \end{definition} % \begin{theorem}[Arzela-Ascoli]\label{theorem:arzelaascoli} % Let be $f^h$ a equicontinuous, uniformly bounded sequence % of functions. Then it exists a subsequence $f^{h_i}$ which converges to a % uniformly continuous function $f$. % \end{theorem} % \begin{todo} % \begin{proof} % Nachzulesen in ...? %%%%%%%%%%%%%%%% % \end{proof} % \end{todo} % \begin{proof}[Proof of the theorem of Peano] % I) Choose a sequence $h \to 0$ and compute approximations $u^h(t)$ with % Euler's method. % The method is well-defined, % if $(t_n,u_n^h) \in D \ \ \ \forall n$ with $|t_n-t_0| \le T$ % $|u_1^h - u_0| = |h f(t_0,u_0)| \le h M$ % $|u_n^h - u_0| = | \sum\limits_{k=0}^{n-1} h f(t_k,u_k^h) | % \le M \underbrace{h n}_{= T} \le M T \le \beta$ % \noindent II) To show: $u^h(t)$ is uniformly bounded and % equicontinuous. % a) Since $u^h(t) \in D \Rightarrow |u^h(t)-u_0| \le \beta$ % b) For $\tau,t \in [t_{n-1},t_n]$ holds $|u^h(\tau)-u^h(t)| \le M |\tau-t|$ % \noindent $\Rightarrow$ The functions $u^h$ are equicontinuous and % Lipschitz continuous with L-constant $M$. % \noindent $\underset{Theorem~\ref{Theorem:arzelaascoli} }{\Rightarrow}$ It exists % a sequence $u_i^h$ with $u_i^h \to u$ and $u$ is equicontinuous. % \noindent III) The limit function $u$ solves Volterra's integral equation % $u(t) = u_0 + \int\limits_{t_0}^t f(s,u(s))\ \mathrm{d}s$. % \noindent To that: (The subsequence index will be omitted in the following) % $u^h(t) = u_0 + \sum\limits_{j=0}^k h f(t_j,u_j^h) + (t-t_k) f(t_k,u_k^h)$, % \noindent where $t_k$ is the highest point of Euler's method with $t_k < t$. % $\begin{array}{lcl} % u^h(t) = u_0 + \int\limits_{t_0}^t \Phi^h(s) \ \mathrm{d}s % & \text{ with } & \Phi^h(t) = f(t_j,u_j^h) \text{ for } t \in [t_j,t_{j+1}]\\ % \Big\downarrow h \to 0 & \ & \Big\downarrow h \to 0 \\ % u(t) = u_0 + \int\limits_{t_0}^t \Phi(s) \ \mathrm{d}s % & \text{ with } & \Phi(t) = f(t,u(t))\\ % \end{array}$ % \noindent Remains to show: $\Phi^h \to \Phi$ uniformly % This follows from the uniformly continuity of $f$ in $\overline{D}$ and % the equicontinuity of $u^h \to u$. % \end{proof} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Comments on uniqueness of an IVP} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% For a first order differential equation Lipschitz continuity is only a sufficient and not, as one might think, a necessary condition for uniqueness of a first order differential equation. The following theorem and proof show that it is indeed possible to have uniqueness of solution without assuming Lipschitz continuity on the function. \begin{Theorem*}{nonnecessity}{Non-necessity of L-continuity} Let $f$ be a continous function satisfying $f(x) > 0$ for all $x \in \R$. Then, the solution to the IVP \begin{subequations} \label{eq:myivp} \begin{align} u'(t)&=f\bigl(t,u(t)\bigr) \\ u(t_0)&=u_0 \end{align} \end{subequations} is globally unique for all $(t_0, u_0) \in \R^2$. 
\end{Theorem*} \begin{proof} Assume two solutions $\varphi, \, \psi \colon I \to \R$ on an open interval $I$ with $t_0 \in I$. Then, there holds \begin{xalignat}{3} && 1 = \frac{\varphi'(t)}{f(\varphi(t))} = \frac{\psi'(t)}{f(\psi(t))} && \text{for all } t \in I. \label{eq:myivp2} \end{xalignat} Define the function $F \colon \R \to \R$ through \begin{gather*} F(x) = \int_{u_0} ^x \frac{\ds}{f(s)}. \end{gather*} $F$ is continuously differentiable since \begin{gather*} \partial_x F(x) = \partial_x \left( \int_{u_0} ^x \frac{\ds}{f(s)} \right) = \frac{1}{f(x)}. \end{gather*} Since $f > 0$, $F$ is also strictly increasing, hence injective on $\R$: Take $x, \, y \in \R$ and assume without loss of generality that $x < y$. Then we have $F(x) < F(y)$ and thus $F(x) \neq F(y)$. Thus, $F$ is an injection. Also, by the substitution rule, for all $t \in I$ there holds \begin{gather*} F(\varphi(t)) = \int_{t_0} ^t \frac{\varphi'(s)}{f(\varphi(s))} \ds \overset{\ref{eq:myivp2}}{=} \int_{t_0} ^t \frac{\psi'(s)}{f(\psi(s))} \ds = F(\psi(t)). \end{gather*} Thus, since $F$ is injective, we have $\varphi(t) = \psi(t)$ for all $t \in I$. In conclusion, the IVP \ref{eq:myivp} has a unique solution. \end{proof} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Properties of matrices} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{The matrix exponential} \label{sec:matrix-exponentials} \begin{definition} The matrix exponential $e^A$ of a matrix $A\in \R^{d\times d}$ is defined by its power series \begin{gather} \label{eq:appendix:1} e^A = \sum_{k=0}^\infty \frac{A^k}{k!}. \end{gather} \end{definition} \begin{lemma} \label{lemma:appendix:exp-0} The power series~\eqref{eq:appendix:1} converges for each matrix $A$. It is therefore valid to write \begin{gather} \label{eq:appendix:1.5} e^A = \lim_{m \to \infty} \sum_{k=0}^m \frac{A^k}{k!} = \sum_{k=0}^\infty \frac{A^k}{k!}. \end{gather} \end{lemma} \begin{proof} Choose $\| \cdot \|$ a submultiplicative matrix norm on $\R^{d\times d}$. We show that the sequence of partial sums $(S_n)_{n \in \mathbb N_0}$ with $S_n = \sum_{k=0} ^n \frac{A^k}{k!}$ is a Cauchy sequence. For $m > n$, the triangle inequality and the submultiplicativity of $\| \cdot \|$ yield \begin{gather} \|S_m - S_n\| = \left\| \sum_{k=n+1}^m \frac{A^k}{k!} \right\| \le \sum_{k=n+1}^m \frac{1}{k!} \|A \|^k. \end{gather} The right-hand side is a tail of the convergent scalar series $\sum_{k=0}^\infty \frac{1}{k!}\|A\|^k = e^{\|A\|}$ and therefore tends to zero as $n \to \infty$. Hence $(S_n)$ is a Cauchy sequence in the complete space $\R^{d\times d}$ and converges; its limit is denoted by $e^A$. \end{proof} \begin{lemma}[Properties of the matrix exponential function] \label{Lemma:appendix:exp-1} The following relations hold true: \begin{xalignat}{2} \label{eq:appendix:2} e^0 &= \identity \\ \label{eq:appendix:3} e^{\alpha A} e^{\beta A} &= e^{(\alpha+\beta) A}, &\forall A\in\R^{d\times d}\; \forall \alpha,\beta\in \R,\\ \label{eq:appendix:4} e^A e^{-A} &= \identity &\forall A\in\R^{d\times d},\\ \label{eq:appendix:5} e^{S^{-1}AS} &= S^{-1} e^A S &\forall A \in\R^{d\times d},\ S\in\R^{d\times d} \text{ invertible},\\ \label{eq:appendix:6} e^{\diag(\lambda_1, \dots, \lambda_d)} &= \diag(e^{\lambda_1}, \dots, e^{\lambda_d}) &\forall \lambda_i \in\R, \, i = 1, \dots, d.
\end{xalignat} Moreover, $e^A$ is invertible for arbitrary quadratic matrices $A$ with $(e^A)^{-1} = e^{-A}$. \end{lemma} \begin{proof} The equality \ref{eq:appendix:2} follows directly from the definition. For \ref{eq:appendix:3} consider the function $\phi(\alpha)$ given by \begin{gather*} \phi(\alpha) = e^{\alpha A}e^{\beta A} - e^{(\alpha + \beta) A}. \end{gather*} For the derivative $\phi '$ there holds \begin{gather*} \phi'(\alpha) = A\big(e^{\alpha A}e^{\beta A} - e^{(\alpha + \beta) A} \big) = A \phi(\alpha). \end{gather*} For the obtained differential equation we obtain the initial value at $\alpha = 0$ by \begin{gather*} \phi(0) = \identity e^{\beta A} - e^{\beta A} = 0. \end{gather*} The solution is to this IVP then given by $\phi(\alpha) = e^{\alpha A} \phi(0) = 0$. Hence there holds \begin{gather*} \phi(\alpha) = \begin{cases} e^{\alpha A}e^{\beta A} - e^{(\alpha + \beta) A} \\ 0 \end{cases} \end{gather*} and we obtain $e^{\alpha A}e^{\beta A} = e^{(\alpha + \beta) A}$ as desired. Equation \ref{eq:appendix:4} is a special case of \ref{eq:appendix:3} with parameters $\alpha=1$ and $\beta = -1$. Using \ref{eq:appendix:2} yields the result. For \ref{eq:appendix:5} note that $\R^{d \times d}$ forms a ring and is associative as such. Then for $k \in \mathbb N_0$ we have \begin{align*} (S^{-1} AS)^k &= (S^{-1} AS) (S^{-1} AS) \cdots (S^{-1} AS)(S^{-1} AS) \\ &= S^{-1} A (SS^{-1}) A (S \cdots S^{-1}) A (SS^{-1}) AS = S^{-1} A^k S& \\ \intertext{and thus by convergence} e^{S^{-1}AS} &= \sum_{k=0}^\infty \frac{1}{k!} (S^{-1}AS)^k \\ &= \sum_{k=0}^\infty\frac{1}{k!} S^{-1} A^k S \\ &= \enskip S^{-1} \cdot \left( \sum_{k=0}^\infty \frac{1}{k!} A^k \right) \cdot S \enskip = \enskip S^{-1} e^A S.& \end{align*} Lastly, let $D = \diag(\lambda_1, \dots, \lambda_d) \in \R^{d \times d}$ where $\lambda_i \in \R$, $i= 1, \dots, d$. Then, $D^k = \diag(\lambda_1^k, \dots, \lambda_n^k)$ for any $k \in \mathbb N_0$. Then we have \begin{align} e^D =& \sum_{k=0}^\infty \frac{1}{k!} \diag(\lambda_1^k, \dots, \lambda_n^k) \\ =& \sum_{k=0}^\infty \diag \left(\frac{1}{k!}\lambda_1^k, \dots, \frac{1}{k!}\lambda_n^k \right) \\ =& \diag \left(\sum_{k=0}^\infty \frac{1}{k!} \lambda_1^k, \dots, \sum _{k=0} ^m \frac{1}{k!}\lambda_n^k \right) \\ =& \diag \left(\sum_{k=0}^\infty \frac{1}{k!} \lambda_1^k, \dots, \lim_{m \to \infty} \sum _{k=0} ^m \frac{1}{k!}\lambda_n^k \right) \\ =& \diag(e^{\lambda_1^k}, \dots, e^{\lambda_n^k}) \end{align} Here we have used the absolute convergence of the series and that these matrices are elements of the ring $R^{d \times d}$. \end{proof} \begin{example} We will perform an exemplary calculation of a matrix exponential. Consider \begin{gather*} A = \begin{pmatrix} 0 & 1 \\ k^2 & 0 \end{pmatrix}. \end{gather*} As the matrix exponential of a diagonal matrix is simply a diagonal matrix with the exponential of the entries. To diagonalize $A$ note that the eigenvalues $\lambda_1, \, \lambda_2$ of $A$ are $\lambda_1 = k$ and $\lambda_2 = -k$. By $D$ we denote the diagnoal matrix with the eigenvalues of $A$ as diagonal elements. The correspondig eigenvectors $v_1, \, v_2$ are $v_1 = \big( 1, k \big)^T$ and $v_2 = \big(1, -k \big)$. The matrix $S = (v_1 | v_2) \in \R^{2\times 2}$ satisfies \begin{gather*} A = S^{-1} D S. 
\end{gather*} The inverse of $S$ is given as \begin{gather*} S^{-1} = \frac 12 \begin{pmatrix} 1 & \nicefrac 1k \\ 1 & - \nicefrac 1k \end{pmatrix} \end{gather*} and with the above lemma we can now calculate \begin{gather*} e^A = S e^D S^{-1} = \frac 12 \begin{pmatrix} e^k + e^{-k} & \nicefrac 1k (e^k - e^{-k}) \\ k (e^k - e^{-k}) & e^k + e^{-k} \end{pmatrix} = \begin{pmatrix} \cosh(k) & \nicefrac 1k \sinh(k) \\ k \sinh(k) & \cosh(k) \end{pmatrix} \end{gather*} \end{example} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The Banach fixed-point theorem} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{Theorem*}{banach}{Banach fixed-point theorem} Let $\Omega \subset \R$ be a closed set and $f \colon \Omega \to \Omega$ a contraction, i.e. there holds $|f(x) - f(y)| \le \gamma |x-y|$ for a $\gamma \in (0,1)$. Then there exists a unique $x^* \in \Omega$ such that $f(x^*) =x^*$. \end{Theorem*} \begin{proof} Let $x_0 \in \Omega$ and define $f(x_k) = x_{k+1}$. First, we prove existence unsing the cauchy-criterion. Let $k, n \in \mathbb N_0$ and consider \begin{gather*} |x_k - x_{k+m} | = |f(x_{k-1}) - f(x_{k+m-1})| \le \gamma |x_{k-1} - x_{k+m-1})|. \end{gather*} Iteratively, we get \begin{gather*} |x_k - x_{k+m} | \le \gamma^k |x_0 - x_m|. \end{gather*} We now write $x_0 - x_m = x_0 - x_1 + x_1 - x_2 + \dots + x_{m-1} - x_m$. The triangle-inequality yields the estimate \begin{gather*} \gamma^k |x_0 - x_m| \le \gamma^k |x_0 - x_1| + |x_1 - x_2| + \dots + |x_{m-1} - x_m| \\ \le \gamma^k |x_0 - x_1| (1 + \gamma + \gamma ^2 + \dots + \gamma^m) \\ \le \frac{\gamma^k}{1-\gamma} |x_0 - x_1|. \end{gather*} As $k$ gets larger the estimate goes to zero. Concerning uniqueness, let $x^*$ and $y^*$ be fixpoints. \begin{gather*} |x^* - y^*| = |f(x*) - f(y^*) | \le \gamma |x^* - y^*| \end{gather*} Since $\gamma \in (0,1)$ we immediately obtain $|x^* - y^*| = 0$. Using that $|a| = 0$ if and only if $a=0$ yields $y^* = x^*$. This concludes the proof. \end{proof} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The implicit and explicit Euler-method} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The explicit resp. implicit Euler is given by the one-step method \begin{align*} && y_1 = y_0 + h f(y_0) && \text{resp.} && y_1 = y_0 + h f(y_1) && \end{align*} Clearly, the explicit Euler is a rather easy calculation since all one needs are $f$, $h$ and $y_0$. The implicit Euler is more difficult to compute since for calculating $y_1$ we need the value of $f$ at $y_1$. The goal of this section is to visualize and give an intuition for the two algorithms. Consider the following visualizations. 
\begin{center} \begin{minipage}[t]{0.45\textwidth} \begin{tikzpicture}[domain=0:4] \draw[->] (0,0) -- (4.5,0) node[anchor=north] {$t$}; \draw[->] (0,0) -- (0,4) node[anchor=east] {$y$}; \draw[dashed] (2,4) -- (2,-0.22) node[anchor=north] {$t_1$}; \draw plot [smooth] coordinates { (0,1) (0.5,1.19125) (1,1.41907) (1.5,1.69046) (2,2.01375) (2.5, 2.39888) (3, 2.85765) (3.5, 3.40417) (4, 4.0552) } node[left] {$u$}; \draw[-{>[scale=2.0]}] (0,1) -- (4, 2.4) node[right] {$u'(t_0)$}; \end{tikzpicture} For the explicit Euler we take $u_0$ and $u'_0$. $y_1$, our approximate solution for $u_1$, is chosen as the intersection point of the line $t=t_1$ with the graph of $g(t) = y_0 + t \cdot u'(t_0)$. \end{minipage} \begin{minipage}[t]{0.1\textwidth} \phantom{Käse} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \begin{tikzpicture}[domain=0:4] \draw[->] (0,0) -- (4.5,0) node[anchor=north] {$t$}; \draw[->] (0,0) -- (0,4) node[anchor=east] {$y$}; \draw[dashed] (2,4) -- (2,-0.22) node[anchor=north] {$t_1$}; \draw plot [smooth] coordinates { (0,1) (0.5,1.19125) (1,1.41907) (1.5,1.69046) (2,2.01375) (2.5, 2.39888) (3, 2.85765) (3.5, 3.40417) (4, 4.0552) } node[left] {$u$}; \draw[{<[scale=3.0]}-] (0,1) -- (4, 3.81925) node[right] {$u'(t_1)$}; \end{tikzpicture} For the implicit Euler we go backwards. We look for the affine function $g$ that fulfills $g(0) = u_0$ and $g'(t_1) = f(g(t_1))$. Then we set $y_1 = g(t_1)$. \end{minipage} \end{center} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Derivation of a BDF-scheme} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The BDF formulas use the approximations of the solution at the previous points $t_k -sh, \dots, t_k-h$ and the unknown value $y_k$ at $t_k$, which is to be determined. With the Lagrange polynomials associated with these $s+1$ nodes, $L_i (t) = \prod_{j \not=i} \tfrac{t- t_j}{t_i-t_j}$, we let $y(t) = \sum_{j=0} ^{s} y_{k-j} L_{k-j}(t)$ be the interpolation polynomial. Then, we assume that $y$ solves the IVP in the point $t_k$ and obtain an equation from which we derive the desired value $y_k$.\\ We now aim to derive the scheme for BDF(2). Let the points $t_k -2h,t_k -h$ and $t_k$ be given. \begin{center} \begin{tikzpicture}[scale=2, domain=-0.5:2.5] \draw[->] (-.25,0) -- (2.25,0) node[pos=1.025] {$t$}; \draw[fill=black] (0,0) circle [radius=0.04] node[below] {$t_k - 2h$}; \draw[fill=black] (1,0) circle [radius=0.04] node[below] {$t_k - h$}; \draw[fill=black] (2,0) circle [radius=0.04] node[below] {$t_k$}; \end{tikzpicture} \end{center} For the Lagrange polynomials in the points $t_k - 2h$, $t_k - h$ resp. $t_k$ we have \begin{gather*} L_0 (t) = \tfrac{(t-t_k+h)(t-t_k)}{2h^2}, \, L_1 (t) = -\tfrac{(t-t_k+2h)(t-t_k)}{h^2} \, \text{resp.} \, L_2 (t) = \tfrac{(t-t_k+2h)(t-t_k+h)}{2h^2}. \end{gather*} As announced, we assume that the interpolation polynomial fulfills the IVP in the point $t_k$, i.e. there holds $f_k := f(t_k, y(t_k)) = y'(t_k) = \sum_{j=0} ^{s} y_{k-j} L_{k-j} ' (t_k)$. The product rule and evaluation at $t=t_k$ yield \begin{gather*} L_0 '(t) = \tfrac{2t-2t_k+h}{2h^2} = \tfrac{1}{2h}, \, L_1 '(t) = -\tfrac{2t -2t_k + 2h}{h^2} = -\tfrac{2}{h} \, \text{and} \, L_2 '(t) = \tfrac{2t-2t_k + 3h}{2h^2} = \tfrac{3}{2h}. \end{gather*} Then, we obtain \begin{gather*} f_k = \tfrac{1}{2h} y_{k-2} - \tfrac{2}{h}y_{k-1} + \tfrac{3}{2h} y_k.
\end{gather*} Multiplication with $\tfrac{2h}{3}$ and rearranging yields the scheme \begin{gather*} y_k - \tfrac{4}{3} y_{k-1} + \tfrac{1}{3} y_{k-2} = \tfrac{2h}{3} f_k. \end{gather*} %%% Local Variables: %%% mode: latex %%% TeX-master: "notes" %%% End:
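As a brief illustration, consider the scalar test equation $u' = \lambda u$ with $\lambda \in \R$ (a generic worked example, not tied to any particular application). Here $f_k = \lambda y_k$, so the explicit and implicit Euler steps from the previous section read \begin{gather*} y_1 = (1 + h\lambda)\, y_0 \qquad \text{resp.} \qquad y_1 = \frac{y_0}{1 - h\lambda}, \end{gather*} and the BDF(2) scheme just derived becomes \begin{gather*} y_k - \tfrac{4}{3} y_{k-1} + \tfrac{1}{3} y_{k-2} = \tfrac{2h}{3} \lambda y_k, \qquad \text{i.e.} \qquad y_k = \frac{4 y_{k-1} - y_{k-2}}{3 - 2h\lambda}. \end{gather*}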
{ "alphanum_fraction": 0.5551689303, "avg_line_length": 38.6507592191, "ext": "tex", "hexsha": "55f57aa651e01954cf2b6d36b6a788f379b9b19a", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2020-11-05T19:07:29.000Z", "max_forks_repo_forks_event_min_datetime": "2018-05-15T19:28:53.000Z", "max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "ahumanita/notes", "max_forks_repo_path": "ode/appendix.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_issues_repo_issues_event_max_datetime": "2018-08-31T12:58:14.000Z", "max_issues_repo_issues_event_min_datetime": "2018-05-24T07:31:37.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "ahumanita/notes", "max_issues_repo_path": "ode/appendix.tex", "max_line_length": 100, "max_stars_count": null, "max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "ahumanita/notes", "max_stars_repo_path": "ode/appendix.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6861, "size": 17818 }
\filetitle{dbsave}{Save database as CSV file}{dbase/dbsave} \paragraph{Syntax}\label{syntax} \begin{verbatim} List = dbsave(D,FName) List = dbsave(D,FName,Dates,...) \end{verbatim} \paragraph{Output arguments}\label{output-arguments} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \texttt{List} {[} cellstr {]} - - List of actually saved database entries. \end{itemize} \paragraph{Input arguments}\label{input-arguments} \begin{itemize} \item \texttt{D} {[} struct {]} - Database whose tseries and numeric entries will be saved. \item \texttt{FName} {[} char {]} - Filename under which the CSV will be saved, including its extension. \item \texttt{Dates} {[} numeric \textbar{} \emph{\texttt{Inf}} {]} Dates or date range on which the tseries objects will be saved. \end{itemize} \paragraph{Options}\label{options} \begin{itemize} \item \texttt{'class='} {[} \emph{\texttt{true}} \textbar{} false {]} - Include a row with class and size specifications. \item \texttt{'comment='} {[} \emph{\texttt{true}} \textbar{} \texttt{false} {]} - Include a row with comments for tseries objects. \item \texttt{'decimal='} {[} numeric \textbar{} \emph{empty} {]} - Number of decimals up to which the data will be saved; if empty the \texttt{'format'} option is used. \item \texttt{'format='} {[} char \textbar{} \emph{\texttt{'\%.8e'}} {]} - Numeric format that will be used to represent the data, see \texttt{sprintf} for details on formatting, The format must start with a \texttt{'\%'}, and must not include identifiers specifying order of processing, i.e.~the \texttt{'\$'} signs, or left-justify flags, the \texttt{'-'} signs. \item \texttt{'freqLetters='} {[} char \textbar{} \emph{\texttt{'YHQBM'}} {]} - Five letters to represent the five possible date frequencies (annual, semi-annual, quarterly, bimonthly, monthly). \item \texttt{'nan='} {[} char \textbar{} \emph{\texttt{'NaN'}} {]} - String that will be used to represent NaNs. \item \texttt{'saveSubdb='} {[} \texttt{true} \textbar{} \emph{\texttt{false}} {]} - Save sub-databases (structs found within the struct \texttt{D}); the sub-databases will be saved to separate CSF files. \item \texttt{'userData='} {[} char \textbar{} \emph{`userdata'} {]} - Field name from which any kind of userdata will be read and saved in the CSV file. \end{itemize} \paragraph{Description}\label{description} The data saved include also imaginary parts of complex numbers. \subparagraph{Saving user data with the database}\label{saving-user-data-with-the-database} If your database contains field named \texttt{'userdata='}, this will be saved in the CSV file on a separate row. The \texttt{'userdata='} field can be any combination of numeric, char, and cell arrays and 1-by-1 structs. You can use the \texttt{'userdata='} field to describe the database or preserve any sort of metadata. To change the name of the field that is treated as user data, use the \texttt{'userData='} option. \paragraph{Example}\label{example} Create a simple database with two time series. \begin{verbatim} d = struct(); d.x = tseries(qq(2010,1):qq(2010,4),@rand); d.y = tseries(qq(2010,1):qq(2010,4),@rand); \end{verbatim} Add your own description of the database, e.g. 
\begin{verbatim} d.userdata = {'My database',datestr(now())}; \end{verbatim} Save the database as CSV using \texttt{dbsave}, \begin{verbatim} dbsave(d,'mydatabase.csv'); \end{verbatim} When you later load the database, \begin{verbatim} d = dbload('mydatabase.csv') d = userdata: {'My database' '23-Sep-2011 14:10:17'} x: [4x1 tseries] y: [4x1 tseries] \end{verbatim} the database will preserve the \texttt{'userdata='} field. \paragraph{Example}\label{example-1} To change the field name under which you store your own user data, use the \texttt{'userdata='} option when running \texttt{dbsave}, \begin{verbatim} d = struct(); d.x = tseries(qq(2010,1):qq(2010,4),@rand); d.y = tseries(qq(2010,1):qq(2010,4),@rand); d.MYUSERDATA = {'My database',datestr(now())}; dbsave(d,'mydatabase.csv',Inf,'userData=','MYUSERDATA'); \end{verbatim} The name of the user data field is also kept in the CSV file so that \texttt{dbload} works fine in this case, too, and returns a database identical to the saved one, \begin{verbatim} d = dbload('mydatabase.csv') d = MYUSERDATA: {'My database' '23-Sep-2011 14:10:17'} x: [4x1 tseries] y: [4x1 tseries] \end{verbatim}
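The remaining options are passed in the same comma-separated name-value style. For instance (an illustrative sketch only, combining options documented above), a call that restricts the saved dates and rounds the data to four decimals might look like

\begin{verbatim}
dbsave(d,'mydatabase.csv',qq(2010,1):qq(2010,2),'decimal=',4);
\end{verbatim}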
{ "alphanum_fraction": 0.6950732357, "avg_line_length": 29.4509803922, "ext": "tex", "hexsha": "90fa9e90a2823a6a41d375232b90b58fcd9522f9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_path": "-help/dbase/dbsave.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_path": "-help/dbase/dbsave.tex", "max_line_length": 72, "max_stars_count": 1, "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_path": "-help/dbase/dbsave.tex", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "num_tokens": 1390, "size": 4506 }
\subsection*{Velocity}\label{sec:Velocity} We started this section by saying, ``It is often useful to know how sensitive the value of $y$ is to small changes in $x$.'' We have seen one purely mathematical example of this, involving the function $f(x)=\sqrt{625-x^2}$. Here is a more applied example. With careful measurement it might be possible to discover that the height of a dropped ball $t$ seconds after it is released is $\ds h(t)=h_0-kt^2$. (Here $h_0$ is the initial height of the ball, when $t=0$, and $k$ is some number determined by the experiment.) A natural question is then, ``How fast is the ball going at time $t$?'' We can certainly get a pretty good idea with a little simple arithmetic. To make the calculation more concrete, let's use units of meters and seconds and say that $\ds h_0=100$ meters and $k=4.9$. Suppose we're interested in the speed at $t=2$. We know that when $t=2$ the height is $100-4\cdot 4.9=80.4$ meters. A second later, at $t=3$, the height is $100-9\cdot 4.9=55.9$ meters. The change in height during that second is $55.9-80.4=-24.5$ meters. The negative sign means the height has decreased, as we expect for a falling ball, and the number 24.5 is the average speed of the ball during the time interval, in meters per second. We might guess that 24.5 meters per second is not a terrible estimate of the speed at $t=2$, but certainly we can do better. At $t=2.5$ the height is $\ds 100-4.9(2.5)^2=69.375$ meters. During the half second from $t=2$ to $t=2.5$, the change in height is $69.375-80.4=-11.025$ meters giving an average speed of $11.025/(1/2)=22.05$ meters per second. This should be a better estimate of the speed at $t=2$. So it's clear now how to get better and better approximations: compute average speeds over shorter and shorter time intervals. Between $t=2$ and $t=2.01$, for example, the ball drops 0.19649 meters in one hundredth of a second, at an average speed of 19.649 meters per second. We still might reasonably ask for the precise speed at $t=2$ (the {\em instantaneous} speed) rather than just an approximation to it. For this, once again, we need a limit. Let's calculate the average speed during the time interval from $t=2$ to $t=2+\Delta t$ without specifying a particular value for $\Delta t$. The change in height during the time interval from $t=2$ to $t=2+\Delta t$ is \begin{align*} h(2+\Delta t)-h(2) &=(100-4.9(2+\Delta t)^2)-80.4\\ &=100-4.9(4+4\Delta t+\Delta t^2)-80.4\\ &=100-19.6-19.6\Delta t-4.9\Delta t^2-80.4\\ &=-19.6\Delta t-4.9\Delta t^2\\ &=-\Delta t(19.6+4.9\Delta t) \end{align*} The average speed during this time interval is then $$\frac{\Delta t(19.6+4.9\Delta t)}{\Delta t}=19.6+4.9\Delta t.$$ When $\Delta t$ is very small, this is very close to 19.6. Indeed, $\lim_{\Delta x\to 0}(19.6+4.9\Delta t)=19.6$. So the exact speed at $t=2$ is 19.6 meters per second. At this stage we need to make a distinction between \textit{speed} and \textit{velocity}. Velocity is signed speed, that is, speed with a direction indicated by a sign (positive or negative). Our algebra above actually told us that the instantaneous velocity of the ball at $t=2$ is $-19.6$ meters per second. The number 19.6 is the speed and the negative sign indicates that the motion is directed downwards (the direction of decreasing height). In the language of the previous section, we might have started with $\ds f(x)=100-4.9x^2$ and asked for the slope of the tangent line at $x=2$. 
We would have answered that question by computing
$$\lim_{\Delta x\to 0}\frac{f(2+\Delta x) - f(2)}{\Delta x}
=\lim_{\Delta x\to 0}\frac{-19.6\Delta x-4.9\Delta x^2}{\Delta x}
=\lim_{\Delta x\to 0}(-19.6-4.9\Delta x)=-19.6.$$
The algebra is the same. Thus, the velocity of the ball is the value of the derivative of a certain function, namely, of the function that gives the position of the ball.

The upshot is that this problem, finding the velocity of the ball, is \ifont{exactly} the same problem mathematically as finding the slope of a curve. This may already be enough evidence to convince you that whenever some quantity is changing (the height of a curve or the height of a ball or the size of the economy or the distance of a space probe from earth or the population of the world) the \textit{rate} at which the quantity is changing can, in principle, be computed in exactly the same way, by finding a derivative.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Opensolutionfile{solutions}[ex]
\section*{Exercises for Section \ref{sec:Slope}}

\begin{enumialphparenastyle}

%%%%%%%%%%
\begin{ex}
Draw the graph of the function $\ds y=f(x)=\sqrt{169-x^2}$ between $x=0$ and $x=13$. Find the slope $\Delta y/\Delta x$ of the chord between the points of the circle lying over (a) $x=12$ and $x=13$, (b) $x=12$ and $x=12.1$, (c) $x=12$ and $x=12.01$, (d) $x=12$ and $x=12.001$. Now use the geometry of tangent lines on a circle to find (e) the exact value of the derivative $f'(12)$. Your answers to (a)--(d) should be getting closer and closer to your answer to (e).
\begin{sol}
$-5$, $-2.47106145$, $-2.4067927$, $-2.400676$, $-2.4$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
Use geometry to find the derivative $f'(x)$ of the function $\ds f(x)=\sqrt{625-x^2}$ in the text for each of the following $x$: (a) 20, (b) 24, (c) $-7$, (d) $-15$. Draw a graph of the upper semicircle, and draw the tangent line at each of these four points.
\begin{sol}
$-4/3$, $-24/7$, $7/24$, $3/4$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
Draw the graph of the function $y=f(x)=1/x$ between $x=1/2$ and $x=4$. Find the slope of the chord between (a) $x=3$ and $x=3.1$, (b) $x=3$ and $x=3.01$, (c) $x=3$ and $x=3.001$. Now use algebra to find a simple formula for the slope of the chord between $(3,f(3))$ and $(3+\Delta x,f(3+\Delta x))$. Determine what happens when $\Delta x$ approaches 0. In your graph of $y=1/x$, draw the straight line through the point $(3,1/3)$ whose slope is this limiting value of the difference quotient as $\Delta x$ approaches 0.
\begin{sol}
$-0.107526881$, $-0.11074197$, $-0.1110741$, $\ds{-1\over3(3+\Delta x)}\rightarrow {-1\over9}$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
Find an algebraic expression for the difference quotient $\ds \bigl(f(1+\Delta x)-f(1)\bigr)/\Delta x$ when $\ds f(x)=x^2-(1/x)$. Simplify the expression as much as possible. Then determine what happens as $\Delta x$ approaches 0. That value is $f'(1)$.
\begin{sol}
$\ds{3+3\Delta x+\Delta x^2\over1+\Delta x}\rightarrow3$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
Draw the graph of $\ds y=f(x)=x^3$ between $x=0$ and $x=1.5$. Find the slope of the chord between (a) $x=1$ and $x=1.1$, (b) $x=1$ and $x=1.001$, (c) $x=1$ and $x=1.00001$. Then use algebra to find a simple formula for the slope of the chord between $1$ and $1+\Delta x$. (Use the expansion $\ds (A+B)^3=A^3+3A^2B+3AB^2+B^3$.) Determine what happens as $\Delta x$ approaches 0, and in your graph of $\ds y=x^3$ draw the straight line through the point $(1,1)$ whose slope is equal to the value you just found.
\begin{sol} $3.31$, $3.003001$, $3.0000$,\hfill\break $3+3\Delta x+\Delta x^2\rightarrow3$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex}\label{ex:derivative of a line} Find an algebraic expression for the difference quotient $(f(x+\Delta x)-f(x))/\Delta x$ when $f(x)=mx+b$. Simplify the expression as much as possible. Then determine what happens as $\Delta x$ approaches 0. That value is $f'(x)$. \begin{sol} $m$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Sketch the unit circle. Discuss the behavior of the slope of the tangent line at various angles around the circle. Which trigonometric function gives the slope of the tangent line at an angle $\theta$? Why? Hint: think in terms of ratios of sides of triangles. \end{ex} %%%%%%%%%% \begin{ex} Sketch the parabola $\ds y=x^2$. For what values of $x$ on the parabola is the slope of the tangent line positive? Negative? What do you notice about the graph at the point(s) where the sign of the slope changes from positive to negative and vice versa? \end{ex} %%%%%%%%%% \begin{ex} An object is traveling in a straight line so that its position (that is, distance from some fixed point) is given by this table: \begin{table}[!ht] \begin{tabular}{|c|c|c|c|c|} \hline time (seconds)& 0& 1& 2& 3\\ \hline distance (meters)& 0& 10& 25& 60\\ \hline \end{tabular} \end{table} Find the average speed of the object during the following time intervals: $[0,1]$, $[0,2]$, $[0,3]$, $[1,2]$, $[1,3]$, $[2,3]$. If you had to guess the speed at $t=2$ just on the basis of these, what would you guess? \begin{sol} $10$, $25/2$, $20$, $15$, $25$, $35$. \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Let $\ds y=f(t)=t^2$, where $t$ is the time in seconds and $y$ is the distance in meters that an object falls on a certain airless planet. Draw a graph of this function between $t=0$ and $t=3$. Make a table of the average speed of the falling object between (a) 2 sec and 3 sec, (b) 2 sec and 2.1 sec, (c) 2 sec and 2.01 sec, (d) 2 sec and 2.001 sec. Then use algebra to find a simple formula for the average speed between time $2$ and time $2+ \Delta t$. (If you substitute $\Delta t=1,\>0.1,\>0.01,\>0.001$ in this formula you should again get the answers to parts (a)--(d).) Next, in your formula for average speed (which should be in simplified form) determine what happens as $\Delta t$ approaches zero. This is the instantaneous speed. Finally, in your graph of $\ds y=t^2$ draw the straight line through the point $(2,4)$ whose slope is the instantaneous velocity you just computed; it should of course be the tangent line. \begin{sol} $5$, $4.1$, $4.01$, $4.001$, $4+\Delta t\rightarrow 4$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} If an object is dropped from an 80-meter high window, its height $y$ above the ground at time $t$ seconds is given by the formula $\ds y=f(t)=80-4.9t^2$. (Here we are neglecting air resistance; the graph of this function was shown in figure~\ref{fig:data plot}.) Find the average velocity of the falling object between (a) 1 sec and 1.1 sec, (b) 1 sec and 1.01 sec, (c) 1 sec and 1.001 sec. Now use algebra to find a simple formula for the average velocity of the falling object between 1 sec and $1+\Delta t$ sec. Determine what happens to this average velocity as $\Delta t$ approaches 0. That is the instantaneous velocity at time $t=1$ second (it will be negative, because the object is falling). \begin{sol} $-10.29$, $-9.849$, $-9.8049$, \hfill\break $-9.8-4.9\Delta t\rightarrow -9.8$ \end{sol} \end{ex} \end{enumialphparenastyle}
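The arithmetic in the velocity discussion above is easy to check by machine. The short Python snippet below is an addition of ours, not part of the text; it assumes any recent Python 3 interpreter, uses the same height function $h(t)=100-4.9t^2$, and recomputes the average speeds of the falling ball over shrinking intervals around $t=2$, which approach $19.6$ meters per second.
\begin{verbatim}
# height of the dropped ball (meters), t seconds after release
h = lambda t: 100 - 4.9*t**2

for dt in [1, 0.5, 0.01, 0.001]:
    avg_speed = -(h(2 + dt) - h(2)) / dt   # magnitude of the average rate of change
    print(dt, avg_speed)                   # prints roughly 24.5, 22.05, 19.649, 19.6049
\end{verbatim}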
{ "alphanum_fraction": 0.677962963, "avg_line_length": 44.8132780083, "ext": "tex", "hexsha": "e4049f908c4a776891ea741548187465d3b82958", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_path": "4-derivatives/4-2-limits-velocity.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_path": "4-derivatives/4-2-limits-velocity.tex", "max_line_length": 211, "max_stars_count": null, "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_path": "4-derivatives/4-2-limits-velocity.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3485, "size": 10800 }
\documentclass[a4paper,11pt]{article}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{csquotes}
\usepackage{float}
\usepackage{graphicx,subfig}
\usepackage{amssymb,amsmath}
%\usepackage{siunitx}
\usepackage[nodayofweek]{datetime}
\usepackage[top=3.5cm,bottom=2.5cm,left=3cm,right=3cm,headheight=30pt]{geometry}
\usepackage[style=numeric,backend=biber]{biblatex}
\bibliography{refs}
\usepackage{fancyhdr}
\pagestyle{fancy}
\usepackage{lastpage}
\usepackage{parskip}
\setlength{\parskip}{.5em}
\setlength{\parindent}{1em}
\usepackage[colorlinks=true,allcolors=blue]{hyperref}
\hypersetup{
	pdfauthor={Michaël Defferrard, Soroosh Shafiee},
	pdftitle={Incremental Gradient Methods},
	pdfsubject={Project proposal}
}
\lhead{Advanced Topics in Data Sciences\\ Project proposal}
\chead{\hspace{2cm}EPFL\\ \hspace{2cm}\shortdate\today}
\rhead{Michaël \textsc{Defferrard}\\ Soroosh \textsc{Shafiee}}
\cfoot{}
\newcommand{\R}{\mathbb{R}}
\newcommand{\eqnref}[1]{(\ref{eqn:#1})}

\begin{document}

\begin{center}
	\Large{\textbf{\textsc{Incremental Gradient Methods}}}
\end{center}

This project is intended as a way for us to better understand and tinker with recent advances in Stochastic Gradient Descent algorithms, specifically some of the newest Incremental Gradient Methods such as SAG \cite{schmidt_minimizing_2013}, SVRG \cite{johnson_accelerating_2013} and SAGA \cite{defazio_saga_2014}. This class of algorithms has been developed to solve problems of the form
\begin{equation} \label{eqn:problem}
	\min_{x \in \R^d} \frac{1}{n} \sum_{i=1}^n f_i(x) + h(x),
\end{equation}
where each $f_i$ is convex and has Lipschitz continuous derivatives with constant $L$ or is strongly convex with constant $\mu$; and $h$ is a convex but potentially non-differentiable function (its proximal operator is, however, easy to compute). While computing the full gradient would be prohibitive due to large $d$ and $n$, these iterative stochastic algorithms reduce the computational cost of optimization by only computing the gradient of a subset of the functions $f_i$ at each step.

Many machine learning problems can be cast in \eqnref{problem}, such as (constrained) Least-Squares or Logistic Regression with $\ell_1$ or $\ell_2$ regularization; where $x$ would represent the model parameters, $f_i$ the data fidelity term applied to a particular sample $i$, and $h$ a regularization or indicator function of a convex set. As such, these methods are of use in our respective domains of expertise: Signal Processing on Graphs and Risk Analytics.

With the general setting in mind, we identify four directions relevant to our research in which we could contribute:
\begin{enumerate}
	\setlength{\itemsep}{0pt}
	\setlength{\parskip}{0pt}
	\item Play with the trade-off between the computational efficiency of SAGA and the memory efficiency of SVRG, especially relevant when working with large datasets, e.g. for $n > 10^6$, which is not uncommon in these days of Big Data. A first approach to compromise on the memory requirement of SAGA would be to store averaged gradients over mini-batches instead of the full gradient matrix. This task will involve the implementation and empirical testing of the devised scheme. A novel proof of convergence can be envisioned. This work is related to \cite{nitanda2014stochastic}.
	\item A distributed implementation of one of those algorithms.
This would be useful to reduce the wall-clock time needed to solve a given problem or to solve large-scale optimizations where the memory of one computer is not sufficient anymore. This goal will require the analysis of the inter-node communication cost as well as the design of a merging or synchronization scheme. Novel proofs of convergence could be required. It could be inspired by \cite{bianchi2014coordinate}.
	\item Explore the application of these algorithms to minimax problems, which aim at finding saddle points \cite{nemirovski2009robust}. The min-max formulation appears in the context of zero-sum games and robust optimization. Traditionally, robust optimization problems focus on converting the minimax problem to a minimization problem by leveraging duality theory. Instead, we aim to find the saddle points using incremental methods.
	\item Use these methods to fit statistical models. In particular, we are interested in fitting a Gaussian Mixture Model (GMM) viewed as a manifold optimization problem. Our goal would be to adapt one of the incremental methods to fit GMMs \cite{reshad_matrix_2015}.
\end{enumerate}

We do not expect to complete all of the above objectives. We plan to discuss with experts in the domain\footnote{Such as the first author of \cite{reshad_matrix_2015}, whom Soroosh met during his master studies. Or someone from the EPFL LIONS lab.} and will then choose two of them to focus on.

\paragraph{Roles.} Each of us will pursue one of the mentioned goals from beginning to end, which includes any necessary theory, implementation, testing, writing and presentation. Our work (code, report and presentation) will be tracked by \textit{git}, such that individual contributions can easily be spotted.

\paragraph{Milestones.} Following are the milestones we envision for the completion of the aforementioned project.
\begin{itemize}
	\setlength{\itemsep}{0pt}
	\setlength{\parskip}{0pt}
	\item 2016-03-24 Proposal submitted.
	\item 2016-04-01 Proposal approved.
	\item 2016-04-08 Two directions chosen.
	\item 2016-04-22 Problems stated and solutions formulated.
	\item 2016-05-06 Solutions implemented (Jupyter notebooks, Python).
	\item 2016-05-20 Tested on real or synthetic data.
	\item 2016-05-27 Report written.
	\item 2016-06-03 Project presented.
\end{itemize}

\printbibliography

\end{document}
{ "alphanum_fraction": 0.7872923446, "avg_line_length": 50.7739130435, "ext": "tex", "hexsha": "adc9ec094f374cba620cc7e0f0261bbc567e02d3", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2021-07-16T03:14:04.000Z", "max_forks_repo_forks_event_min_datetime": "2016-09-07T16:49:55.000Z", "max_forks_repo_head_hexsha": "040c262aaabfa6eacf2bd67fb0a05d95feed83a9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mdeff/saga", "max_forks_repo_path": "proposal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "040c262aaabfa6eacf2bd67fb0a05d95feed83a9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mdeff/saga", "max_issues_repo_path": "proposal.tex", "max_line_length": 96, "max_stars_count": 7, "max_stars_repo_head_hexsha": "040c262aaabfa6eacf2bd67fb0a05d95feed83a9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mdeff/saga", "max_stars_repo_path": "proposal.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-25T16:48:18.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-07T10:41:34.000Z", "num_tokens": 1578, "size": 5839 }
% \documentclass[11pt]{article}
% \usepackage[utf8]{inputenc}
% \usepackage{mathtools}
% \usepackage{amsmath}
% \usepackage{amsfonts}
% \usepackage{enumerate}
% % For proper referencing in article
% \usepackage{hyperref}
% \usepackage{url}
% % For figures and graphics'n stuff
% \usepackage{graphicx}
% \usepackage{caption}
% \usepackage{subcaption}
% % \usepackage{tabularx}
% \usepackage{float}
% % For proper appendices
% \usepackage[toc,page]{appendix}
% % Algorithm packages
% \usepackage{algorithm}
% \usepackage{algorithmicx}
% \usepackage{algpseudocode}
% % For bold math symbols
% \usepackage{bm}
% \usepackage{xcolor}
% % For customized hlines in tables
% \usepackage{ctable}
% % For having latex symbols in section titles
% \usepackage{epstopdf}
% % For proper citations
% % \usepackage[round, authoryear]{natbib}
% \usepackage[numbers]{natbib}
% % For fixing large table height
% \usepackage{a4wide}
% % Remembrance and checking
% \newcommand{\husk}[1]{\color{red} #1 \color{black}}
% \newcommand{\sjekk}[1]{\color{violet} #1 \color{black}}
% \DeclareMathOperator{\sign}{sign}
% \DeclareMathOperator*{\argmin}{argmin}
% \DeclareMathOperator*{\CO}{\mathcal{C}}
% % \title{FYS-STK4155: Project 2}
% \title{Exploring the hyperspace of Machine Learning parameters}
% \author{Eirik Ramsli Hauge, Joakim Kalsnes, Hans Mathias Mamen Vege}
% \date{\today}
% \begin{document}

\section{Conclusion}
We found that although both linear and logistic regression can be used, correctly tuned neural networks give better scores. In our experiments, this is evident from the increase in the R2 and accuracy scores when comparing linear and logistic regression to their neural network counterparts, respectively. For our linear regression, we found results similar to previous experiments, while our results for logistic regression differed from those of Mehta et al. The reason for this difference is left to future experiments to decipher, but as it stands, the implemented logistic regression was the superior method.

For our neural network, the optimal parameters were a learning rate of $\eta = 10^{-2}$, $\lambda = 10^{-2}$, 20 neurons, 500 epochs and a mini-batch size of 30. It was also evident that a neural network with $\tanh$ as the activation function performed best in our case.

% \end{document}
{ "alphanum_fraction": 0.7537602063, "avg_line_length": 37.5322580645, "ext": "tex", "hexsha": "485260060694793b778b909e77e5261e86148f2e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hmvege/FYSSTK4155-Project2", "max_forks_repo_path": "doc/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hmvege/FYSSTK4155-Project2", "max_issues_repo_path": "doc/conclusion.tex", "max_line_length": 894, "max_stars_count": null, "max_stars_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hmvege/FYSSTK4155-Project2", "max_stars_repo_path": "doc/conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 639, "size": 2327 }
\documentclass{amsart} \usepackage{amsmath,amsthm,amsfonts,amssymb} \usepackage{mathrsfs} \usepackage{stmaryrd} \usepackage{mathtools} \usepackage{subfiles} \usepackage[sans]{dsfont} \usepackage{enumerate} \usepackage{fullpage} \usepackage{tikz} % \usepackage{marginnote} \usepackage{float} \usepackage{cancel} \usepackage{xcolor} \usepackage{setspace} \usepackage{calligra} \usepackage{MnSymbol} % \usepackage{setspace} \usepackage{hyperref} % \usepackage{showkeys} % \usepackage{lastpage} % \usepackage{fancyhdr} \usepackage[all]{xy} \usepackage[capitalize]{cleveref} \usetikzlibrary{shapes.geometric} \usetikzlibrary{calc} \newtheorem*{clm*}{Claim} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{fact}[thm]{Fact} \newtheorem{conj}[thm]{Conjecture} \newtheorem{clm}[thm]{Claim} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{ex}[thm]{Example} \newtheorem{xca}[thm]{Exercise} \newtheorem{axiom}[thm]{Axiom} \theoremstyle{remark} \newtheorem{rmk}[thm]{Remark} \newtheorem{setup}[thm]{Setup} \newtheorem{cond}[thm]{Condition} \newtheorem{cons}[thm]{Construction} \newtheorem{obs}[thm]{Observation} \newtheorem{ques}[thm]{Question} \numberwithin{equation}{section} \DeclareMathOperator{\aim}{Im} \DeclareMathOperator{\aut}{Aut} \DeclareMathOperator{\aend}{End} \DeclareMathOperator{\ahom}{Hom} \DeclareMathOperator{\aker}{Ker} \DeclareMathOperator{\ann}{Ann} \DeclareMathOperator{\ass}{Ass} \DeclareMathOperator{\chr}{char} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\coker}{Coker} \DeclareMathOperator{\colim}{Colim} \DeclareMathOperator{\depth}{depth} \DeclareMathOperator{\der}{Der} \DeclareMathOperator{\ext}{Ext} \DeclareMathOperator{\fr}{Frac} \DeclareMathOperator{\gr}{gr} \DeclareMathOperator{\hght}{ht} \DeclareMathOperator{\id}{id} \DeclareMathOperator{\limit}{Lim} \DeclareMathOperator{\mor}{Mor} \DeclareMathOperator{\op}{op} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\proj}{Proj} \DeclareMathOperator{\pd}{pd} \DeclareMathOperator{\red}{red} \DeclareMathOperator{\rk}{rank} \DeclareMathOperator{\SHom}{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} \DeclareMathOperator{\SKer}{\mathscr{K}\text{\kern -3pt {\calligra\large er}}\,} \DeclareMathOperator{\SDer}{\mathscr{D}\text{\kern -2pt {\calligra\large er}}\,} \DeclareMathOperator{\SEnd}{\mathscr{E}\text{\kern -3pt {\calligra\large nd}}\,} \DeclareMathOperator{\SSym}{\mathscr{S}\text{\kern -3pt {\calligra\large ym}}\,} \DeclareMathOperator{\soc}{soc} \DeclareMathOperator{\spec}{Spec} \DeclareMathOperator{\supp}{Supp} \DeclareMathOperator{\res}{res} \DeclareMathOperator{\tor}{Tor} \DeclareMathOperator{\upth}{th} \newcommand{\mbb}[1]{\mathbb{#1}} \newcommand{\bA}{\mbb{A}} \newcommand{\bH}{\mbb{H}} \newcommand{\bP}{\mbb{P}} \newcommand{\bF}{\mbb{F}} \newcommand{\CC}{\mbb{C}} \newcommand{\QQ}{\mbb{Q}} \newcommand{\NN}{\mbb{N}} \newcommand{\RR}{\mbb{R}} \newcommand{\ZZ}{\mbb{Z}} % mathscr - msc \newcommand{\msc}[1]{\mathscr{#1}} \newcommand{\sA}{\msc{A}} \newcommand{\sB}{\msc{B}} \newcommand{\sC}{\msc{C}} \newcommand{\sD}{\msc{D}} \newcommand{\sE}{\msc{E}} \newcommand{\sF}{\msc{F}} \newcommand{\sG}{\msc{G}} \newcommand{\sH}{\msc{H}} \newcommand{\sI}{\msc{I}} \newcommand{\sJ}{\msc{J}} \newcommand{\sL}{\msc{L}} \newcommand{\sM}{\msc{M}} \newcommand{\sN}{\msc{N}} \newcommand{\sT}{\msc{T}} \newcommand{\sX}{\msc{X}} % mathfrak - mf \newcommand{\mf}[1]{\mathfrak{#1}} \newcommand{\fb}{\mf{b}} \newcommand{\fg}{\mf{g}} 
\newcommand{\fl}{\mf{l}}
\newcommand{\fm}{\mf{m}}
\newcommand{\fp}{\mf{p}}
\newcommand{\fgl}{\mf{gl}}

% mathrm - mrm
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\dd}{\mrm{d}}
\newcommand{\ee}{\mrm{e}}
\newcommand{\ii}{\mrm{i}}
\newcommand{\mF}{\mrm{F}}
\newcommand{\mG}{\mrm{G}}
\newcommand{\mH}{\mrm{H}}
\newcommand{\mN}{\mrm{N}}
\newcommand{\mT}{\mrm{T}}

% mathcal - mcal
\newcommand{\mcal}[1]{\mathcal{#1}}
\newcommand{\cA}{\mcal{A}}
\newcommand{\cB}{\mcal{B}}
\newcommand{\cC}{\mcal{C}}
\newcommand{\cL}{\mcal{L}}
\newcommand{\cN}{\mcal{N}}
\newcommand{\OO}{\mcal{O}}
\newcommand{\cR}{\mcal{R}}

\newcommand{\dps}{\displaystyle}
\newcommand{\seq}[3][1]{#2_{#1},\ldots,#2_{#3}}
\newcommand{\rdf}{\mathds{R}}
\newcommand{\ldf}{\mathds{L}}

% Need package mathtools
\DeclarePairedDelimiter\ceil{\lceil}{\rceil}
\DeclarePairedDelimiter\floor{\lfloor}{\rfloor}
\DeclarePairedDelimiter\norm{\lVert}{\rVert}
\DeclarePairedDelimiter\dba{\llangle}{\rrangle}
\DeclarePairedDelimiter\ps{\llbracket}{\rrbracket}
\DeclarePairedDelimiter\set{\{}{\}}
\DeclarePairedDelimiter\abs{\lvert}{\rvert}

% \noindent
\setlength{\parindent}{0pt}
\setlength{\parskip}{0.5\baselineskip}

% Blank box placeholder for figures (to avoid requiring any
% particular graphics capabilities for printing this document).
\newcommand{\blankbox}[2]{%
	\parbox{\columnwidth}{\centering
		% Set fboxsep to 0 so that the actual size of the box will match the
		% given measurements more closely.
		\setlength{\fboxsep}{0pt}%
		\fbox{\raisebox{0pt}[#2]{\hspace{#1}}}%
	}%
}

%% Deal with eqref and hyperref
\makeatletter
\renewcommand*{\eqref}[1]{%
	\hyperref[{#1}]{\textup{\tagform@{\ref*{#1}}}}%
}
\makeatother

% comment color
\definecolor{yiwang}{rgb}{1.0,0.078,0.5742}

% hyperref setup
\hypersetup{
	colorlinks = true,
	% linkcolor = red,
	% anchorcolor = black,
	% citecolor = green,
	% filecolor = cyan,
	% menucolor = red,
	% runcolor = cyan,
	% urlcolor = magenta,
	allcolors = yiwang
}

\newcommand{\gal}{\mathsf{Gal}}
\newcommand{\gl}{\mathsf{GL}}
\newcommand{\un}{\mathsf{un}}
\newcommand{\dR}{\mathsf{dR}}
\newcommand{\et}{\mathsf{\acute{e}t}}
\newcommand{\cris}{\mathsf{cris}}
\newcommand{\HT}{\mathsf{HT}}

\begin{document}

\title{$GL_3$ case of Homogeneity result}
\author{Yiwang Chen}
\thanks{Compiled on \today}
\maketitle

This is just an attempt at notes for Karol's class.

\section{Summary}
This course is essentially about the (mod $p$) representations of $p$-adic groups. The course will focus on the representation theory of $p$-adic groups like $GL_n(\QQ_p)$.\\
Motivation: they show up in the Langlands program.

Warm up: quadratic reciprocity. Suppose that $p$ and $l$ are two primes, and $p \equiv l \equiv 1 \mod 4$. Then we have that
\[x^2 \equiv p \mod l \text{ has a solution} \iff x^2 \equiv l \mod p \text{ has a solution}\]
(often written $(p/l)=(l/p)$ in terms of Legendre symbols). This was proved by Gauss.

Modern viewpoint: class field theory (Takagi, Artin, 1920s).

Facts (specialized to our situation):
\begin{itemize}
	\item By choice of $p$, we have that $\QQ \subset \QQ(\sqrt{p})\subset \QQ(\zeta_p)$.
	\item Inclusions induce $\rho$:
	\[\gal(\QQ(\zeta_p)/\QQ) \to \gal(\QQ(\sqrt{p})/\QQ)\simeq \{\pm 1\}\subset \CC^{\times}\]
	\item $\forall q \neq p$ prime, $\exists$ an element $\mathrm{Frob}_{q} \in \gal(\QQ(\zeta_p)/\QQ)$ such that $\mathrm{Frob}_{q}(x)=x^{q}$ on ``$\overline{\bF_q}$''.
	\item There is an isomorphism $\alpha:(\ZZ/p\ZZ)^{\times} \to \gal(\QQ(\zeta_p)/\QQ)$ taking $a \mapsto (\zeta_p \mapsto \zeta_p^a)$, so it takes $q \bmod p \mapsto \mathrm{Frob}_{q}$ for every prime $q \neq p$.
\end{itemize}

\end{document}
{ "alphanum_fraction": 0.7039574885, "avg_line_length": 27.61003861, "ext": "tex", "hexsha": "12b4da3955201fcb870e5dce64fd8f96974025b7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "80ba7f9b24fa6a666d2d7f0f4b4c41a9aa1822c8", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "yiwchen/yiwchen.github.io", "max_forks_repo_path": "Note/Karol'sclassnote/repofpadic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "80ba7f9b24fa6a666d2d7f0f4b4c41a9aa1822c8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "yiwchen/yiwchen.github.io", "max_issues_repo_path": "Note/Karol'sclassnote/repofpadic.tex", "max_line_length": 189, "max_stars_count": null, "max_stars_repo_head_hexsha": "80ba7f9b24fa6a666d2d7f0f4b4c41a9aa1822c8", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "yiwchen/yiwchen.github.io", "max_stars_repo_path": "Note/Karol'sclassnote/repofpadic.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2621, "size": 7151 }
\documentclass[xcolor=dvipsnames]{beamer} % Class options include: notes, notesonly, handout, trans, % hidesubsections, shadesubsections, % inrow, blue, red, grey, brown % Theme for beamer presentation. \usetheme{Susan} \usepackage{graphics} \usepackage{multicol} \usepackage{url} % double-hyphen command to make -- render as separate dashes instead % of as an m-dash \renewcommand{\dh}{{-}{-}} \section*{Mercurial (hg)} \begin{document} \begin{frame} \begin{center}{\Huge Mercurial (hg)} \end{center} \end{frame} \begin{frame} \frametitle{Introduction} Learning Goal \begin{enumerate} \item Explain when and why you should use version control \end{enumerate} \end{frame} \begin{frame} \begin{columns} \column{0.6\textwidth} \resizebox{!}{\textheight}{\includegraphics{img/phd101212s.png}} \column{0.4\textwidth} "Piled Higher and Deeper" by Jorge Cham, http://www.phdcomics.com \end{columns} \end{frame} \begin{frame}[label=part1] \frametitle{A Better Kind of Backup - Part 1} %Learning Goals \begin{enumerate} \item Explain which initialization and configuration steps are required once per machine, and which are required once per repository. \item Add files to Mercurial's collection of tracked files. \item Go through the modify-commit cycle for single and multiple files and explain where information is stored before and after the commit. \item Identify and use Mercurial revision numbers and changeset identifiers. \item Compare files with previous version of themselves. \end{enumerate} \begin{multicols}{2} \begin{itemize} \item Mercurial.ini (windows) \item ~/.hgrc (Linux/Mac) \item mkdir planets \item cd planets \item hg init \item ls -a \item hg verify \item nano mars.txt \item hg status \item hg add mars.txt \item hg commit -m ``Starting...'' \item hg log \item hg diff \end{itemize} \end{multicols} \end{frame} \begin{frame}[fragile] \frametitle{Mercurial.ini for Windows} Create a new file called \%USERPROFILE\%\textbackslash Mercurial.ini (that's spelled \$USERPROFILE/Mercurial.ini if you are in gitbash) \begin{verbatim} [ui] username = Vlad Dracula <[email protected]> editor = nano [extensions] color = [color] mode = win32 \end{verbatim} \end{frame} \againframe{part1} \begin{frame}[label=part2] \frametitle{A Better Kind of Backup - Part 2} \begin{enumerate} \setcounter{enumi}{4} \item Compare files with old versions of themselves. \item Restore old versions of files. \item Configure Mercurial to ignore specific files, and explain why it is sometimes useful to do so. \end{enumerate} \begin{multicols}{2} \begin{itemize} \item hg diff \dh rev 1:2 mars.txt \item hg diff -r 0:2 mars.txt \item hg diff \dh change 1 \item hg revert mars.txt \item hg revert \dh rev 0 mars.txt \item hg status \item mkdir results \item touch a.dat b.dat c.dat results/a.out results/b.out \item hg status \item nano .hgignore \item hg status \dh ignored \end{itemize} \end{multicols} \end{frame} \begin{frame}[fragile] \frametitle{.hgignore} \begin{verbatim} syntax: glob *.dat results/ \end{verbatim} \end{frame} \againframe{part2} \begin{frame} \frametitle{Exercise} Create a new Mercurial repository on your computer called bio. Write a three-line biography for yourself in a file called me.txt, commit your changes, then modify one line and add a fourth and display the differences between its updated state and its original state. \end{frame} \begin{frame}[label=Collaborating] \frametitle{Collaborating} \begin{enumerate} \item Explain what remote repositories are and why they are useful. 
\item Explain what happens when a remote repository is cloned. \item Explain what happens when changes are pushed to or pulled from a remote repository. \end{enumerate} \begin{multicols}{2} \begin{itemize} \item hg paths \item hg push \item hg pull \item hg clone \item hg log \dh graph \item hg update \end{itemize} \end{multicols} \end{frame} \begin{frame}[fragile] We're going to explore collaborating via a remote repository clone on Bitbucket by pretending that we are going back and forth between our home and work computers. We'll simulate that by creating a directory for each location and moving our {\tt planets/} repository into the work computer directory. \begin{verbatim} $ cd $ cd Desktop/swc/ $ mkdir home-pc work-pc $ mv planets/ work-pc/ \end{verbatim} These could just as easily be directories on our own and our supervisor's computer, or on the computers of a group of collaborators spread around the world. \end{frame} \againframe{Collaborating} \begin{frame} \frametitle{Conflicts and Merging} \begin{enumerate} \item Explain what conflicts are and when they can occur. \item Resolve conflicts resulting from a merge. \end{enumerate} \begin{multicols}{2} \begin{itemize} \item hg heads \item hg log -G \item hg merge \dh tool=kdiff3 \item hg summary \end{itemize} \end{multicols} \end{frame} \begin{frame} \frametitle{Open Science} \begin{enumerate} \item Explain how the GNU Public License (GPL) differs from most other open licenses. \item Explain the four kinds of restrictions that can be combined in a Creative Commons license. \item Correctly add licensing and citation information to a project repository. \item Outline options for hosting code and data and the pros and cons of each. \end{enumerate} \end{frame} \end{document}
{ "alphanum_fraction": 0.7534831878, "avg_line_length": 26.915, "ext": "tex", "hexsha": "6876d9901f15d421fd54138d4f5045ff665a1351", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ac050abf98031e8fd17352e6dff3411836a8fee7", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "douglatornell/2014-09-25-ubc", "max_forks_repo_path": "teaching_notes/hg_notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ac050abf98031e8fd17352e6dff3411836a8fee7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "douglatornell/2014-09-25-ubc", "max_issues_repo_path": "teaching_notes/hg_notes.tex", "max_line_length": 266, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ac050abf98031e8fd17352e6dff3411836a8fee7", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "douglatornell/2014-09-25-ubc", "max_stars_repo_path": "teaching_notes/hg_notes.tex", "max_stars_repo_stars_event_max_datetime": "2018-08-03T21:12:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-03T21:12:58.000Z", "num_tokens": 1488, "size": 5383 }
% Template for PLoS % Version 3.5 March 2018 % % % % % % % % % % % % % % % % % % % % % % % % % -- IMPORTANT NOTE % % This template contains comments intended % to minimize problems and delays during our production % process. Please follow the template instructions % whenever possible. % % % % % % % % % % % % % % % % % % % % % % % % % % Once your paper is accepted for publication, % PLEASE REMOVE ALL TRACKED CHANGES in this file % and leave only the final text of your manuscript. % PLOS recommends the use of latexdiff to track changes during review, as this will help to maintain a clean tex file. % Visit https://www.ctan.org/pkg/latexdiff?lang=en for info or contact us at [email protected]. % % % There are no restrictions on package use within the LaTeX files except that % no packages listed in the template may be deleted. % % Please do not include colors or graphics in the text. % % The manuscript LaTeX source should be contained within a single file (do not use \input, \externaldocument, or similar commands). % % % % % % % % % % % % % % % % % % % % % % % % % % -- FIGURES AND TABLES % % Please include tables/figure captions directly after the paragraph where they are first cited in the text. % % DO NOT INCLUDE GRAPHICS IN YOUR MANUSCRIPT % - Figures should be uploaded separately from your manuscript file. % - Figures generated using LaTeX should be extracted and removed from the PDF before submission. % - Figures containing multiple panels/subfigures must be combined into one image file before submission. % For figure citations, please use "Fig" instead of "Figure". % See http://journals.plos.org/plosone/s/figures for PLOS figure guidelines. % % Tables should be cell-based and may not contain: % - spacing/line breaks within cells to alter layout or alignment % - do not nest tabular environments (no tabular environments within tabular environments) % - no graphics or colored text (cell background color/shading OK) % See http://journals.plos.org/plosone/s/tables for table guidelines. % % For tables that exceed the width of the text column, use the adjustwidth environment as illustrated in the example table in text below. % % % % % % % % % % % % % % % % % % % % % % % % % % % -- EQUATIONS, MATH SYMBOLS, SUBSCRIPTS, AND SUPERSCRIPTS % % IMPORTANT % Below are a few tips to help format your equations and other special characters according to our specifications. For more tips to help reduce the possibility of formatting errors during conversion, please see our LaTeX guidelines at http://journals.plos.org/plosone/s/latex % % For inline equations, please be sure to include all portions of an equation in the math environment. For example, x$^2$ is incorrect; this should be formatted as $x^2$ (or $\mathrm{x}^2$ if the romanized font is desired). % % Do not include text that is not math in the math environment. For example, CO2 should be written as CO\textsubscript{2} instead of CO$_2$. % % Please add line breaks to long display equations when possible in order to fit size of the column. % % For inline equations, please do not include punctuation (commas, etc) within the math environment unless this is part of the equation. % % When adding superscript or subscripts outside of brackets/braces, please group using {}. For example, change "[U(D,E,\gamma)]^2" to "{[U(D,E,\gamma)]}^2". % % Do not use \cal for caligraphic font. Instead, use \mathcal{} % % % % % % % % % % % % % % % % % % % % % % % % % % % Please contact [email protected] with any questions. 
% % % % % % % % % % % % % % % % % % % % % % % % % \documentclass[10pt,letterpaper]{article} %\usepackage[top=0.85in,left=2.75in,footskip=0.75in]{geometry} % amsmath and amssymb packages, useful for mathematical formulas and symbols \usepackage{amsmath,amssymb} % Use adjustwidth environment to exceed column width (see example table in text) \usepackage{changepage} % Use Unicode characters when possible \usepackage[utf8x]{inputenc} % textcomp package and marvosym package for additional characters \usepackage{textcomp,marvosym} % cite package, to clean up citations in the main text. Do not remove. \usepackage{cite} % Use nameref to cite supporting information files (see Supporting Information section for more info) \usepackage{nameref,hyperref} % line numbers \usepackage[right]{lineno} % ligatures disabled \usepackage{microtype} \DisableLigatures[f]{encoding = *, family = * } % color can be used to apply background shading to table cells only \usepackage[table]{xcolor} % array package and thick rules for tables \usepackage{array} % code-listing package \usepackage{minted} %figure placement package \usepackage{float} %chemical reaction package %\usepackage{mhchem} % create "+" rule type for thick vertical lines \newcolumntype{+}{!{\vrule width 2pt}} % create \thickcline for thick horizontal lines of variable length \newlength\savedwidth \newcommand\thickcline[1]{% \noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}% \cline{#1}% \noalign{\vskip\arrayrulewidth}% \noalign{\global\arrayrulewidth\savedwidth}% } % \thickhline command for thick horizontal lines that span the table \newcommand\thickhline{\noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}% \hline \noalign{\global\arrayrulewidth\savedwidth}} % Remove comment for double spacing %\usepackage{setspace} %\doublespacing % Text layout %\raggedright \setlength{\parindent}{0.5cm} \textwidth 5.25in \textheight 8.75in % Bold the 'Figure #' in the caption and separate it from the title/caption with a period % Captions will be left justified \usepackage[aboveskip=1pt,labelfont=bf,labelsep=period,justification=raggedright,singlelinecheck=off]{caption} \renewcommand{\figurename}{Fig} % Use the PLoS provided BiBTeX style \bibliographystyle{plos2015} % Remove brackets from numbering in List of References \makeatletter \renewcommand{\@biblabel}[1]{\quad#1.} \makeatother % Header and Footer with logo \usepackage{lastpage,fancyhdr,graphicx} \usepackage{epstopdf} %\pagestyle{myheadings} \pagestyle{fancy} \fancyhf{} %\setlength{\headheight}{27.023pt} %\lhead{\includegraphics[width=2.0in]{PLOS-submission.eps}} \rfoot{\thepage/\pageref{LastPage}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrule}{\hrule height 2pt \vspace{2mm}} \fancyheadoffset[L]{2.25in} \fancyfootoffset[L]{2.25in} %\lfoot{\today} %% Include all macros below \newcommand{\lorem}{{\bf LOREM}} \newcommand{\ipsum}{{\bf IPSUM}} %% END MACROS SECTION \begin{document} \vspace*{0.2in} % Title must be 250 characters or less. \begin{flushleft} {\Large \textbf\newline{Parallel scalable simulations of biological neural networks using TensorFlow: A beginner's guide} % Please use "sentence case" for title and headings (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns). }\\ % Insert author names, affiliations and corresponding author email (do not include titles, positions, or degrees). 
Saptarshi Soham Mohanta and Collins Assisi
\bigskip \\
Indian Institute of Science Education and Research, Pune, Maharashtra, India\\
% Use the asterisk to denote corresponding authorship and provide email address in note below.
\bigskip
*[email protected] \\
*[email protected]
\end{flushleft}

% Please keep the abstract below 300 words
\section*{Abstract}
Neuronal networks are often modeled as systems of coupled, nonlinear, ordinary or partial differential equations. The number of differential equations used to model a network increases with the size of the network and the level of detail used to model individual neurons and synapses. As one scales up the size of the simulation, it becomes important to use powerful computing platforms. Many tools exist that solve these equations numerically. However, these tools are often platform-specific. There is a high barrier of entry to developing flexible, general-purpose code that is platform-independent and supports hardware acceleration on modern computing architectures such as GPUs/TPUs and distributed platforms. TensorFlow is a Python-based open-source package initially designed for machine learning algorithms, but it presents a scalable environment for a variety of computations, including solving differential equations using iterative algorithms such as Runge-Kutta methods. In this article, organized as a series of tutorials, we present a simple exposition of numerical methods to solve ordinary differential equations using Python and TensorFlow. It consists of a series of Python notebooks that, over the course of five sessions, will lead novice programmers from writing programs to integrate simple 1-dimensional differential equations using Python, to solving a large system (1000s of differential equations) of coupled conductance-based neurons using a highly parallel and scalable framework. Embedded with the tutorial is a physiologically realistic implementation of a network in the insect olfactory system. This system, consisting of multiple neuron and synapse types, can serve as a template to simulate other networks.

% Please keep the Author Summary between 150 and 200 words
% Use first person. PLOS ONE authors please skip this step.
% Author Summary not valid for PLOS ONE submissions.

\linenumbers

% Use "Eq" instead of "Equation" for equation citations.
\begin{nolinenumbers}
\section*{Motivation}
Information processing in the nervous system spans a number of spatial and temporal scales. Millisecond fluctuations in ionic concentration at a synapse can cascade into long-term (hours to days) changes in the behavior of the organism. Capturing the temporal scales and the details of the dynamics of the brain is a colossal computational endeavour. The dynamics of single and small networks of neurons can easily be simulated on a desktop computer with high-level, readable programming languages like Python. However, large networks of conductance-based neurons are often simulated on clusters of CPUs. More recently, graphical processing units (GPUs) have become increasingly available due to lower (though still prohibitive) costs and through cloud services like Google Cloud and Amazon AWS, among others. Writing code for each of these platforms is non-trivial and requires a considerable investment of time to master different software tools. For example, implementing parallelism in multi-core shared-memory architectures is often achieved using Open Multi-Processing (OpenMP) with C or C++.
Message Passing Interface (MPI) libraries are used to implement code that computes over high-performance computing clusters. Compute Unified Device Architecture (CUDA) allows users to run programs on NVIDIA's GPUs. However, code written for one platform cannot be used on other platforms. This poses a high barrier of entry for neuroscientists conversant with a high-level programming language, attempting to test simulations on different platforms. This is where TensorFlow, an open-source platform for machine learning~\cite{tensorflow2015-whitepaper}, comes in handy. TensorFlow is highly scalable. Code written using TensorFlow functions can work seamlessly on single cores, multi-core shared-memory processors, high-performance computer clusters, GPUs and Tensor processing units (TPUs, a proprietary chip designed by Google to work with TensorFlow). We found (as others have~\cite{tensorflow-api-docs,tfcookbook}) that TensorFlow functions can be used to implement numerical methods to solve ODEs. Doing so gave us a significant speed-up even on a single desktop with a multicore processor compared to similar Python code that did not use TensorFlow functions and operated on a single core (Fig~\ref{fig:comparison}). The code itself was highly readable and could be debugged with ease. Familiarity with the Python programming language and a brief introduction to some TensorFlow functions proved sufficient to write the code. Python is an extremely popular programming language that is used across a number of disciplines and has found a broad user base among biologists \cite{Ekmekci2016,Bassi2007,primer}. We found that introducing a few TensorFlow functions in Python, an easy addition to a familiar language, can bring readers to a point where they can simulate large networks of neurons in a platform-independent manner. Further, by piggybacking on TensorFlow we will also be able to take advantage of an active TensorFlow developer community in addition to a wide range of Python libraries that are already available.

\begin{figure}
\includegraphics[scale=0.7]{Figures/Bench.pdf}
\caption{Comparison of Python and TensorFlow on an 8-core, 2.1 GHz desktop with two Intel Xeon E5-2620 processors}
\label{fig:comparison}
\end{figure}

These tutorials were written to address the needs of a group of undergraduate students in our institute. These students came from diverse backgrounds and had a basic introduction to Python during their first semester. They were interested in working on problems in Computational Neuroscience. Our goal was to introduce them to some of the numerical tools and mathematical models in Neuroscience while also allowing them to tinker with advanced projects that required writing code that ran on distributed systems. We were careful to keep the innards of the code visible \textemdash\ the form of the integrator, the specification of the differential equations and the ability to modify the code to suit their needs were particularly important. Towards the end of the tutorial, which many students managed to devour within a day or two, they were in a position to write code simulating networks of neurons in the antennal lobe \cite{Bazhenov2001} (the insect equivalent of the olfactory bulb in mammals), firing-rate models of grid cells~\cite{Burak2009}, detailed networks of stellate cells and inhibitory interneurons~\cite{Neru2019} and networks with plastic synapses~\cite{Bazhenov2005}.

%~\cite{bib1}
\section*{How to use this Tutorial}
A reader familiar with Python will find this tutorial accessible.
We use a number of Numpy and Matplotlib functions to simulate and display figures. These libraries are well documented with excellent introductory guides. During Day 1 of this tutorial we introduce numerical integration using Python without using any TensorFlow functions. On Day 2 we use TensorFlow functions to implement the integrator. Day 5 talks about memory management in TensorFlow. On Days 3 and 4 we use the code developed on Days 1 and 2 to simulate networks of conductance-based neurons. Readers who are interested in solving differential equations in other domains will find the tutorial on Days 1, 2 and 5 self-contained. Our simulations were run on an 8-core, 2.1 GHz Linux desktop with two Intel Xeon E5-2620 processors and on Google Cloud. We recommend that readers install Python 3.6 or above, Jupyter Notebook, the Numpy Python package~\cite{numpy}, the Matplotlib Python package~\cite{matplotlib}, and TensorFlow 1.13 or above using the Anaconda distribution of Python 3. The tutorials are linked in the supplementary material as Python notebooks (.ipynb files) that can be accessed using Jupyter and as .html files that can be read using any browser.

% For figure citations, please use "Fig" instead of "Figure".
% Place figure captions after the first paragraph in which they are cited.
%\begin{figure}[!h]
%\caption{{\bf Bold the figure title.}
%Figure caption text here, please use this space for the figure panel descriptions instead of using subfigure commands. A: Lorem ipsum dolor sit amet. B: Consectetur adipiscing elit.}
%\label{fig1}
%\end{figure}

\section*{Day 1: Solving ODEs using Python}
In this tutorial we are interested in solving ordinary differential equations (ODEs) of the form
\begin{equation}
\frac{dx}{dt} = f(x, t)
\label{eq:ode}
\end{equation}
where $x$ is an $N$-dimensional vector and $t$ typically stands for time. The function $f(x,t)$ may be a nonlinear function of $x$ that explicitly depends on $t$. In addition to specifying the form of the differential equation, one must also specify the value of $x$ at a particular time point. Say, $x=x_{0}$ at $t = t_0$. It is often not possible to derive a closed-form solution for equation~(\ref{eq:ode}). Therefore, numerical methods to solve these equations are of great importance.

One example of ODEs at the core of our tutorial is the set of Hodgkin-Huxley equations describing the dynamics of action potential generation and propagation in the giant axon of the squid~\cite{Huxley1952}. Alan Hodgkin and Andrew Huxley arrived at these equations after a series of clever experiments that tested the limits of experimental technology available at the time. Their work also tested the limits of the computational tools available. The form of the differential equations they derived contained nonlinearities that made them analytically intractable. In order to compute action potentials, Huxley numerically integrated the equations using a hand-operated Brunsviga mechanical calculator. The calculation took nearly three weeks to complete~\cite{Hodgkin1976}. They used a numerical method (see the numerical methods section in \cite{Hodgkin1976}) to integrate the differential equations. Each iteration that calculated the value of the solution at subsequent time points consisted of 9 steps.
In calculating the solution over time, they also varied the step size such that dynamics occurring over faster time scales (such as the rising phase of the action potential) were calculated with time step sizes of $0.1$--$0.2$~ms, while slower dynamics (such as the small, highly damped oscillation following a spike) were calculated with time steps that were an order of magnitude higher ($1$~ms). In this tutorial, we illustrate two numerical methods to iteratively compute each time step of the solution. The first is a simple one-step method known as Euler's method of integration. The second is another popular method, the Runge-Kutta method of order 4 (abbreviated as RK4), which is more accurate than Euler's method but requires additional computations to calculate the value of the solution at each time step. There are several textbooks on numerical methods. For the integrators described here, we have used~\cite{kreyszig1983}.

\subsection*{Euler's Method for Numerical Integration}
Our goal is to solve (\ref{eq:ode}), that is, to calculate $x(t)$ given an initial condition $x(t_{0})=x_{0}$. The simplest method to solve (\ref{eq:ode}) numerically is Euler's method. Here we start from $x(t=t_{0})=x_{0}$ and compute the solution at subsequent time points ($t_{0}+\epsilon,t_{0}+2\epsilon,t_{0}+3\epsilon \dots $) iteratively. Each step of the computation is done using the same formula, which can be derived by truncating a Taylor series after the first term. That is, the solution at time $t_{0}+\epsilon$ is given by
\begin{equation}
x(t_{0}+\epsilon) = x(t_{0}) + \epsilon\frac{dx}{dt} + \mathcal{O}(\epsilon^2)
\label{eq:euler}
\end{equation}
where $\frac{dx}{dt}=f(x,t)$. The higher-order terms $\mathcal{O}(\epsilon^2)$ are ignored in this approximation.

\begin{figure}[H]
\includegraphics[scale=0.4]{Figures/fig1.pdf}
\caption{\textbf{Euler's method.} The dashed line shows the solution computed by successive iterations of Euler's method. It diverges from the actual solution as errors accumulate over time.}
\label{fig:euler}
\end{figure}

A geometric interpretation of Euler's method is shown in Fig~\ref{fig:euler}. The solution of a particular differential equation is represented by the solid blue line in the figure. The dashed line approximates this solution by iteratively calculating $x$ at different time points using equation~\ref{eq:euler}. If the value of the solution at $t_{n}$ is $x_{n}$, $f(x_{n},t_{n})$ is the slope of the tangent to the solution at $t_{n}$. If $\epsilon$ is sufficiently small, the solution at $t_{n+1}=t_{n}+\epsilon$ can be approximated by linearly extrapolating from $x_{n}$ to $x_{n} + \epsilon f(x_{n},t_{n})$.

\subsection*{Implementation of Euler's Method in Python}
Let $\frac{dx}{dt}=5x$. We wish to calculate $x(t)$ over the interval $t\in[0,2)$ given the initial condition $x(0)=1$. The exact solution of this equation is $x(t) = e^{5t}$. In our implementation of Euler's method, we used the Python library Numpy to create and operate on arrays, and the plotting library Matplotlib to display the results.
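As a quick sanity check, one application of equation (\ref{eq:euler}) to this problem with $\epsilon=0.01$ and $x(0)=1$ gives
\begin{equation*}
x(0.01) \approx x(0) + \epsilon\, f(x(0),0) = 1 + 0.01\times 5\times 1 = 1.05,
\end{equation*}
whereas the exact solution gives $x(0.01)=e^{0.05}\approx 1.0513$. The discrepancy of roughly $0.0013$ after a single step is the truncated $\mathcal{O}(\epsilon^2)$ term; such errors can accumulate over many steps.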
We implement Euler's Method in Python as follows:
\begin{minted}[linenos]{python}
import numpy as np
import matplotlib.pyplot as plt

def f(x,t): # define the function f(x,t)
    return 5*x

epsilon = 0.01 # define timestep
t = np.arange(0,2,epsilon) # define an array for t
x = np.zeros(t.shape) # define an array for x
x[0] = 1 # set initial condition
for i in range(1,t.shape[0]):
    x[i] = epsilon*f(x[i-1],t[i-1])+x[i-1] # Euler Integration Step
\end{minted}

\begin{figure}[H]
\includegraphics[scale=0.7]{Figures/fig2.pdf}
\caption{Comparison of actual solution and approximation by Euler's method}
\label{fig:eulerError}
\end{figure}

The exact solution and the numerical solution are compared in Fig~\ref{fig:eulerError}. Notice that the approximation begins to diverge from the actual solution of the equation as the number of iterations increases. The omission of terms $\mathcal{O}(\epsilon^2)$ leads to a truncation error per step that can accumulate over time.

\subsection*{Implementing Euler's method to solve a system of differential equations}
The implementation above can be easily extended to systems of equations. The initial value problem now becomes:
\begin{eqnarray}\frac{d\vec{X}}{dt} = \vec{f}(\vec{X}, t)\end{eqnarray}
\begin{eqnarray}\vec{X}(t_0) = \vec{X}_0\end{eqnarray}
where $\vec{X}=[X_1,X_2,\ldots]$ and $\vec{f}(\vec{X}, t)=[f_1(\vec{X}, t),f_2(\vec{X}, t),\ldots]$. We rewrite Euler's method as:
\begin{eqnarray}t_{n+1} = t_n + \epsilon \end{eqnarray}
\begin{eqnarray}\vec{X}(t_{n+1}) = \vec{X}(t_{n}) + \epsilon \vec{f}(\vec{X}(t_{n}), t_n)\end{eqnarray}
Let $\frac{d\vec{X}}{dt}=f(\vec{X},t)$. We wish to find $\vec{X}(t)$ over $t\in[0,2)$, given that $\vec{X}(t)=[x,y]$, $\vec{X}(0)=[1,0]$ and $f(\vec{X},t) = [x-y,y-x]$. We implement the modified algorithm as follows:
\begin{minted}[linenos]{python}
def f(X,t): # the function f(X,t) now takes a vector X as input
    x,y = X # the first and the second elements of X are assigned to x and y
    return np.array([x-y,y-x])

t = np.arange(0,2,epsilon) # define an array for t
X = np.zeros((2,t.shape[0])) # initialize an array for X
X[:,0] = [1,0] # set initial condition
for i in range(1,t.shape[0]):
    X[:,i] = epsilon*f(X[:,i-1],t[i-1])+X[:,i-1] # Euler Integration Step
\end{minted}

\subsection*{A generalized code for the Euler method}
Here we rewrite the code in a modular fashion and cast the integrator as a function that takes in three inputs, i.e., the function $\vec{f}(\vec{y},t)$ where $\frac{d\vec{y}}{dt}=f(\vec{y},t)$, the time array, and an initial vector $\vec{y}_{0}$. We will find this form to be particularly useful when we use TensorFlow functions to code the integrator. Further, it allows us to write multiple integrating functions (for example Euler or RK4) within the same class and call a specific integrator as needed. In addition, we also introduce a function to ensure that the correct inputs are given to the integrator, failing which an error message is generated.

\subsubsection*{Algorithm}
\begin{itemize}
\item Get the required inputs: function $\vec{f}(\vec{y},t)$, initial condition vector $\vec{y}_0$ and time series $t$. Entering a time series $t$ allows for greater control over $\epsilon$ as it can now vary for each timestep.
\item Check if the input is of the correct datatype, i.e., floating point.
\item Create a zero matrix to hold the output.
\item For each time step, update $\vec{y}$ using the Euler method with variable $\epsilon$ and store it in the output matrix.
\item Return the output time series [number of equations $\times$ iterations] matrix.
\end{itemize}
\begin{minted}[linenos]{python}
def check_type(y,t): # Ensure Input is Correct
    # check that both arrays hold floating point values
    return np.issubdtype(y.dtype, np.floating) and np.issubdtype(t.dtype, np.floating)

class _Integrator():

    def integrate(self,func,y0,t):
        time_delta_grid = t[1:] - t[:-1]
        y = np.zeros((y0.shape[0],t.shape[0]))
        y[:,0] = y0
        for i in range(time_delta_grid.shape[0]):
            y[:,i+1] = time_delta_grid[i]*func(y[:,i],t[i])+y[:,i]
        return y

def odeint_euler(func,y0,t):
    y0 = np.array(y0)
    t = np.array(t)
    if check_type(y0,t):
        return _Integrator().integrate(func,y0,t)
    else:
        print("error encountered")

solution = odeint_euler(f,[1.,0.],t)
\end{minted}

\subsection*{Runge-Kutta Methods for Numerical Integration}

Euler's method $x_{n+1}=x_n + \epsilon f(x_n,t_n)$ calculates the solution at $t_{n+1}=t_n+\epsilon$ given the solution at $t_n$. In doing so we use the derivative at $t_{n}$, though its value may change throughout the interval $[t_{n},t_{n}+\epsilon]$. This results in an error of the order of $\mathcal{O}(\epsilon^2)$ per step. By calculating the derivatives at intermediate steps, one can reduce the error at each step. Consider the following second order method where the slope is calculated at $t_{n}$ and $t_n+\frac{\epsilon}{2}$.
\begin{eqnarray}k_1=\epsilon f(x_n,t_n)\end{eqnarray}
\begin{eqnarray}k_2=\epsilon f(x_n+\frac{k_1}{2},t_n+\frac{\epsilon}{2})\end{eqnarray}
\begin{eqnarray}x_{n+1}=x_n+k_2+\mathcal{O}(\epsilon^3)\end{eqnarray}
This method is called the second order Runge-Kutta method or the midpoint method.
\begin{figure}
\includegraphics[scale=0.4]{Figures/fig4.pdf}
\caption{Second order Runge-Kutta method}
\label{fig:RK2}
\end{figure}
Figure~\ref{fig:RK2} is a schematic description of the second order Runge-Kutta method. The blue curve denotes a solution of some differential equation. We can reduce errors further by calculating additional derivatives. One of the most commonly used integrators is the fourth-order Runge-Kutta method or RK4 method, which is implemented below:
\begin{eqnarray}k_1=f(x_n,t_n)\end{eqnarray}
\begin{eqnarray}k_2=f(x_n+\epsilon\frac{k_1}{2},t_n+\frac{\epsilon}{2})\end{eqnarray}
\begin{eqnarray}k_3=f(x_n+\epsilon\frac{k_2}{2},t_n+\frac{\epsilon}{2})\end{eqnarray}
\begin{eqnarray}k_4=f(x_n+\epsilon k_3,t_n+\epsilon)\end{eqnarray}
\begin{eqnarray}x_{n+1}=x_n+\frac{\epsilon}{6}(k_1+2 k_2+2 k_3+k_4)+\mathcal{O}(\epsilon^5)\end{eqnarray}
Note that this numerical method is again easily converted to a vector algorithm by simply replacing $x_i$ by the vector $\vec{X_i}$. We will use this method to simulate networks of neurons.

\subsection*{Generalized RK4 Method in Python}

We can now modify the Euler integration code implemented earlier with a generalized function for RK4 that takes three inputs \textemdash\ the function $f(\vec{y},t)$ where $\frac{d\vec{y}}{dt}=f(\vec{y},t)$, the time array, and an initial vector $\vec{y_0}$. The code can be updated as follows,
\begin{minted}[linenos]{python}
# RK4 integration steps replace the Euler update step
k1 = func(y[:,i], t[i])
half_step = t[i] + time_delta_grid[i] / 2
k2 = func(y[:,i] + time_delta_grid[i] * k1 / 2, half_step)
k3 = func(y[:,i] + time_delta_grid[i] * k2 / 2, half_step)
k4 = func(y[:,i] + time_delta_grid[i] * k3, t[i] + time_delta_grid[i])
y[:,i+1] = (k1 + 2 * k2 + 2 * k3 + k4) * (time_delta_grid[i] / 6) + y[:,i]
\end{minted}
As an \textbf{Exercise}, solve the equation of a simple pendulum and observe its dynamics using the Euler and RK4 methods.
Change the time step ($\epsilon$) and compare the resulting solutions. The equation of motion of a simple pendulum is given by:
\begin{eqnarray}\frac{d^2s}{dt^2}=L\frac{d^2\theta}{dt^2}=-g\sin{\theta}\end{eqnarray}
where $L$ is the length of the string and $\theta$ is the angle made with the vertical. To solve this second order differential equation, convert it to a system of first order ODEs using a variable $\omega$ that represents the angular velocity.
\begin{eqnarray}\frac{d\theta}{dt}=\omega \end{eqnarray}
\begin{eqnarray}\frac{d\omega}{dt}=-\frac{g}{L}\sin{\theta} \end{eqnarray}

\section*{Day 2: Let the Tensors Flow!}
\subsection*{An Introduction to TensorFlow}

TensorFlow is an open-source library that was developed by researchers and engineers in the Google Brain team. TensorFlow has a number of functions that make it particularly suitable for machine learning applications. However, it is primarily an interface for numerical computation~\cite{tensorflow2015-whitepaper}. All computations in TensorFlow are specified as directed graphs (nodes connected by arrows) known as data flow graphs. Nodes are operations such as addition, multiplication, etc. The incoming edges for each node are tensors (scalars, vectors, matrices and higher dimensional arrays), the actual values that are operated upon. The output is also a tensor that results from the computation. For example, consider the following computation where two vectors $a$ and $b$ serve as inputs to the node, a matrix multiplication operation, which produces a matrix $c$ as output.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{Figures/fig6.pdf}
\caption{Example of a simple computational graph}
\label{fig:compGraph}
\end{center}
\end{figure}
The following program implements the computation described in Figure(\ref{fig:compGraph}).
\begin{minted}[linenos]{python}
import tensorflow as tf

# Creating nodes in the computation graph
a = tf.constant([[1.],[2.],[3.]], dtype=tf.float64) # a 3x1 column matrix
b = tf.constant([[1.,2.,3.]], dtype=tf.float64) # a 1x3 row matrix
c = tf.matmul(a, b)

# To run the graph, we need to create a session.
# Creating the session initializes the computational device.
sess = tf.Session() # start a session
output = sess.run(c) # compute the value of c
sess.close() # end the session
print(output)

# To automatically close the session after computation, use:
# with tf.Session() as sess:
#     output = sess.run(c)
\end{minted}
By specifying the computation graph we also specify the dependencies among the nodes. One can thus split the graph into smaller chunks or sub-graphs that can be independently computed by different devices that coordinate with each other. This makes it possible to develop programs that are device independent and scalable across CPUs, GPUs, TPUs and clusters of servers.

\subsection*{Efficient recursion with TensorFlow}

Numerical integration is essentially a recursive process over a time array $[t_{0},t_{1},t_{2},\dots t_{n}]$. The rules to update both the Euler and the RK4 integrators can be written as a recursive function $F$ such that $X_{i+1}=F(X_i,t_i,\epsilon_i)$. The solution of the differential equation is the array $[X_0,F(X_0,t_0,\epsilon_0),F(F(X_0,t_0,\epsilon_0),t_1,\epsilon_1)...]$. We use the TensorFlow function \texttt{tf.scan}~\cite{tensorflow-api-docs} to iterate over the time array. The arguments of \texttt{tf.scan} are (i) a recursive function, (ii) the list to iterate over and (iii) the initial value. If the initial value is not specified, \texttt{tf.scan} uses the first element of the list as an initializer.
As an example, consider the following program that calculates the cumulative sum over a list. Every step involves adding an element from the list to the previous total.
\begin{minted}[linenos]{python}
# define the recursive function that takes in two values: the
# accumulated value and the additional input from a list.
def recursive_addition(accumulator,new_element):
    return accumulator+new_element

# define the list over which we iterate
elems = np.array([1, 2, 3, 4, 5, 6])

# tf.scan takes in three inputs: the recursive function, the
# list to iterate over and the initial value. If an initial
# value is not provided, it's taken as the first element of elems.

# accumulate with no initializer
cum_sum_a = tf.scan(recursive_addition, elems)
# accumulate with initializer as the number 5
cum_sum_b = tf.scan(recursive_addition, elems, tf.constant(5,dtype=tf.int64))

with tf.Session() as sess:
    output_a = sess.run(cum_sum_a)
    output_b = sess.run(cum_sum_b)
    print(output_a)
    print(output_b)

# This prints:
# [ 1  3  6 10 15 21]
# [ 6  8 11 15 20 26]
\end{minted}
\textbf{Exercise}: Use \texttt{tf.scan} to compute the Fibonacci sequence.

\subsection*{Euler Integration Function with TensorFlow}

We now implement Euler's method using \texttt{tf.scan} to iterate over the time array. Note that the function \texttt{scan\_func}, which defines each step of Euler's method, is now an input to \texttt{tf.scan}.
\begin{minted}[linenos]{python}
def tf_check_type(t, y0): # Ensure Input is Correct
    if not (y0.dtype.is_floating and t.dtype.is_floating):
        # The datatype of any tensor t is accessed by t.dtype
        raise TypeError('Error in Datatype')

class _Tf_Integrator():

    def integrate(self, func, y0, t):
        time_delta_grid = t[1:] - t[:-1]

        def scan_func(y, t_dt):
            t, dt = t_dt
            dy = dt*func(y,t)
            return y + dy

        # iterating over (a,b) where a and b are lists of same size
        # results in the ith accumulative step in tf.scan receiving
        # the ith elements of a and b zipped together
        y = tf.scan(scan_func, (t[:-1], time_delta_grid),y0)
        return tf.concat([[y0], y], axis=0)

def tf_odeint_euler(func, y0, t):
    # Convert input to TensorFlow Objects
    t = tf.convert_to_tensor(t, preferred_dtype=tf.float64, name='t')
    y0 = tf.convert_to_tensor(y0, name='y0')
    tf_check_type(y0,t)
    return _Tf_Integrator().integrate(func,y0,t)

# Define a function using TensorFlow math operations.
# This creates the computation graph.
def f(X,t):
    # extracting a single value e.g. X[0] returns a single value but
    # we require a tensor, so we extract a range with one element.
    x = X[0:1]
    y = X[1:2]
    out = tf.concat([x-y,y-x],0)
    return out

y0 = tf.constant([1,0], dtype=tf.float64)
epsilon = 0.01
t = np.arange(0,2,epsilon)

# Define the final value (output of scan) that we wish to compute
state = tf_odeint_euler(f,y0,t)

# Start a TF session and evaluate state
with tf.Session() as sess:
    state = sess.run(state)
\end{minted}

\subsection*{RK4 Integration Function with TensorFlow}

Now, we implement the RK4 integrator. Note that here we replace the single-step iterator used for Euler's method with a four-step RK4 iterator. In addition, to make the code more modular, we define a function \texttt{\_step\_func()} that is called by \texttt{scan\_func} and calculates the next step of the RK4 integrator. The rest of the program remains the same as for Euler's method implemented above.
\begin{minted}[linenos]{python}
def integrate(self, func, y0, t):
    time_delta_grid = t[1:] - t[:-1]

    def scan_func(y, t_dt):
        t, dt = t_dt
        dy = self._step_func(func,t,dt,y) # Make code more modular.
        return y + dy

    y = tf.scan(scan_func, (t[:-1], time_delta_grid),y0)
    return tf.concat([[y0], y], axis=0)

def _step_func(self, func, t, dt, y):
    k1 = func(y, t)
    half_step = t + dt / 2
    dt_cast = tf.cast(dt, y.dtype) # Failsafe
    k2 = func(y + dt_cast * k1 / 2, half_step)
    k3 = func(y + dt_cast * k2 / 2, half_step)
    k4 = func(y + dt_cast * k3, t + dt)
    return tf.add_n([k1, 2 * k2, 2 * k3, k4]) * (dt_cast / 6)
\end{minted}
\textbf{Exercise}: Simulate the non-linear Lorenz attractor using the Euler and RK4 methods with TensorFlow. The equations of the Lorenz attractor are given by,
\begin{eqnarray}\frac{dx}{dt}=\sigma(y-x) \end{eqnarray}
\begin{eqnarray}\frac{dy}{dt}=x(\rho-z)-y \end{eqnarray}
\begin{eqnarray}\frac{dz}{dt}=xy-\beta z \end{eqnarray}
Use the values $\sigma =10$, $\beta =\frac{8}{3}$, $\rho =28$. Simulate these equations for similar initial conditions and compare how the trajectories diverge. The solution of the Lorenz equations should resemble Figure~\ref{fig:Lorenz}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{Figures/fig6_op.pdf}
\caption{Phase space plot obtained from integrating the Lorenz equations. Solutions associated with neighboring initial conditions diverge from each other in the chaotic regime given by the parameters specified in the exercise.}
\label{fig:Lorenz}
\end{center}
\end{figure}

\section*{Day 3: Neurons in Silicon}

The electric potential measured across the membranes of excitable cells, such as neurons or heart cells, can undergo transient changes when perturbed by external inputs. When the inputs to a neuron are sufficiently large, these transient changes can regeneratively build up into a large deviation from the resting state known as an action potential. Action potentials propagate undiminished along the axon and perturb post-synaptic neurons. The Hodgkin-Huxley model is a system of differential equations that describes the generation of an action potential and its propagation along the axon. We provide only a brief overview of the Hodgkin-Huxley model. A number of classic references~\cite{Dayan2005, Johnston1995} and the original papers by Hodgkin and Huxley~\cite{Huxley1952} chronicle the history and the details of the model. An excellent set of MOOCs~\cite{gerstnerMOOC, compneuroMOOC} and the accompanying textbooks~\cite{Gerstner2014,Dayan2005} give an accessible introduction to the topic.

\subsection*{What is the Hodgkin-Huxley Neuron Model?}

The cell membrane, a 5nm thick lipid bilayer, separates the inside from the outside of the neuron. The membrane is largely impermeable to the charged ions present on either side. The concentration of Na\textsuperscript{+} ions outside the cell is greater than its concentration inside, while K\textsuperscript{+} ions are relatively abundant inside compared to the outside. In addition to these there are chloride (Cl\textsuperscript{-}), calcium (Ca\textsuperscript{2+}) and magnesium (Mg\textsuperscript{2+}) ions that populate the cellular milieu. The differences in ionic abundances across the membrane cause a net accumulation of positive ions on one side of the membrane and negative ions on the other, and thus a potential difference across the membrane. Embedded in the membrane are ion channels that are highly selective to the ion species they let across. In the squid axon, Hodgkin and Huxley found that there were only two types of ion channels (Na\textsuperscript{+} and K\textsuperscript{+}), in addition to a non-specific leak channel.
The Hodgkin-Huxley model of neurons can be understood with the help of an equivalent electrical circuit, Figure(\ref{fig:HH}). The cell membrane acts as a capacitor. A total injected current ($I$) can be written as the sum of the capacitive current $I_{C}$, the ionic currents $I_{Na}$ and $I_{K}$ and the leak current $I_L$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Figures/fig7.pdf}
\caption{Equivalent circuit of the Hodgkin-Huxley neuron model. Adapted from~\cite{Gerstner2014}}
\label{fig:HH}
\end{center}
\end{figure}
\begin{equation}
I = I_{C}(t) + I_{Na}(t) + I_{K}(t) + I_{L}(t)
\end{equation}
where,
\begin{eqnarray}
C_m = 1\ \mu F/cm^2 \\
I_{Na} = g_{Na}(u-E_{Na})\\
I_{K} = g_{K}(u-E_K)\\
I_{L} = g_{L}(u-E_L)
\end{eqnarray}
The equation describing the membrane potential can thus be written as follows,
\begin{eqnarray}
\label{eq:HH}
C_m\frac{du}{dt}=-I_{Na}(t)-I_{K}(t)-I_{L}(t)+I(t)
\end{eqnarray}
Hodgkin and Huxley discovered that the $Na$ and the $K$ channels do not act as ohmic conductances, but are modulated by the potential across the membrane. Changes in potential had a nonlinear effect on the flow of ionic currents. Based on their experimental results they obtained a system of differential equations that described the temporal evolution of the membrane potential in terms of changes in ionic currents (chiefly Na\textsuperscript{+} and K\textsuperscript{+}).
\begin{eqnarray}\label{d3_2}I_{Na} = g_{Na}m^3h(u-E_{Na})\end{eqnarray}
\begin{eqnarray}\label{d3_3}I_K = g_Kn^4(u-E_K)\end{eqnarray}
\begin{eqnarray}\label{d3_4}I_L = g_L(u-E_L)\end{eqnarray}
where $E_{Na}=50\ mV$, $E_K = -95\ mV$ and $E_L=-55\ mV$ are the reversal potentials; $g_{Na} = 100\ \mu S/cm^2$, $g_K = 10\ \mu S/cm^2$ and $g_L = 0.15\ \mu S/cm^2$ are the channel conductances; and $m$, $h$, and $n$ are gating variables that follow the dynamics given by:
\begin{eqnarray}\label{d3_5}\frac{dm}{dt} = - \frac{1}{\tau_m}(m-m_0)\end{eqnarray}
\begin{eqnarray}\label{d3_6}\frac{dh}{dt} = - \frac{1}{\tau_h}(h-h_0)\end{eqnarray}
\begin{eqnarray}\label{d3_7}\frac{dn}{dt} = - \frac{1}{\tau_n}(n-n_0)\end{eqnarray}
where $\tau_m$, $\tau_h$ and $\tau_n$ are empirically determined voltage dependent time constants and $m_0$, $h_0$ and $n_0$ are voltage dependent asymptotic gating values.

\subsection*{Implementing the Hodgkin-Huxley neuron model}

The variables of the Hodgkin-Huxley neuron model that are updated at each integration time step are the membrane potential $V$, the sodium activation gating variable $m$, the sodium inactivation gating variable $h$, and the potassium channel gating variable $n$. The dynamics are given by Eq~(\ref{d3_5}), Eq~(\ref{d3_6}) and Eq~(\ref{d3_7}). In the following code, we define the parameters associated with the conductances, including the formulae for $\tau_{m}$, $\tau_{h}$, $\tau_{n}$ and the voltage dependent steady state values of the gating variables.
\begin{minted}[linenos]{python}
# Step 1: Defining Parameters of the Neuron
C_m = 1
g_K = 10
E_K = -95

g_Na = 100
E_Na = 50

g_L = 0.15
E_L = -55

# Step 2: Defining functions to calculate tau_x and x_0
# Note: Always use TensorFlow functions for all operations.
def K_prop(V):
    T = 22
    phi = 3.0**((T-36.0)/10)
    V_ = V-(-50)

    alpha_n = 0.02*(15.0 - V_)/(tf.exp((15.0 - V_)/5.0) - 1.0)
    beta_n = 0.5*tf.exp((10.0 - V_)/40.0)

    t_n = 1.0/((alpha_n+beta_n)*phi)
    n_0 = alpha_n/(alpha_n+beta_n)

    return n_0, t_n

def Na_prop(V):
    T = 22
    phi = 3.0**((T-36)/10)
    V_ = V-(-50)

    alpha_m = 0.32*(13.0 - V_)/(tf.exp((13.0 - V_)/4.0) - 1.0)
    beta_m = 0.28*(V_ - 40.0)/(tf.exp((V_ - 40.0)/5.0) - 1.0)

    alpha_h = 0.128*tf.exp((17.0 - V_)/18.0)
    beta_h = 4.0/(tf.exp((40.0 - V_)/5.0) + 1.0)

    t_m = 1.0/((alpha_m+beta_m)*phi)
    t_h = 1.0/((alpha_h+beta_h)*phi)

    m_0 = alpha_m/(alpha_m+beta_m)
    h_0 = alpha_h/(alpha_h+beta_h)

    return m_0, t_m, h_0, t_h

# Step 3: Defining functions that calculate the neuronal currents
def I_K(V, n):
    return g_K * n**4 * (V - E_K)

def I_Na(V, m, h):
    return g_Na * m**3 * h * (V - E_Na)

def I_L(V):
    return g_L * (V - E_L)

# Step 4: Define the function dX/dt where X is the State Vector
def dXdt(X, t):
    V = X[0:1]
    m = X[1:2]
    h = X[2:3]
    n = X[3:4]

    dVdt = (5 - I_Na(V, m, h) - I_K(V, n) - I_L(V)) / C_m
    # Here the current injection I_injected = 5 uA

    m0,tm,h0,th = Na_prop(V)
    n0,tn = K_prop(V)

    dmdt = - (1.0/tm)*(m-m0)
    dhdt = - (1.0/th)*(h-h0)
    dndt = - (1.0/tn)*(n-n0)

    out = tf.concat([dVdt,dmdt,dhdt,dndt],0)
    return out

# Step 5: Define Initial Condition and Integrate
y0 = tf.constant([-71,0,0,0], dtype=tf.float64)

epsilon = 0.01
t = np.arange(0,200,epsilon)

state = odeint(dXdt,y0,t)

with tf.Session() as sess:
    state = sess.run(state)
\end{minted}

\subsection*{Simulating multiple independent Hodgkin-Huxley neurons}

Here we illustrate some simple steps that can be used to simulate populations of neurons efficiently. Key to setting up the equations is to order them in a manner that utilizes TensorFlow's algorithms that distribute vector, matrix and tensor computations over multiple cores. Consider a system of 20 independent HH neurons with different input currents, which determine their firing rates.
\subsubsection*{Methods of Parallelization}
TensorFlow has built-in functions that speed up Tensor computations using available multi-cores and GPU/TPU setups. There are two major parts of the code where such a speed-up can be effected:
\begin{enumerate}
\item \textbf{RK4 iterations:} Our implementation of the integrator utilizes Tensors as inputs.
\item \textbf{Functional Evaluations:} The form of the equations that describe the neuronal dynamics is common across neurons. Only the parameters differ across neurons. This can be used to `vectorize' the equations.
\end{enumerate}
Say $\vec{X}=[V,m,n,h]$ is the state vector of a single neuron and its dynamics are defined using parameters $C_m,g_K,...E_L$ by equations of the form:
\begin{eqnarray}\frac{d\vec{X}}{dt} = [f_1(\vec{X},C_m,g_K,...E_L),f_2(\vec{X},C_m,g_K,...E_L)...f_m(\vec{X},C_m,g_K,...E_L)]\end{eqnarray}
We can convert these equations to a form in which all evaluations are done as vector calculations and NOT scalar calculations. Despite the parameters being different, the functional forms of the equations are similar for the same state variable of different neurons. Thus, the trick is to reorganize $\mathbf{X}$ as $\mathbf{X'}=[(V_1,V_2,...V_n),(m_1,m_2,...m_n),(h_1,h_2,...h_n),(n_1,n_2,...n_n)]=[\vec{V},\vec{m},\vec{h},\vec{n}]$, and the parameters as $[\vec{C_m},\vec{g_K}] = [C_{m_{1}}\dots C_{m_{n}},g_{K_{1}}\dots g_{K_{n}}]$ and so on.
The advantage of this re-ordering is that the differential equation of the form,
\begin{eqnarray}\frac{dV_i}{dt}=f(V_i,m_i,h_i,n_i,C_{m_i},g_{K_i}...)\end{eqnarray}
is now easily parallelizable using a vector computation of the form,
\begin{eqnarray}\frac{d\vec{V}}{dt}=f(\vec{V},\vec{m},\vec{h},\vec{n},\vec{C_m},\vec{g_K}...)\end{eqnarray}
The equations can now be written in the form,
\begin{eqnarray}\frac{d\mathbf{X'}}{dt}= \Big[\frac{d\vec{V}}{dt},\frac{d\vec{m}}{dt},\frac{d\vec{h}}{dt},\frac{d\vec{n}}{dt}\Big]\end{eqnarray}
\subsubsection*{Implementation}
Notice that the functions calculating the gating dynamics and the channel currents are already capable of vector input and output, so we do not need to change these. However, the parameters were not defined as vectors in our earlier implementation.
\begin{minted}[linenos]{python}
n_n = 20 # number of simultaneous neurons to simulate

# parameters will now become n_n-vectors
C_m = [1.0]*n_n
g_K = [10.0]*n_n
E_K = [-95.0]*n_n

g_Na = [100]*n_n
E_Na = [50]*n_n

g_L = [0.15]*n_n
E_L = [-55.0]*n_n

# The state vector definition will change
def dXdt(X, t):
    V = X[:1*n_n]       # First n_n values are Membrane Voltage
    m = X[1*n_n:2*n_n]  # Next n_n values are Sodium Activation Gating
    h = X[2*n_n:3*n_n]  # Next n_n values are Sodium Inactivation Gating
    n = X[3*n_n:]       # Last n_n values are Potassium Gating

    dVdt = (np.linspace(0,10,n_n)-I_Na(V, m, h)-I_K(V, n)-I_L(V))/C_m
    # Input current is linearly varied between 0 and 10

    m0,tm,h0,th = Na_prop(V)
    n0,tn = K_prop(V)

    dmdt = - (1.0/tm)*(m-m0)
    dhdt = - (1.0/th)*(h-h0)
    dndt = - (1.0/tn)*(n-n0)

    out = tf.concat([dVdt,dmdt,dhdt,dndt],0)
    return out

y0 = tf.constant([-71]*n_n+[0,0,0]*n_n, dtype=tf.float64)
\end{minted}
The firing frequency as a function of the input is shown in Figure(\ref{fig:freq}). The code to generate the firing rate is below.
\begin{center}
\begin{figure}
\includegraphics[scale=0.6]{Figures/fig12.pdf}
\caption{Firing frequency of the Hodgkin-Huxley neuron as a function of input amplitude}
\label{fig:freq}
\end{figure}
\end{center}
\begin{minted}[linenos]{python}
# count upward zero crossings in each voltage trace over the
# 200 ms (0.2 s) simulation to estimate the firing rate in Hz
rate = np.bitwise_and(state[:-1,:20]<0,state[1:,:20]>0).sum(axis=0)/0.2
\end{minted}

\section*{Day 4: Neurons and Networks}

In this section we simulate a network of neurons interacting via synapses. Each synapse is defined by its own set of state variables and differential equations governing their temporal evolution. There are different kinds of synapses \textemdash\ electrical and chemical synapses. Electrical synapses are essentially physical conduits that allow the flow of ions across connected neurons. Chemical synapses are more common in the brain and are more complex than electrical synapses. When an action potential arrives at the axon terminal, it leads to the opening of voltage-gated calcium channels. The incoming calcium triggers neurotransmitter-filled vesicles to fuse with the axon terminal membrane and release their cargo into the synaptic cleft. The neurotransmitters diffuse across the cleft and open (or close) ion channels on the post-synaptic neuron. This can cause a depolarization (an increase in potential across the post-synaptic neuron's membrane) that makes it easier for the neuron to spike, or it can inhibit the neuron and have the opposite effect. In some cases these effects are fast and direct \textemdash\ a neurotransmitter binds to a receptor in the post-synaptic site that causes an influx or efflux of ions and leads to a change in the membrane potential.
The effect of a synapse can also be indirect, such that neurotransmitters invoke a second messenger cascade that eventually leads to the opening or closing of ion channels in the post-synaptic neuron. Here we model fast excitatory and inhibitory chemical synapses. The network of interactions between neurons will be described by a connectivity matrix. Different connectivity matrices describe the interactions due to different types of synapses.
\subsubsection*{Modelling Synapses}
The synaptic current ($I_{syn}$) depends on the difference between the reversal potential ($E_{syn}$) and the value of the membrane potential ($u$). The synaptic current due to neurotransmitter release into the synaptic cleft following an action potential is given by,
\begin{equation}
I_{syn}(t)=g_{syn}(t)(u(t)-E_{syn})
\label{eq:syncurr}
\end{equation}
When the transmitter binds to postsynaptic receptors it causes a transient change in the conductance $g_{syn}$. To capture the dynamics of $g_{syn}$, one models the system using a simple kinetic model where the receptors can be either in an open or a closed state~\cite{Destexhe1994}. The transition between the states is proportional to the concentration of the neurotransmitter $[T]$ in the cleft.
\begin{equation}
\mathrm{C}\underset{\beta}{\stackrel{\alpha[T]}{\rightleftharpoons}} \mathrm{O}
\end{equation}
This may be rewritten in the form of a differential equation.
\begin{eqnarray}\label{d4_1}\frac{d[O]}{dt}=\alpha[T](1-[O])-\beta[O]\end{eqnarray}
We can now describe the synaptic conductance $g_{syn}(t)=g_{max}[O]$, in terms of the maximal conductance $g_{max}$ and a gating variable $[O]$, where $[O](t)$ is the fraction of open synaptic channels. $\alpha$ is the binding constant, $\beta$ the unbinding constant and $(1-[O])$ the fraction of closed channels where the neurotransmitter can bind. The functional form of $[T]$ depends on the nature of the synapse. For cholinergic excitatory synapses, $[T]$ is given by,
\begin{eqnarray}
\label{d4_2}
[T]_{ach} = A\ \Theta(t_{max}+t_{fire}+t_{delay}-t)\ \Theta(t-t_{fire}-t_{delay})
\end{eqnarray}
where $\Theta (x)$ is the Heaviside step function, $t_{fire}$ is the time of the last presynaptic spike, $t_{delay}$ is the time delay from the time of the last spike to its effect on the postsynaptic neuron and $t_{max}$ is the duration after the spike during which the transmitter remains in the synaptic cleft. For fast GABAergic inhibitory synapses, we used the following equation,
\begin{eqnarray}\label{d4_3}[T]_{gaba} = \frac{1}{1+e^{-\frac{V(t-t_{fire}-t_{delay})-V_0}{\sigma}}}\end{eqnarray}
Note that in order to solve equation~\ref{d4_1}, we need to determine the time when the presynaptic neuron fired ($t_{fire}$). To account for these synaptic interactions between neurons we need to modify the RK4 integrator developed to simulate multiple independent Hodgkin-Huxley neurons.

\subsection*{Redesigning the Generalized TensorFlow Integrator}

In this section we modify the integrator that we coded on Day 2 to account for interactions between neurons. This will require an additional variable that stores the time elapsed from the last presynaptic spike to calculate equations~\ref{d4_2} and~\ref{d4_3}. In the modified code we will use the TensorFlow function \texttt{tf.where()} to efficiently assign the indices of neurons that have spiked and those that have not spiked at each time point. To understand the usage and function of \texttt{tf.where()}, consider the following example.
Say you have an array \texttt{x} of 10 random numbers between 0 and 1. You want the output of the code to be another array of the same size as \texttt{x} such that the elements of the array are either -10 or 10 depending on whether the corresponding element in \texttt{x} is less than or greater than 0.5. The function \texttt{tf.where(cond,a,b)} outputs an array with elements from \texttt{a} if the condition \texttt{cond} is \texttt{True} and from \texttt{b} if \texttt{cond} is \texttt{False}. See the example code below.
\begin{minted}[linenos]{python}
# create the Tensor with the random variables
x = tf.constant(np.random.uniform(size = (10,)),dtype=tf.float64)
# a list of 10s to select from if true
if_true = tf.constant(10*np.ones((10,)),dtype=tf.float64)
# a list of -10s to select from if false
if_false = tf.constant(-10*np.ones((10,)),dtype=tf.float64)
# perform the conditional masking
selection = tf.where(tf.greater(x,0.5),if_true,if_false)

with tf.Session() as sess:
    x_out = sess.run(x)
    selection_out = sess.run(selection)

# If x_out = [0.13 0.08 0.58 0.17 0.34 0.58 0.97 0.66 0.30 0.29],
# selection_out = [-10. -10.  10. -10. -10.  10.  10.  10. -10. -10.]
\end{minted}
In order to determine whether a particular neuron fired, we introduce a new variable \texttt{fire\_t} that stores the time of the last spike for each neuron. We modify the code as follows:
\begin{enumerate}
\item The Integrator class that we defined earlier now requires two more properties as input, namely, the number of neurons (\texttt{n}) and the firing threshold (\texttt{F\_b}) of each of these neurons. We provide these inputs as arguments to the Integrator class.
\item The state vector will now have an additional set of \texttt{n} variables representing the firing times. These will not be updated by the step function (\texttt{\_step\_func}).
\item Inside the Integrator class, we have access to the values of the state variable and the change in the state variable since the last iteration. We use this to check if the voltages have crossed the firing threshold. The convention followed in this code is that the first \texttt{n} elements of the state vector are the membrane voltages, while the last \texttt{n} elements are the time from the last spike for each of the neurons.
\item The differential update function, i.e.\ \texttt{\_step\_func}, takes all except the last \texttt{n} values of the state variable and updates them according to the differential equations specified in \texttt{func}. The last \texttt{n} variables are updated separately in \texttt{scan\_func}. It checks if any neuron has crossed its firing threshold and updates the variable \texttt{fire\_t} of the appropriate neurons with the current time.
\end{enumerate}
The modifications to the RK4 code implemented earlier are shown below,
\begin{minted}[linenos]{python}
def integrate(self, func, y0, t):
    time_delta_grid = t[1:] - t[:-1]

    def scan_func(y, t_dt):
        # recall the necessary variables
        n_ = self.n_
        F_b = self.F_b

        t, dt = t_dt

        # Differential update
        dy = self._step_func(func,t,dt,y) # Make code more modular.
        dy = tf.cast(dy, dtype=y.dtype) # Failsafe

        out = y + dy # the result after the differential update

        # Use specialized Integrator vs Normal Integrator (n=0)
        if n_>0:
            # Extract the last n variables for fire times
            fire_t = y[-n_:]

            # Change in fire_t if neuron didn't fire = 0
            l = tf.zeros(tf.shape(fire_t),dtype=fire_t.dtype)
            # Change in fire_t if neuron fired = Current - Last Fire
            l_ = t-fire_t

            # Check if previous Voltage is less than Threshold
            z = tf.less(y[:n_],F_b)
            # Check if Voltage is more than Threshold after update
            z_ = tf.greater_equal(out[:n_],F_b)

            df = tf.where(tf.logical_and(z,z_),l_,l)
            fire_t_ = fire_t+df # Update firing time

            return tf.concat([out[:-n_],fire_t_],0)
        else:
            return out

    y = tf.scan(scan_func, (t[:-1], time_delta_grid),y0)
    return tf.concat([[y0], y], axis=0)

def odeint(func, y0, t, n_, F_b):
    t = tf.convert_to_tensor(t, preferred_dtype=tf.float64, name='t')
    y0 = tf.convert_to_tensor(y0, name='y0')
    tf_check_type(y0,t)
    return _Tf_Integrator(n_, F_b).integrate(func,y0,t)
\end{minted}

\subsection*{Implementing a network of Hodgkin-Huxley neurons}

Recall that each Hodgkin-Huxley neuron in a network with $n$ neurons has 4 dynamical variables $V$, $m$, $n$, $h$. Each of these variables is represented as an $n$-dimensional vector. Now we need to add some more state variables representing each synapse. The neuron receives excitatory and inhibitory inputs that are introduced as additional synaptic currents $I_{ach}$ and $I_{GABA}$. Equation~\ref{eq:HH} now reads,
\begin{eqnarray}C_m\frac{dV}{dt} = I_{injected} - I_{Na} - I_K - I_L - I_{ach} - I_{gaba}\end{eqnarray}
For each synapse, we have Eq~(\ref{d4_1}), Eq~(\ref{d4_2}) and:
\begin{eqnarray}
\frac{d[O]_{ach/gaba}}{dt} = \alpha (1-[O]_{ach/gaba})[T]_{ach/gaba}-\beta[O]_{ach/gaba}
\label{eq:synO}
\end{eqnarray}
\begin{eqnarray}
I_{ach/gaba}(t)=g_{max}[O]_{ach/gaba}(V-E_{ach/gaba})
\label{eq:Isyn}
\end{eqnarray}

\subsection*{Synaptic Memory Management}

In a network with $n$ neurons, there are at most $n^2$ synapses of each type. The actual number may be much smaller. The dynamics of each synapse is given by equation~(\ref{eq:synO}). To illustrate the details of the implementation, consider the following three neuron network. Let $X_1$ be an excitatory neuron that forms a cholinergic synapse onto $X_2$, and let $X_2$ be an inhibitory neuron that extends a GABAergic synapse onto $X_3$. The network has the form: $X_1\rightarrow X_2\rightarrow X_3$. In defining the connectivity matrix for each synapse type, we set a convention where the presynaptic neurons are indexed by the column number, and the postsynaptic neurons by the row number. Let $X_1$, $X_2$, $X_3$ be indexed as 0, 1 and 2 respectively. The excitatory connectivity matrix takes the form
\begin{eqnarray}
Ach_{n\times n}=
\begin{bmatrix}
0&0&0\\
1&0&0\\
0&0&0\\
\end{bmatrix}
\end{eqnarray}
Similarly, the inhibitory connectivity matrix becomes
\begin{eqnarray}
GABA_{n\times n}=
\begin{bmatrix}
0&0&0\\
0&0&0\\
0&1&0\\
\end{bmatrix}
\end{eqnarray}
In the following code we specify the parameters of the synapses. The number of synapses of each type is determined by adding up all the elements of the connectivity matrix. Other parameters are specified as vectors with values for each of the synapses.
\begin{minted}[linenos]{python}
n_n = 3 # number of simultaneous neurons to simulate

# Acetylcholine
ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
ach_mat[1,0] = 1

## Parameters for Acetylcholine synapses ##
n_ach = int(np.sum(ach_mat)) # Number of Acetylcholine (Ach) Synapses
alp_ach = [10.0]*n_ach # Alpha for Ach Synapse
bet_ach = [0.2]*n_ach # Beta for Ach Synapse
t_max = 0.3 # Maximum Time for Synapse
t_delay = 0 # Axonal Transmission Delay
A = [0.5]*n_n # Synaptic Response Strength
g_ach = [0.35]*n_n # Ach Conductance
E_ach = [0.0]*n_n # Ach Potential

# GABAa
gaba_mat = np.zeros((n_n,n_n)) # GABAa Synapse Connectivity Matrix
gaba_mat[2,1] = 1

## Parameters for GABAa synapses ##
n_gaba = int(np.sum(gaba_mat)) # Number of GABAa Synapses
alp_gaba = [10.0]*n_gaba # Alpha for GABAa Synapse
bet_gaba = [0.16]*n_gaba # Beta for GABAa Synapse
V0 = [-20.0]*n_n # Decay Potential
sigma = [1.5]*n_n # Decay Time Constant
g_gaba = [0.8]*n_n # fGABA Conductance
E_gaba = [-70.0]*n_n # fGABA Potential

## Storing Firing Thresholds ##
F_b = [0.0]*n_n # Fire threshold

## Store our input to each neuron as a n x timesteps matrix ##
## called current_input, and extract the value at each timepoint ##
def I_inj_t(t):
    # Turn indices to integer and extract from matrix
    index = tf.cast(t/epsilon,tf.int32)
    return tf.constant(current_input.T,dtype=tf.float64)[index]
\end{minted}
For updating the dynamics of the synapses, we need only as many variables as the number of synapses $\times$ the number of equations required for each synapse. Here our synapse models require only one dynamical variable, the fraction of open channels $[O]$, that we store as a $k$-dimensional vector, where $k$ is the number of synapses. There are two instances where the $[O]$ vector is used. First, to solve equation~\ref{eq:synO} and second, to calculate the synaptic current given by,
\begin{eqnarray}I_{syn} = \sum_{presynaptic} g_{syn}[O](V-E_{syn})
\label{eq:IsynSum}
\end{eqnarray}

\subsection*{Defining the connectivity matrix}

The most efficient way to compute $I_{syn}$ is to use the connectivity matrix $\mathbf{C}$ to convert the open fraction vector $\vec{[O]}$ to an open fraction matrix $\mathbf{O}$. $\mathbf{C}$ is given as,
\begin{eqnarray}
\mathbf{C}=
\begin{bmatrix}
0&1&...&0\\
0&0&...&1\\
...&...&...&1\\
1&0&0&0
\end{bmatrix}
\end{eqnarray}
and $\vec{[O]}$ as
\begin{eqnarray}\vec{[O]}=[O_1,O_2...O_k]\end{eqnarray}
We convert this to,
\begin{eqnarray}
\mathbf{O}=
\begin{bmatrix}
0&O_1&...&0\\
0&0&...&O_a\\
...&...&...&O_b\\
O_k&0&0&0
\end{bmatrix}
\end{eqnarray}
Equation~\ref{eq:IsynSum} can now be written in the form,
\begin{eqnarray}\vec{[I_{syn}]}=\sum_{columns}\mathbf{O}\diamond(\vec{g}_{syn}\odot(\vec{V}-\vec{E}_{syn}))\end{eqnarray}
where $\diamond$ is columnwise multiplication and $\odot$ is elementwise multiplication. $\vec{[I_{syn}]}$ is now the total synaptic current input to each of the neurons.
\subsubsection*{Steps to calculate synaptic currents}
\begin{enumerate}
\item First we convert the $[O]$ vector to the $\mathbf{O}$ matrix. TensorFlow does not allow one to change a defined tensor directly. Therefore, we create an $n^{2}$-element TensorFlow variable \texttt{o\_} which we later reshape to an $n\times n$ matrix.
\item We then identify the non-zero indices of $\mathbf{C}$. For this we use the Boolean mask function to choose the correct $k$ indices from the range $0$ to $n^2-1$ and store them in the variable \texttt{ind}.
\item Using the \texttt{scatter\_update} function of TensorFlow, we fill the correct indices of the variable \texttt{o\_} that we created with the values of the open fractions from the $[O]$ vector.
\item We now reshape the vector as an $n\times n$ matrix. Python stores matrices as an array of arrays, with each row as an inner array. To perform columnwise multiplication, we first transpose the matrix, so that each column is an inner array, perform element-wise multiplication with each inner array, and transpose the matrix again.
\item Finally, using \texttt{reduce\_sum}, we sum over the columns to compute the $I_{syn}$ vector.
\end{enumerate}
This process of converting from a vector to a matrix and back to a vector makes the computation more efficient than a simple loop through all the indices.
\begin{minted}[linenos]{python}
## Acetylcholine Synaptic Current ##
def I_ach(o,V):
    o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
    ind = tf.boolean_mask(tf.range(n_n**2),ach_mat.reshape(-1) == 1)
    o_ = tf.scatter_update(o_,ind,o)
    o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
    return tf.reduce_sum(tf.transpose((o_*(V-E_ach))*g_ach),1)

## GABAa Synaptic Current ##
def I_gaba(o,V):
    o_ = tf.Variable([0.0]*n_n**2,dtype=tf.float64)
    ind = tf.boolean_mask(tf.range(n_n**2),gaba_mat.reshape(-1) == 1)
    o_ = tf.scatter_update(o_,ind,o)
    o_ = tf.transpose(tf.reshape(o_,(n_n,n_n)))
    return tf.reduce_sum(tf.transpose((o_*(V-E_gaba))*g_gaba),1)

## Other Currents remain the same ##
\end{minted}

\subsection*{Updating synaptic variables}

To update the synapses we first calculate the values of the presynaptic activation function $[T]$ for both types of synapses. This function determines whether a neuron fired or not and is calculated for each neuron. The values of $[T]$ are then sent to the correct synapses in the form of a $k\times 1$ vector. Recall:
\begin{eqnarray}[T]_{ach} = A\ \Theta(t_{max}+t_{fire}+t_{delay}-t)\ \Theta(t-t_{fire}-t_{delay})\label{eq:Tach}\end{eqnarray}
\begin{eqnarray}[T]_{gaba} = \frac{1}{1+e^{-\frac{V(t)-V_0}{\sigma}}}
\label{eq:Tgaba}
\end{eqnarray}
Once we calculate the values of the $[T]$ vector for both types of synapse, we need to redirect them to the correct synapses in a sparse $k\times1$ vector form.
\subsubsection*{Steps to calculate $[T]$}
\begin{enumerate}
\item To calculate $[T]_{ach}$, we use the Boolean \texttt{tf.logical\_and} function to check whether the current timepoint $t$ is greater than the last firing time (\texttt{fire\_t}) + delay (\texttt{t\_delay}) and less than the last firing time (\texttt{fire\_t}) + delay (\texttt{t\_delay}) + activation length (\texttt{t\_max}) for each neuron. The result of these Boolean operations is used to determine the product of Heaviside functions in equation~(\ref{eq:Tach}). For $[T]_{gaba}$, we simply use $\vec{V}$ to determine $[T]$.
\item To determine the $[T]$ vector, we follow a two step process. First we multiply each row of the connectivity matrix $\mathbf{C}$ with the respective $[T]$ vector to get an activation matrix $\mathbf{T}$. We then flatten $\mathbf{T}$ and $\mathbf{C}$ and, using \texttt{tf.boolean\_mask}, remove all the zeros from $\mathbf{T}$ to get a $k\times1$ vector which now stores the presynaptic activation for each of the synapses, where $k=n_{gaba}$ or $n_{ach}$.
\item Calculate the differential change in the open fractions $[O]$ using the $k\times1$ vector.
\end{enumerate}
\begin{minted}[linenos]{python}
def dXdt(X, t):
    V = X[:1*n_n]       # First n_n: Membrane Voltage
    m = X[1*n_n:2*n_n]  # Next n_n: Sodium Activation Gating
    h = X[2*n_n:3*n_n]  # Next n_n: Sodium Inactivation Gating
    n = X[3*n_n:4*n_n]  # Next n_n: Potassium Gating

    # Next n_ach and n_gaba: Ach and GABAa Open Fraction respectively
    o_ach = X[4*n_n : 4*n_n + n_ach]
    o_gaba = X[4*n_n + n_ach : 4*n_n + n_ach + n_gaba]

    fire_t = X[-n_n:]   # Last n_n: last fire times

    dVdt = (I_inj_t(t)-I_Na(V, m, h)-I_K(V, n)-
            I_L(V)-I_ach(o_ach,V)-I_gaba(o_gaba,V))/C_m

    ## Update for gating variables ##
    m0,tm,h0,th = Na_prop(V)
    n0,tn = K_prop(V)

    dmdt = - (1.0/tm)*(m-m0)
    dhdt = - (1.0/th)*(h-h0)
    dndt = - (1.0/tn)*(n-n0)

    ## Update for o_ach ##
    A_ = tf.constant(A,dtype=tf.float64)
    Z_ = tf.zeros(tf.shape(A_),dtype=tf.float64)

    T_ach = tf.where(tf.logical_and(tf.greater(t,fire_t+t_delay),
                     tf.less(t,fire_t+t_max+t_delay)),A_,Z_)
    T_ach = tf.multiply(tf.constant(ach_mat,dtype=tf.float64),T_ach)
    T_ach = tf.boolean_mask(tf.reshape(T_ach,(-1,)),
                            ach_mat.reshape(-1) == 1)

    do_achdt = alp_ach*(1.0-o_ach)*T_ach - bet_ach*o_ach

    ## Update for o_gaba ##
    T_gaba = 1.0/(1.0+tf.exp(-(V-V0)/sigma))
    T_gaba = tf.multiply(tf.constant(gaba_mat,dtype=tf.float64),T_gaba)
    T_gaba = tf.boolean_mask(tf.reshape(T_gaba,(-1,)),
                             gaba_mat.reshape(-1) == 1)

    do_gabadt = alp_gaba*(1.0-o_gaba)*T_gaba - bet_gaba*o_gaba

    ## Update for fire times ##
    dfdt = tf.zeros(tf.shape(fire_t),dtype=fire_t.dtype) # no change

    out = tf.concat([dVdt,dmdt,dhdt,dndt,do_achdt,do_gabadt,dfdt],0)
    return out
\end{minted}

\subsection*{Updating the gating variables and initial conditions}

As before, we again define functions that return the values of $\tau_m$, $\tau_h$, $\tau_n$, $m_0$, $h_0$, $n_0$, set parameters and initial conditions. Note: the last firing times are stored in the last $n$ elements of the state vector. If we initialize the last firing time as 0, then the second neuron $X_2$ will get an EPSP immediately after the start of the simulation. To avoid this, the last firing time should be initialized to a large negative number whose magnitude is greater than or equal to the length of the simulation.
\begin{minted}[linenos]{python}
# The initialization of the Parameters and the Gating Variable
# Update Functions remain the same.

# Initialize the State Vector
y0 = tf.constant([-71]*n_n+[0,0,0]*n_n+[0]*n_ach+[0]*n_gaba+
                 [-9999999]*n_n,dtype=tf.float64)
\end{minted}

\subsection*{Current input to the network}

Here we stimulate the neuron $X_{1}$ with 100ms long current injections that progressively increase in amplitude. We introduce a 100ms gap between successive current inputs.
\begin{minted}[linenos]{python}
current_input = np.zeros((n_n,t.shape[0]))
current_input[0,int(100/epsilon):int(200/epsilon)] = 2.5
current_input[0,int(300/epsilon):int(400/epsilon)] = 5.0
current_input[0,int(500/epsilon):int(600/epsilon)] = 7.5

state = odeint(dXdt,y0,t,n_n,F_b)

with tf.Session() as sess:
    # Since we are using variables we have to initialize them
    tf.global_variables_initializer().run()
    state = sess.run(state)
\end{minted}
The output of the network is shown in the figure below.
\begin{center}
\begin{figure}
\includegraphics[scale=0.4]{Figures/fig13.pdf}
\caption{Pulses of current trigger the firing of action potentials with increasing frequency. Neuron $X_1$ crosses its firing threshold first and triggers neuron $X_2$ to fire after a slight delay. Finally, $X_{2}$ hyperpolarizes neuron $X_3$.
}
\end{figure}
\end{center}

\section*{Day 5: Memory management}

Now that we can simulate a model network of conductance-based neurons, we discuss the limitations of our programs and our attempts to work around these issues.

\subsection*{Limits on memory}

Using Python and TensorFlow allowed us to write code that is readable, parallelizable and scalable across a variety of computational devices. However, our implementation is very memory intensive. The iterators in TensorFlow do not follow the normal process of memory allocation and garbage collection. Since TensorFlow is designed to work on diverse hardware like GPUs, TPUs and distributed platforms, memory allocation is done adaptively during the TensorFlow session and NOT cleared until the Python kernel has stopped execution. The memory used increases linearly with time as the state matrix is computed recursively by the \texttt{tf.scan} function. The maximum memory used by the computational graph is 2 times the total state matrix size at the point when the computation finishes and copies the final data into the memory. The larger the network and the longer the simulation, the larger the solution matrix. Each run is limited by the total available memory. For a system with a limited memory of $K$ bytes, the length of a given simulation ($L$ timesteps) of a given network ($N$ differential equations) with 64-bit (8 byte) floating-point precision must satisfy:
\begin{eqnarray}2\times8\times L\times N=K\end{eqnarray}
That is, for any given network, our maximum simulation length is limited. One way to improve our maximum length is to divide the simulation into smaller batches. There will be a small queuing time between batches, which will slow down our code but will allow longer simulation times. Thus, if we split the simulation into $B$ sequential batches, the maximum memory for the simulation becomes $(1+\frac{1}{B})$ times the total matrix size. The memory relation then becomes:
\begin{eqnarray}\Big(1+\frac{1}{B}\Big)\times8\times L\times N=K\end{eqnarray}
This way, we can maximize the length of the simulation that we can run in a single Python kernel.

\subsection*{Implementing the Model}

To improve the readability of our code we separate the integrator into an independent importable module. The integrator code was placed in a file called \texttt{tf\_integrator.py}. The file must be present in the same directory as the implementation of the model.

Note: If you are using a Jupyter Notebook, remember to remove the \%matplotlib inline command as it is specific to Jupyter.

\subsubsection*{Importing tf\_integrator and other requirements}

Once the integrator, \texttt{tf\_integrator.py}, is saved in the same directory as the Notebook, we can start importing the required libraries and the integrator.
\begin{minted}[linenos]{python}
import tensorflow as tf
import numpy as np
import tf_integrator as tf_int
import matplotlib.pyplot as plt
import seaborn as sns
\end{minted}
To simulate the code in batches we do not need to change how we construct our model, only how we execute it.
\subsubsection*{Splitting the time series into independent batches and running each batch sequentially}
Since we will be dividing the computation into batches, we have to split the time array such that, for each new call, the final state vector of the last batch will be the initial condition for the current batch. The function \texttt{np.array\_split()} splits the array into non-overlapping vectors. Therefore, we append the last time of the previous batch to the beginning of the current time array batch.
\begin{minted}[linenos]{python}
# Define the Number of Batches
n_batch = 2

# Split t array into batches using numpy
t_batch = np.array_split(t,n_batch)

# Iterate over the batches of time array
for n,i in enumerate(t_batch):

    # Inform start of Batch Computation
    print("Batch",(n+1),"Running...",end="")

    # Re-adjusting edges
    if n>0:
        i = np.append(i[0]-sim_res,i)

    # Set state_vector as the initial condition
    init_state = tf.constant(state_vector, dtype=tf.float64)

    # Create the Integrator computation graph
    tensor_state = tf_int.odeint(dXdt, init_state, i, n_n, F_b)

    # Initialize variables and run session
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        state = sess.run(tensor_state)
        sess.close()

    # Reset state_vector as the last element of output
    state_vector = state[-1,:]

    # Save the output of the simulation to a binary file
    np.save("part_"+str(n+1),state)

    # Clear output
    state = None

    print("Finished")
\end{minted}
\subsubsection*{Putting the output together}
The output from our batch implementation is a set of binary files that store parts of our total simulation. To get the overall output we have to stitch them back together.
\begin{minted}[linenos]{python}
overall_state = []

# Iterate over the generated output files
for n,i in enumerate(["part_"+str(n+1)+".npy" for n in range(n_batch)]):

    # Since the first element in the series was the last output
    # of the previous batch, we remove it
    if n>0:
        overall_state.append(np.load(i)[1:,:])
    else:
        overall_state.append(np.load(i))

# Concatenate all the matrices to get a single state matrix
overall_state = np.concatenate(overall_state)
\end{minted}
By this method, we have maximized the usage of our available memory. We can extend this further and develop a method to allow longer simulations. Since the memory is not cleared until the Python kernel finishes, we save the parameters of the model (such as the connectivity matrix) and the state vector in a file, and start a new Python kernel from a Python script to compute successive batches. This way, the memory gets cleared after each large batch. By combining the previous batch implementation and this system, we can maximize our ability to run memory-intensive simulations.

\subsection*{Implementing a Runner and a Caller}

First, we have to create an implementation of the model that takes in previous inputs as current parameters. Thus, we create a file, which we call \texttt{run.py}, that takes an argument \textemdash\ the current batch number. The implementation of \texttt{run.py} is nearly the same as the previous code but with one minor difference. When the batch number is 0, we initialize all variable parameters and save them; otherwise we use the saved values. The parameters we save include the acetylcholine matrix, the GABA$_{A}$ matrix and the final/initial state vector. \texttt{run.py} also saves the output files with both the batch number and the sub-batch number listed.
\subsubsection*{Implementing the Caller code}
The caller script, \texttt{call.py}, creates the time array, splits it into batches and uses the Python subprocess module to call \texttt{run.py} with the appropriate arguments. The code for \texttt{call.py} is given below.
\begin{minted}[linenos]{python}
from subprocess import call
import numpy as np

total_time = 1000
n_splits = 2
time = np.split(np.arange(0,total_time,0.01),n_splits)

# Append the last time point to the beginning of the next batch
for n,i in enumerate(time):
    if n>0:
        time[n] = np.append(i[0]-0.01,i)

np.save("time",time)

# call successive batches with a new python subprocess
# and pass the batch number
for i in range(n_splits):
    call(['python','run.py',str(i)])

print("Simulation Completed.")
\end{minted}
\subsubsection*{Implementing the Runner code}
\texttt{run.py} is essentially identical to the batch implementation we developed earlier with the changes described below.
\begin{minted}[linenos]{python}
# Additional Imports #
import sys

# Duration of Simulation #
# Replace t = np.arange(0,sim_time,sim_res) by
t = np.load("time.npy")[int(sys.argv[1])] # get first argument to run.py

# Connectivity Matrix Definitions #
if sys.argv[1] == '0':
    ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
    ach_mat[1,0] = 1
    # If connectivity is random, once initialized it will be the same.
    np.save("ach_mat",ach_mat)
else:
    ach_mat = np.load("ach_mat.npy")

if sys.argv[1] == '0':
    gaba_mat = np.zeros((n_n,n_n)) # GABAa Synapse Connectivity Matrix
    gaba_mat[2,1] = 1
    # If connectivity is random, once initialized it will be the same.
    np.save("gaba_mat",gaba_mat)
else:
    gaba_mat = np.load("gaba_mat.npy")

# Current Input Definition #
if sys.argv[1] == '0':
    current_input = np.zeros((n_n,int(sim_time/sim_res)))
    current_input[0,int(100/sim_res):int(200/sim_res)] = 2.5
    current_input[0,int(300/sim_res):int(400/sim_res)] = 5.0
    current_input[0,int(500/sim_res):int(600/sim_res)] = 7.5
    np.save("current_input",current_input)
else:
    current_input = np.load("current_input.npy")

# State Vector Definition #
if sys.argv[1] == '0':
    state_vector = [-71]*n_n+[0,0,0]*n_n+[0]*n_ach+[0]*n_gaba+[-9999999]*n_n
    state_vector = np.array(state_vector)
    state_vector = state_vector + 0.01*state_vector*np.random.normal(size=state_vector.shape)
    np.save("state_vector",state_vector)
else:
    state_vector = np.load("state_vector.npy")

# Saving of Output #
# Replace np.save("part_"+str(n+1),state) by
np.save("batch"+str(int(sys.argv[1])+1)+"_part_"+str(n+1),state)
\end{minted}
\subsubsection*{Combining all Data}
Just as we merged all the batches, we merge all the sub-batches and batches.
\begin{minted}[linenos]{python}
overall_state = []

# Iterate over the generated output files
for n,i in enumerate(["batch"+str(x+1) for x in range(n_splits)]):
    for m,j in enumerate(["_part_"+str(x+1) for x in range(n_batch)]):

        # Every file except the very first one repeats the last state
        # of the previous file as its first row, so we drop that row.
        if n>0 or m>0:
            overall_state.append(np.load(i+j+".npy")[1:,:])
        else:
            overall_state.append(np.load(i+j+".npy"))

# Concatenate all the matrices to get a single state matrix
overall_state = np.concatenate(overall_state)
\end{minted}

\section*{Conclusion}

TensorFlow is a rapidly evolving ecosystem with a large community of developers. We anticipate that the code described here will serve as a starting point to simulate ODEs and could potentially include more sophisticated and faster integration algorithms and methods to manage the limitations imposed by memory.

\section*{Supporting information}

The code in this tutorial is implemented as a series of Python notebooks that can be downloaded from the following GitHub repository: \texttt{https://github.com/technosap/PSST}.

\section*{Acknowledgments}

SSM received a KVPY fellowship and support from IISER Pune.
CA was funded by DBT--Wellcome India Alliance through an Intermediate fellowship IA/I/11/2500290 and IISER Pune. We thank members of the Assisi and Nadkarni labs at IISER Pune and several students who tested the code. We thank Prof. Maxim Bazhenov for discussions about code related to the insect antennal lobe model.

\nolinenumbers

\bibliography{nerveFlow}

\end{nolinenumbers}
\end{document}
\chapter{Download bit and hex files to the ARTY board}
\label{atomics}

The release comes with a precompiled ``bit'' file (\$project/bit/fpga\_top.bit), which can be used for downloading via the standalone Lab tool version from Xilinx, or by using the Hardware Manager in the Vivado GUI.

In order to understand the proposed ``hex'' file download process, it is best to look at the two examples in \$project/ftdi/arty\_ftdi/work/. When using Cygwin, ``make colors'' and ``make downloadHex'' can be used to download the colors example ``hex'' files. The results will be reported once the download has finished. Alternatively, the colors.bat command can be executed in a Windows command prompt. Here the download progress will be reported directly.

In this initial version of the Arduissimo project, programming via the UART basically means:

1) to loop back a byte in order to test the USB - FTDI - FPGA (UART) link,\\
2) to control the reset status of the system,\\
3) to write to the individual program memories,\\
4) to write to and to read from the data memories of the individual cores and\\
5) to communicate with core 0.

As of now, no debugging capabilities are implemented (other than the data memory read feature). In order to access the system via the UART, a byte or a sequence of bytes has to be written via the FTDI chip first. The first byte defines the access type:

\begin{table}[h]
{
\begin{small}
\begin{center}
\begin{tabular}{c c}
\hline
\multicolumn{1}{|c|}{Byte 0} & \multicolumn{1}{|c|}{Access type} \\ \hline
\multicolumn{1}{|c|}{0x1X} & \multicolumn{1}{|c|}{set/clear reset} \\ \hline
\multicolumn{1}{|c|}{0x20} & \multicolumn{1}{|c|}{loopback} \\ \hline
\multicolumn{1}{|c|}{0x30} & \multicolumn{1}{|c|}{memory write follows} \\ \hline
\multicolumn{1}{|c|}{0x40} & \multicolumn{1}{|c|}{memory read follows} \\ \hline
\multicolumn{1}{|c|}{0x50} & \multicolumn{1}{|c|}{user communication in write direction follows} \\ \hline
\end{tabular}
\end{center}
\end{small}
}
\caption{Access type resulting from byte 0.}
\label{uart_byte_0}
\end{table}

A reset command sets or clears the internal system reset. The 4 LSBs of that byte define the reset flag state of the individual cores. In loopback mode, the next byte is sent back via uart\_tx\_out.
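As a rough illustration of this byte framing, the sketch below asserts and releases the core resets and runs a loopback test from a host script. It is only a hedged example, not part of the release: it assumes the FTDI channel is visible to the host as a virtual COM port and uses the Python pyserial package, whereas the released downloadHex.c tool accesses the FTDI device directly. The port name and baud rate are placeholders and must be adapted to the actual setup.

\begin{verbatim}
# Hedged sketch of the access-type framing (not part of the release).
# Assumes the FTDI UART channel shows up as a virtual COM port; the
# port name and baud rate below are placeholders.
import serial  # pyserial

with serial.Serial("COM3", 115200, timeout=1) as ser:
    ser.write(bytes([0x1F]))        # 0x1X: set reset, 4 LSBs = all four cores
    ser.write(bytes([0x20, 0xA5]))  # 0x20: loopback, next byte is echoed back
    echo = ser.read(1)
    print("loopback ok" if echo == bytes([0xA5]) else "loopback failed")
    ser.write(bytes([0x10]))        # 0x1X with LSBs 0: clear all core resets
\end{verbatim}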
When the memory write or memory read option is chosen, the following bytes define the data stream:

\begin{table}[h]
{
\begin{small}
\begin{center}
\begin{tabular}{c c}
\hline
\multicolumn{1}{|c|}{Byte} & \multicolumn{1}{|c|}{Meaning} \\
\hline
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{bits [17:16] of the memory start address} \\
\hline
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{bits [15:8] of the memory start address} \\
\hline
\multicolumn{1}{|c|}{3} & \multicolumn{1}{|c|}{bits [7:0] of the memory start address} \\
\hline
\multicolumn{1}{|c|}{4} & \multicolumn{1}{|c|}{high byte of access length} \\
\hline
\multicolumn{1}{|c|}{5} & \multicolumn{1}{|c|}{low byte of access length} \\
\hline
\end{tabular}
\end{center}
\end{small}
}
\caption{Meaning of bytes 1 through 5.}
\label{uart_byte_1_5}
\end{table}

The program and data memory offsets are as follows (from the UART perspective):

\begin{table}[h]
{
\begin{small}
\begin{center}
\begin{tabular}{c c}
\hline
\multicolumn{1}{|c|}{Core/Memory} & \multicolumn{1}{|c|}{Offset} \\
\hline
\multicolumn{1}{|c|}{core 0} & \multicolumn{1}{|c|}{0x00000000} \\
\hline
\multicolumn{1}{|c|}{core 1} & \multicolumn{1}{|c|}{0x00020000} \\
\hline
\multicolumn{1}{|c|}{core 2} & \multicolumn{1}{|c|}{0x00040000} \\
\hline
\multicolumn{1}{|c|}{core 3} & \multicolumn{1}{|c|}{0x00060000} \\
\hline
\multicolumn{1}{|c|}{program} & \multicolumn{1}{|c|}{0x00000000} \\
\hline
\multicolumn{1}{|c|}{data} & \multicolumn{1}{|c|}{0x00010000} \\
\hline
\end{tabular}
\end{center}
\end{small}
}
\caption{Program and data memory offsets (from the UART perspective).}
\label{uart_mem_offsets}
\end{table}

When the memory write option is chosen, the user must write the memory content according to the programmed access length. The received data is automatically echoed back; the user can use this feature to verify the transferred data.
%[6...n] program or data memory content

In case of the memory readback option, the FPGA will send the requested memory content according to the programmed access length directly after the configuration stream has finished.

For setting or clearing reset flags and for memory write or read access, the C program \$project/ftdi/arty\_ftdi/source/downloadHex.c provides the relevant routines to be used for interfacing. The arguments are:

\indent -srb: Set reset at beginning.\\
\indent -sre: Set reset at end.\\
\indent -lb: Loopback test.\\
\indent -dc: Download to core [0..3] the following ``hex'' file.

When no argument is given, the following arguments will be used as default:

downloadHex -srb f -sre 0 -lb -dc 0 main\_0.hex -dc 1 main\_1.hex -dc 2 main\_2.hex -dc 3 main\_3.hex

When the user communication in write direction (PC -\textgreater\space FTDI -\textgreater\space FPGA) option is chosen, the following bytes define the data stream. The user must write the user communication content according to the programmed access length.

\begin{table}[h]
{
\begin{small}
\begin{center}
\begin{tabular}{c c}
\hline
\multicolumn{1}{|c|}{Byte} & \multicolumn{1}{|c|}{Meaning} \\
\hline
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{high byte of transfer length} \\
\hline
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{low byte of transfer length} \\
\hline
\multicolumn{1}{|c|}{[3...n]} & \multicolumn{1}{|c|}{write stream content} \\
\hline
\end{tabular}
\end{center}
\end{small}
}
\caption{Meaning of bytes 1 through n.}
\label{uart_usercomm_bytes}
\end{table}

The user communication in read direction (FPGA -\textgreater\space FTDI -\textgreater\space PC) is initiated and defined by the program executed on core 0.
The USB driver running on the PC must be ready and capable of handling the upcoming data stream. As of now, no programming example is provided.
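Although no programming example ships with this initial version, the following hypothetical C++ sketch illustrates how the header of a memory write could be assembled from the two tables above (byte 0 = 0x30, bytes 1--3 = the 18-bit start address, bytes 4--5 = the access length). The helper name is invented, and the assumption that the access length is counted in bytes should be verified against downloadHex.c.

\begin{verbatim}
#include <cstdint>
#include <vector>

// Offsets from the table above (UART perspective).
const uint32_t CORE_OFFSET[4] = {0x00000000, 0x00020000,
                                 0x00040000, 0x00060000};
const uint32_t PROG_OFFSET = 0x00000000;
const uint32_t DATA_OFFSET = 0x00010000;

// Build the 6-byte header of a memory write (access type 0x30).
// The memory content then follows this header.
std::vector<uint8_t> memWriteHeader(int core, bool data,
                                    uint32_t addr, uint16_t len) {
    uint32_t start = CORE_OFFSET[core]
                   + (data ? DATA_OFFSET : PROG_OFFSET) + addr;
    return {
        0x30,                              // byte 0: memory write follows
        (uint8_t)((start >> 16) & 0x03),   // byte 1: bits [17:16]
        (uint8_t)((start >> 8) & 0xFF),    // byte 2: bits [15:8]
        (uint8_t)(start & 0xFF),           // byte 3: bits [7:0]
        (uint8_t)(len >> 8),               // byte 4: high byte of length
        (uint8_t)(len & 0xFF)              // byte 5: low byte of length
    };
}
\end{verbatim}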
{ "alphanum_fraction": 0.6698847711, "avg_line_length": 36.2824858757, "ext": "tex", "hexsha": "63c41d03e575d5a832b7c3176cf19520b7dbe707", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-02-04T01:31:50.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-04T01:31:50.000Z", "max_forks_repo_head_hexsha": "25ad4d1830a65d7612198a16510764f41ade13c4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "cloudxcc/Arduissimo", "max_forks_repo_path": "v0.1/doc/datasheet/download.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "25ad4d1830a65d7612198a16510764f41ade13c4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "cloudxcc/Arduissimo", "max_issues_repo_path": "v0.1/doc/datasheet/download.tex", "max_line_length": 459, "max_stars_count": 13, "max_stars_repo_head_hexsha": "25ad4d1830a65d7612198a16510764f41ade13c4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "cloudxcc/Arduissimo", "max_stars_repo_path": "v0.1/doc/datasheet/download.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-10T22:35:12.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-20T17:55:17.000Z", "num_tokens": 2132, "size": 6422 }
% +--- CHAPTER --- \begin{comment} ./texfix.py --fpaths chapter6-conclusion.tex --outline --asmarkdown --numlines=99 -w fixtex --fpaths chapter6-conclusion.tex --outline --asmarkdown --numlines=999 --shortcite \end{comment} \chapter{CONCLUSION}\label{chap:conclusion} In this \thesis{} we have addressed the problem of identifying individual animals from images. We have demonstrated that our approach is effective for identifying plains zebras, Grévy's zebras, Masai giraffes, and humpback whales. Our approach consists of three main components: (1) the ranking algorithm from \cref{chap:ranking} that uses a bounding box annotation around an animal to search a labeled database of annotations for likely matches, (2) the classification algorithm from \cref{chap:pairclf} that probabilistically verifies if a pair of annotations is positive, negative, or incomparable, and (3) the graph framework from \cref{chap:graphid} that harnesses the previous algorithms in a principled way to dynamically determine the identity of all animals in a dataset. Each of these algorithms was designed to build on the previous one(s), improving the overall accuracy and efficiency of the counting process. In \cref{sec:graphexpt} we demonstrated that this was indeed the case. By combining these algorithms we have made several meaningful contributions to the problem of animal identification. In \cref{sec:introgzc} we discussed the Great Zebra Count (\GZC{}), where the ranking algorithm was used in combination with the effort of citizen scientists to provide an estimate of the number of plains zebras and Masai giraffes in Nairobi National Park. In \cref{sec:rankexpt} we investigated several parameters and factors that can impact the performance of the ranking algorithm. We discovered that having multiple photos of each individual significantly improves the accuracy of the ranking algorithm and we designed a novel name scoring mechanism with this in mind. In \cref{sec:pairexpt} we demonstrated that a classification algorithm can be used to improve the separation of positive results from negative and incomparable results in a ranked list. In \cref{sec:graphexpt} we simulated the \GZC{} and demonstrated that our improvements to the ranking algorithm --- made by the classification and graph algorithm --- enable us to perform identification using less than $25\percent$ of the number of manual reviews required by the original event. \section{DISCUSSION}\label{sec:discuss} The research that resulted in this \thesis{} began in $2010$ and was completed in $2017$. During that time, many significant developments were made in the fields of computer vision and machine learning, most notably the explosion of deep learning~\cite{lecun_deep_2015}. While some steps in our approach (\eg{} the foregroundness weights) do make use of deep convolutional neural networks (DCNNs), most do not. In some sense this is an advantage because the algorithms can be applied to different species without any need for pre-training, but this also means they do not obtain the level of accuracy shown to be achievable by these networks. Yet, the contributions of this \thesis{} are still relevant and complementary to DCNNs. This is trivially true in the case of the ranking and classification algorithms, in part due to the aforementioned reasons. However, the contribution of the graph algorithm is relevant, even in the era of deep learning. 
%\section{Discussion of the graph algorithm}
The graph identification algorithm models the abstract constraints of the identification problem and provides a framework that can efficiently harness the power of any ranking or verification algorithm, whether it be deep or shallow. The framework dynamically manages the relationships between annotations. In most cases this means deciding if two annotations are the same (positive) or different (negative), but it also means handling cases in which the annotations are incomparable or in which there is some other interesting connection between two annotations, such as scenery matches and photobombs. As new relationships are added, errors are discovered and corrected, and the identifications are updated.

The framework also provides a means of prioritizing which edges need to be reviewed based on (1) the underlying computer vision algorithms, (2) the edge-augmentation needed to ensure minimum redundancy, and (3) the minimum cut needed to correct an error and split an inconsistent individual. Edge prioritization works in conjunction with a convergence criterion that determines when identification has been completed. A signal is emitted whenever manual interaction is needed, and the algorithm continues after the user returns with a response. The only time a user interacts with the algorithm after it begins is to label an edge as positive, negative, or incomparable. All other decisions are made internally. The algorithm stops once there is a high probability that the vast majority of identifications have been made correctly and consistently. This means that the graph algorithm requires little expertise to use and can be thought of as an ``identification wizard'' that simply guides the user through a set of simple questions. This design allows the graph algorithm to be run on a web server, where requests for manual interactions can be sent to remote users and quickly completed in a web browser.

%\section{Discussion of the ranking and verification algorithm}
%pass

\section{CONTRIBUTIONS}\label{sec:contributions}

A summary of the contributions made in this \thesis{} is as follows:

\begin{enumln}
\item {The ranking algorithm}:
\begin{enumln}
\item We have adapted LNBNN~\cite{mccann_local_2012} to the problem of individual animal identification. We have performed experiments that demonstrate the effect of several parameters at multiple database sizes. We have shown that tripling the number of annotations in a database can reduce the ranking accuracy at rank $1$ by $2\percent$.
\item We have evaluated the effect of various levels of feature invariance in our experiments. We have introduced a heuristic that augments the orientation of query keypoints to account for pose variations. For plains zebras, this can improve the ranking accuracy at rank $1$ by $7\percent$.
\item We have accounted for the influence of background features using a learned foregroundness measure to weight the LNBNN scores of feature correspondences. We have empirically shown that this procedure can increase the ranking accuracy at rank $1$ by $5\percent$.
\item We have demonstrated the impact of image redundancy and the importance of collecting more than one annotation in each encounter. Our experiments show that multiple exemplars per name can significantly increase the ranking accuracy at rank $1$ by $20\percent$.
\item We have developed a \name{} scoring mechanism to take advantage of information in database names with multiple exemplars.
We have shown that this can increase the ranking accuracy at rank $1$ by $1\percent$ when there are multiple exemplars per name.
\end{enumln}
\item {The pairwise classification algorithm}:
\begin{enumln}
\item We have developed a novel feature vector that represents local and global matching information between two annotations. Our experiments have shown that both the local and global feature dimensions are important for predicting if two annotations match.
\item We have used this feature vector to learn a random forest that can predict the probability that two annotations are either positive, negative, or incomparable. We have shown that this learned pairwise classifier is a strong predictor of match-state by measuring an MCC of $0.83$ for plains zebras and $0.91$ for Grévy's zebras.
\item We have compared the learned probabilities to LNBNN scores and demonstrated that re-ranking using the positive probabilities can improve the ranking accuracy at rank $1$ by $9\percent$ for plains zebras and $2\percent$ for Grévy's zebras. Additionally, the probabilities significantly improve the separation of positive and non-positive annotation pairs. For both species, an ROC AUC of less than $0.9$ is improved to an AUC greater than $0.97$.
\end{enumln}
\item {The graph identification algorithm}:
\begin{enumln}
\item We have demonstrated that combining the graph algorithm with existing ranking and verification algorithms improves the accuracy and efficiency of semi-automatic animal identification. We have designed the framework to be agnostic to the specific ranking and verification algorithms so future DCNN-based algorithms can be swapped in.
\item We have proposed a measure of redundancy based on edge-connectivity that is used to increase accuracy and reduce the number of reviews needed.
\item We have developed an algorithm for fixing errors whenever inconsistencies in the graph are discovered.
\item We have developed a probabilistic termination criterion that determines when to stop identification.
\end{enumln}
\end{enumln}

\section{FUTURE WORK}\label{sec:futurework}

We have shown that our ranking and match-state classification algorithms are both accurate and work well for identifying animals. However, the clearest direction for future research is to replace these algorithms with ones based on DCNNs. To replace the ranking algorithm, we believe that the approach in~\cite{arandjelovic_netvlad_2016} is a good starting point. We had briefly investigated replacing the pairwise classifier using the techniques in~\cite{taigman_deepface_2014}, but the results were poor because we did not have as much training data or an alignment procedure. Research into the geometric matching technique described in~\cite{rocco_convolutional_2017} may help address both of these issues.

There are also improvements that can be made to the graph algorithm. First, it would be useful to parallelize the algorithm so reviews could be distributed across multiple users. This can be obtained by popping multiple edges from the queue at a time, but this could add extraneous redundancy if one edge in the popped set would have been filtered by another. Second, the current prioritization of edges is based completely on the output of the pairwise classifier. In the best case, the ordering would first construct each PCC as a chain, and then only $1$ redundant review would be needed. In the worst case, this order would connect one annotation of an individual to all others, causing a star-shaped PCC.
Then to make the PCC $2$-positive-redundant, it would take $n - 2$ reviews, where $n$ is the number of annotations in the PCC. Determining the best order in which to review edges depending on the specified level of redundancy is an interesting question, which is perhaps made more challenging if considered in a distributed setting. % L___ CHAPTER ___
{ "alphanum_fraction": 0.7436223409, "avg_line_length": 61.7748691099, "ext": "tex", "hexsha": "3e90edfc331fd622db65e4365f6e155d81b13bd0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Erotemic/crall-thesis-2017", "max_forks_repo_path": "chapter6-conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Erotemic/crall-thesis-2017", "max_issues_repo_path": "chapter6-conclusion.tex", "max_line_length": 115, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Erotemic/crall-thesis-2017", "max_stars_repo_path": "chapter6-conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-01T19:41:38.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-01T19:41:38.000Z", "num_tokens": 2436, "size": 11799 }
\documentclass[../main.tex]{subfiles} \begin{document} \section{move} \end{document}
{ "alphanum_fraction": 0.7325581395, "avg_line_length": 14.3333333333, "ext": "tex", "hexsha": "85c27117733ce0497810c687860f3114f433a501", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_path": "Easy-Book/chapters/reading_of_this_book.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_path": "Easy-Book/chapters/reading_of_this_book.tex", "max_line_length": 37, "max_stars_count": null, "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_path": "Easy-Book/chapters/reading_of_this_book.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 26, "size": 86 }
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsmath}

\begin{document}

\title{Notes on the Trotter Breakup for Jaynes-Cummings Hamiltonians}
\author{Matthew Otten}
\maketitle

\begin{abstract}
Basic notes on utilizing the Trotter breakup for the Jaynes-Cummings Hamiltonian.
\end{abstract}

\section{Hamiltonian}
The Jaynes-Cummings model describes the dynamics of a two level system coupled to an oscillator mode (be it a mechanical resonator, a plasmonic system, or an electromagnetic field):
\begin{equation}\label{jc_ham}
H = \omega_a a^\dagger a + \omega_\sigma \sigma^\dagger \sigma + g (\sigma + \sigma^\dagger ) ( a + a^\dagger),
\end{equation}
where $a^\dagger$ is the creation operator of the oscillator, $\omega_a$ is the frequency of the oscillator, $\sigma^\dagger$ is the creation operator of the two level system, $\omega_\sigma$ is the transition frequency of the two level system, and $g$ is the coupling strength between the two systems. We are using units such that $\hbar = 1$. This Hamiltonian can be extended to many two level systems, which is sometimes called the Tavis-Cummings Hamiltonian:
\begin{equation}\label{tc_ham}
H = \omega_a a^\dagger a + \sum_i^{N_{tls}}\omega_i \sigma_i^\dagger \sigma_i + g_i ( \sigma_i + \sigma_i^\dagger) ( a + a^\dagger),
\end{equation}
which is just equation~\ref{jc_ham}, but with multiple two level systems, $\sigma_i$. Using either of these Hamiltonians, we can solve for the time dynamics of the system using the time-dependent Schr\"odinger equation,
\begin{equation}\label{schrod}
\dot{\psi} = -i H \psi,
\end{equation}
which has the solution
\begin{equation}\label{solution}
\psi (t) = \exp(-i H t) \psi (0).
\end{equation}
If the exponential of $H$ can be calculated efficiently, then the problem is solved.

\section{Trotter Breakup}
Taking the exponential of a matrix is, generally, difficult. One avenue is using the symmetric Trotter breakup to split $H$ into two parts: the diagonal terms ($A = \omega_a a^\dagger a + \sum_i^{N_{tls}}\omega_i \sigma_i^\dagger \sigma_i$) and the off-diagonal coupling terms ($B = \sum_i^{N_{tls}} g_i ( \sigma_i + \sigma_i^\dagger) ( a + a^\dagger)$). The symmetric Trotter breakup states that
\begin{equation} \label{trotter}
\exp(-i(A+B)\Delta t) \approx \exp(-i A \Delta t/2) \exp(-i B \Delta t) \exp(-i A \Delta t/2) + \mathcal{O} (\Delta t ^3).
\end{equation}
The exponential of a diagonal matrix, such as $A$, is trivially the exponential of the diagonal elements. If the exponential of $B$ can be found with similar ease, the explicit propagator can be constructed and time stepping becomes a matrix-vector product with $2^{nd}$ order accuracy. One way of finding the exponential of a matrix is by diagonalizing that matrix.

\section{Eigenvectors of Kronecker Products}
Luckily, $B$ is a highly structured matrix. In the one two level system case, explicitly including the tensor products,
\begin{equation}\label{explicit_b}
B = (a + a^\dagger) \otimes (\sigma + \sigma^\dagger ).
\end{equation}
In the multiple two level system case, there are additional Kronecker products. The eigenvectors of the Kronecker product of two matrices turn out to be the Kronecker product of the eigenvectors of the two matrices, and the eigenvalues of the Kronecker product are the product of the eigenvalues of the two matrices. This can be simply shown,
\begin{equation}\label{kron_ev}
(C\otimes D) (v_c \otimes v_d) = (C v_c ) \otimes (D v_d) = (\lambda_c v_c) \otimes (\lambda_d v_d) = \lambda_c \lambda_d (v_c \otimes v_d).
\end{equation} Since our $B$ deals with the tensor product of many small matrices, equation~\ref{kron_ev} shows that we can build the eigensystems of huge matrices ($2^{30}$) using the eigensystems of the small matrices ($<\mathcal{O} (10)$). One issue is that if $C = D$, the eigenvectors obtained via equation~\ref{kron_ev} are actually a linear combination of the basic eigenvectors (at least in the case where $C = \sigma + \sigma^\dagger$). I am not sure exactly how to deal with this in the general case, but I found an empirical way of getting the eigenvectors of $C \otimes C \otimes C \otimes ... \otimes C$ when $C = \sigma + \sigma^\dagger$, which is good enough for now. Then, by construction, I can get the eigenvectors and eigenvalues of $B$ (without any need to do explicit diagonalization). Using this, and the Trotter breakup of equation~\ref{trotter}, we can create the explicit propagator: \begin{equation}\label{propagator} P = \exp(-i A \Delta t/2) U \exp(-i V \Delta t) U^\dagger \exp(-i A \Delta t/2), \end{equation} where $U$ is the matrix of eigenvectors of $B$ and $V$ is the diagonal matrix of eigenvalues of $B$. With $P$, we can do the time propagation efficiently - each timestep is a single matrix - vector product, and the method is stable with any timestep, with second order accuracy. \end{document}
{ "alphanum_fraction": 0.7434384537, "avg_line_length": 49.15, "ext": "tex", "hexsha": "c886456e49738fe9d6e942600aca3f24693bf110", "lang": "TeX", "max_forks_count": 13, "max_forks_repo_forks_event_max_datetime": "2022-02-24T20:07:22.000Z", "max_forks_repo_forks_event_min_datetime": "2017-03-13T15:03:11.000Z", "max_forks_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sgulania/QuaC", "max_forks_repo_path": "doc/trotter_propagation.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_issues_repo_issues_event_max_datetime": "2020-09-03T14:21:56.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-17T15:16:22.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sgulania/QuaC", "max_issues_repo_path": "doc/trotter_propagation.tex", "max_line_length": 142, "max_stars_count": 23, "max_stars_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sgulania/QuaC", "max_stars_repo_path": "doc/trotter_propagation.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-28T10:27:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-18T02:11:04.000Z", "num_tokens": 1433, "size": 4915 }
\section{\module{sndhdr} --- Determine type of sound file.}
\declaremodule{standard}{sndhdr}
\modulesynopsis{Determine type of a sound file.}
\sectionauthor{Fred L. Drake, Jr.}{[email protected]}

% Based on comments in the module source file.

The \module{sndhdr} module provides utility functions which attempt to determine the type of sound data which is in a file. When these functions are able to determine what type of sound data is stored in a file, they return a tuple \code{(\var{type}, \var{sampling_rate}, \var{channels}, \var{frames}, \var{bits_per_sample})}. The value for \var{type} indicates the data type and will be one of the strings \code{'aifc'}, \code{'aiff'}, \code{'au'}, \code{'hcom'}, \code{'sndr'}, \code{'sndt'}, \code{'voc'}, \code{'wav'}, \code{'8svx'}, \code{'sb'}, \code{'ub'}, or \code{'ul'}. The \var{sampling_rate} will be either the actual value or \code{0} if unknown or difficult to decode. Similarly, \var{channels} will be either the number of channels or \code{0} if it cannot be determined or if the value is difficult to decode. The value for \var{frames} will be either the number of frames or \code{-1}. The last item in the tuple, \var{bits_per_sample}, will either be the sample size in bits or \code{'A'} for A-LAW\index{A-LAW} or \code{'U'} for u-LAW\index{u-LAW}.

\begin{funcdesc}{what}{filename}
Determines the type of sound data stored in the file \var{filename} using \function{whathdr()}. If it succeeds, a tuple as described above is returned; otherwise \code{None} is returned.
\end{funcdesc}

\begin{funcdesc}{whathdr}{filename}
Determines the type of sound data stored in a file based on the file header. The name of the file is given by \var{filename}. This function returns a tuple as described above on success, or \code{None}.
\end{funcdesc}
{ "alphanum_fraction": 0.7196056955, "avg_line_length": 43.4761904762, "ext": "tex", "hexsha": "43f04ec7cb58bbbaa95dcdafd57b30fb4b39bd6e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_forks_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_forks_repo_name": "1byte2bytes/cpython", "max_forks_repo_path": "Doc/lib/libsndhdr.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_issues_repo_name": "1byte2bytes/cpython", "max_issues_repo_path": "Doc/lib/libsndhdr.tex", "max_line_length": 71, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_stars_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_stars_repo_name": "1byte2bytes/cpython", "max_stars_repo_path": "Doc/lib/libsndhdr.tex", "max_stars_repo_stars_event_max_datetime": "2019-10-25T21:41:07.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-25T21:41:07.000Z", "num_tokens": 533, "size": 1826 }
\section{Valency alternations} \label{valclass} The frames introduced in \sectref{frames} show various alternations. Different types of alternations have to be distinguished: some just change the argument realization, e.g. differential case marking, which is triggered by pragmatic factors such as scenario classes. Other alternations, e.g. the inchoative-causative lability, change the argument structure. The labile verbs will be discussed in \sectref{labile}, \sectref{three-arg} deals with the alternations among the three-argument frames.\footnote{For alternations found among the experiencer-as-possessor predicates see \sectref{nv-comp-poss}.} \subsection{Lability}\label{labile} Labile verbs are characterized by variable transitivity of the same verbal stem, which is not brought about by means of a morphological derivation. \citet[224]{Letuchiy2009Labile} classifies labile verbs into different types: the inchoative/causative alternation, the reflexive alternation, the reciprocal alternation, the passive (extremely rare) and the converse type. According to this classification, Yakkha has the inchoative/causative\footnote{The inchoative/causative type is equated with \rede{labile} in \citet{Haspelmath1993More}, whose definition of labile verbs is more restrictive.} and the reflexive.\footnote{Furthermore, Yakkha shows morphologically unmarked detransitivizations that can have both passive and antipassive interpretations, but they do not change the semantic roles of the arguments and hence they are not lexical alternations. They are treated below in \sectref{trans-op} on transitivity operations. Letuchiy acknowledges the passive-type as labile, but considers unmarked antipassives \emph{quasi-lability}, because his crucial defining feature for lability is a change of the semantic roles. But if semantic role change is required, his inclusion of the passive alternation is misleading. In passives, the semantic roles do not change; the undergoer of \rede{beat} does not have different semantic roles in the active vs. the passive voice.} The current lexical database contains 77 labile verbs. The inchoative/causative alternation is patient-preserving; the reflexive alternation is agent-preserving (see also \citealt[223]{Letuchiy2009Labile}). As lability is defined by the absence of morphological marking, it is hard to tell which form of a labile pair is the basic form. The intransitive verb can be considered the basic form semantically and formally, as less participants are involved in the event, and as the verb hosts less inflectional morphology than the transitive verb.\footnote{From a first impression, there are definitely also differences in frequency among the labile verbs. Some are rather used transitively and some intransitively, depending on which function of a verb is more plausible in natural discourse. The existing corpus is not big enough for significant statistic analyses.} \paragraph*{Inchoative/causative lability}\quad\newline\vskip-1ex \noindent By far the majority of the labile verbs belong to the inchoative-causative class, a fact that goes along with the crosslinguistic findings in \citet{Letuchiy2009Labile}. The intransitive verbs denote states or spontaneous changes of state. No agent or causer argument is entailed in the verbal semantics.\footnote{Notably, inchoative (\rede{anticausative} in \citealt{Creissels2012_Lability}) readings do not always express events that do not have an agent or a causer argument. 
Sometimes, the A is merely not relevant for a certain event, and thus it is not part of the underlying concept of the event, and has to be left unexpressed, as shown e.g. for facilitative readings of anticausatives in Tswana by Creissels (ibid.).} In the corresponding transitive verb, a causer argument that brings about the event is added, and the P argument corresponds to the S of the intransitive verb. Examples \Next[a], \Next[c] and \Next[e] show the inchoative verbs with S undergoing a spontaneous change of state, while \Next[b], \Next[d] and \Next[f] show the corresponding transitive verbs with an A argument bringing about that change of state. The verb \emph{cimma}, meaning both \rede{learn} and \rede{teach}, basically belongs to the same alternation, but it has one additional argument. The intransitively inflected verb has two arguments and the transitively inflected verb has three arguments (see \sectref{itr-teach}). \ex. \ag. dailo hos-a=na.\\ door open{\sc [3sg]-pst=nmlz.sg}\\ \rede{The door opened.} \bg. a-ppa=ŋa dailo hos-uks-u=na.\\ {\sc 1sg.poss-}father{\sc =erg} door open{\sc -prf-3.P=nmlz.sg}\\ \rede{Father has opened the door.} \bg. siŋ eg-a=na.\\ wood break{\sc [3sg]-pst=nmlz.sg}\\ \rede{The piece of wood broke.} \bg.uŋ=ŋa siŋ eg-u=na.\\ {\sc 3sg=erg} wood break{\sc -3.P[pst]=nmlz.sg} \\ \rede{He broke the piece of wood.} \bg.phuama yupma=ci=bhaŋ cend-a=na.\\ last-born\_girl sleepiness{\sc =nsg=abl} wake\_up{\sc [3sg]-pst=nmlz.sg}\\ \rede{Phuama woke from her sleep.} \bg.ka uŋ cend-u-ŋ=na.\\ {\sc 1sg[erg]} {\sc 3sg} wake\_up{\sc -3sg.P[pst]-1sg.A=nmlz.sg}\\ \rede{I woke her up.} There are border cases of lability. In Yakkha, many events are expressed by complex predicates. In these predicates, the first stem contains the lexical verb, such as the labile stem \emph{khiks \ti khiŋ} \rede{stretch, grow} in \Next. The second verbal stem is from the closed class of function verbs (V2s, see Chapter \ref{verb-verb}); they specify the verbal semantics, for instance with regard to the temporal structure. In \Next[a], the V2 \emph{-kheʔ} \rede{go} emphasizes the telicity of the event. It is sensitive to transitivity, too. The V2 \emph{-kheʔ} is only compatible with intransitive interpretations (see ungrammatical \Next[b]). Thus, complex predication can have the secondary function of indicating transitivity features. \ex.\ag. ikhiŋ khiks-a({\bf -khy}-a)=naǃ\\ how\_much stretch{\sc [3sg]-pst(-V2.go-pst)=nmlz.sg}\\ \rede{How tall she becameǃ} \bg.a-laŋ=ci khiŋ({\bf *-kheʔ})-ma=ci.\\ {\sc 1sg.poss-}leg{\sc =nsg} stretch{\sc (*-V2.go)-inf[deont]=nsg}\\ Intended: \rede{I have to stretch my legs.} \paragraph*{Reflexive lability} \quad\newline\vskip-1ex \noindent The stems of this class alternate between a transitive reading and an intransitive reading with reflexive semantics. Strictly speaking, no argument is removed in reflexives, but the A and P have identical reference and collapse into one single intransitive subject role formally \citep[1134]{Haspelmath2004_Valency}. In the transitive reading, an external P argument is added. Typically, the verbs undergoing this alternation refer to actions involving the body. The examples in \Next illustrate the reflexive alternation with three verb pairs. \ex. \ag. uŋci=ŋa men-ni-ma=nuŋ cum-a-ŋ=na\\ {\sc 3nsg=erg} {\sc neg-}see{\sc -inf=com.cl} hide{\sc -pst-1sg=nmlz.sg} \\ \rede{I hid, so that they cannot see (me).} \bg.ripu=ŋa khorek cum-u=na\\ Ripu{\sc =erg} bowl hide{\sc -3.P[pst]=nmlz.sg} \\ \rede{Ripu hid the bowl.} \bg. 
ka=ca mimiʔ wasiʔ-a-ŋ=hoŋ, ...\\ {\sc 1sg=add} a\_little wash{\sc -pst-1sg=seq}\\ \rede{After washing myself a little, ...} \source{40\_leg\_08.050} \bg.a-nuncha wasiʔ-wa-ŋ=na\\ {\sc 1sg.poss-}younger\_sibling wash{\sc -npst[3.P]-1sg.A=nmlz.sg}\\ \rede{I wash my little sister.} \bg.a-chya (tek=ŋa) ept-a=na\\ {\sc 1sg.poss-}child (cloth{\sc =ins}) cover{\sc [3sg]-pst=nmlz.sg}\\ \rede{My child covered itself (with the blanket).} \bg.yenda ept-a-n-u-m\\ millet\_mash cover{\sc -pst-pl-3.P[imp]-2pl.A}\\ \rede{Cover the millet mash.} \subsection{Alternations in three-argument verbs}\label{three-arg} Alternations in three-argument verbs are mostly conditioned by pragmatic factors such as topicality or the referential properties of the arguments.\footnote{My investigation of referentiality effects in three-argument verbs (see also \citealt{Schackow2012_Referential}) has been inspired by the EUROBabel project Referential Hierarchies in Morphosyntax (RHIM) and a questionnaire on three-argument constructions, designed by Anna Siewierska and Eva van Lier (not published).} Typically, in events with three arguments, the G arguments (goals, recipients) are animate, definite and thus also more topic-worthy, whereas the T arguments have a strong tendency to be inanimate, indefinite and thus less topic-worthy. Events in which this expected scenario is reversed are more marked pragmatically, and this could be reflected in the morphosyntax of the clause (\citealt{Dryer1986Primary, Siewierska2003Person}, \citealt{Haspelmath2004Explaining, Haspelmath2005Argument, Haspelmath2007Ditransitive}, \citealt{Malchukovetal2010Ditrans-overview}). Some of the referential effects are found exclusively in three-argument verbs in Yakkha, for instance a case of hierarchical agreement, where the T and the G argument compete for an agreement slot. One has to distinguish between argument-based alternations, i.e. effects that are conditioned by the referential properties of only one argument, and scenario-based alternations, i.e. effects that are conditioned by the properties of both T and G in relation to each other. \paragraph*{The spray-load alternation}\quad\newline\vskip-1ex \noindent One class of verbs shows alternations between the indirective and the secundative frame, also known as \emph{spray-load alternation} \citep{Levin1993_English, Malchukovetal2010Studies, Malchukovetal2015_Valency}. Either the T argument is in the instrumental case and the G triggers object agreement on the verb (for the secundative frame, see \Next[a]), or the G argument is in the locative and the T triggers object agreement (for the indirective frame, see \Next[b]). \ex. \ag. ka makai=ŋa dalo ipt-wa-ŋ=na\\ {\sc 1sg[erg]} corn{\sc =ins} sack fill{\sc -npst[3.P]-1sg=nmlz.sg} \\ \rede{I filled the sack with corn.} (secundative) \bg. gagri=be maŋcwa ipt-u\\ pot{\sc =loc} water fill{\sc -3.P[imp]}\\ \rede{Fill the water into the pot.} (indirective) The verb \emph{ipma} \rede{fill} in \Last can only have inanimate G arguments. Verbs with a greater variability of possible arguments may show restrictions on this alternation. Some verbs, for instance, block the secundative frame when the G argument is inanimate, e.g. \Next[a], which renders the indirective frame the only possibility (see \Next[b]). In order to license the secundative frame, the G argument has to have the potential to be affected by the event \Next[c]. The verb \emph{lupma} \rede{scatter, disperse, strew} provides another example of this restriction. 
Again, the the secundative frame is the preferred option for animate G arguments, while the indirective is used when inanimate G arguments are involved \NNext[b] (context: the preparation of millet beer). In \NNext[a], the G argument is non-overt, but it has human reference, which can be inferred from the context: a funeral. \ex. \ag. *ka maŋcwa luŋkhwak=ŋa lept-u-ŋ=ha \\ {\sc 1sg[erg]} water stone{\sc =ins} throw{\sc -3.P[pst]-1sg=nmlz.nsg} \\ Intended: \rede{I threw a stone into the water.} (*secundative) \bg. ka lunkhwak maŋcwa=be lept-u-ŋ=na \\ {\sc 1sg[erg]} stone water{\sc =loc} throw{\sc -3.P[pst]-1sg=nmlz.sg} \\ \rede{I threw a stone into the water.} (indirective) \bg. ka nda luŋkhwak=ŋa lep-nen=na\\ {\sc 1sg[erg]} {\sc 2sg} stone{\sc =ins} throw{\sc [pst]-1>2=nmlz.sg} \\ \rede{I threw a stone at/to you.} (secundative) \ex. \ag. kham=ŋa lupt-u-ga=i\\ soil{\sc =ins} scatter{\sc -3.P[imp]-2=emph} \\ \rede{Cover him with sand.} \bg. yenda=be khawa lupt-u-g=ha=i?\\ millet\_mash{\sc =loc} yeast disperse{\sc -3.P[pst]-2=nmlz.nsg=q} \\ \rede{Did you add the yeast to the millet mash?} \paragraph*{Alternations related to the animacy of G}\label{loc-alt}\quad\newline\vskip-1ex \noindent One could see in the spray-load alternation that the unmarked nominative is preferred for animate, sentient G arguments. For some verbs, this results in alternations between the double object frame and the indirective frame. In \Next[a], the G argument is human, moreover it is a speech-act participant, and thus the highest on the referential hierarchy \citep{Silverstein1976Hierarchy}. Hence, the double object frame is chosen, the verb agrees with G, and both T and G are in the nominative. In \Next[b], the G has third person inanimate reference, and the frame changes to indirective, with G in the locative, and T triggering the agreement.\footnote{There is no number hierarchy at work in these alternations. The number of T is not the crucial factor, but nonsingular was chosen to illustrate the agreement.} \ex. \ag. ka nda sandhisa khuʔ-nen=na\\ {\sc 1sg[erg]} {\sc 2sg} present{\sc } bring{\sc [pst]-1>2=nmlz.sg}\\ \rede{I brought you a present.} \bg. uŋ=ŋa kitab(=ci) iskul=be khut-u-ci=ha\\ {\sc 3sg=erg} book{\sc (=nsg)} school{\sc =loc} bring{\sc -3.P[pst]-3nsg.P=nmlz.nsg}\\ \rede{He brought the books to school.} Some verbs only change the case marking of G without changing the agreement. The verb \emph{hambiʔma} \rede{distribute} is a benefactive derivation of \emph{hamma} \rede{distribute, divide, spread}. In the typical scenario, the G argument is referentially high, the T argument is low, and the argument realization follows the double object frame, as in \Next[a]. When the G argument changes to inanimate reference, as in example \Next[b], it has to be in the locative case, but the verb does not change to the indirective frame; and thus the agreement remains with G. Furthermore, instead of using the nonsingular marker \emph{=ci} on the G argument \emph{ten} \rede{village}, it is marked for nonsingular number by reduplication, which indicates a plurality of subevents. This kind of plural marking is not encountered when the G argument is human, as shown in example \Next[c]. \ex. \ag. ka nniŋda phoʈo(=ci) ham-biʔ-meʔ-nen-in=ha\\ {\sc 1sg[erg]} {\sc 2pl} photo{\sc (=nsg)} distribute{\sc -V2.give-npst-1>2-2pl=nmlz.nsg} \\ \rede{I distribute the photos among you.} \bg. 
sarkar=ŋa yaŋ ten-ten=be ŋ-haps-u-bi-ci=ha\\ government{\sc =erg} money village-village{\sc =loc} {\sc 3pl.A-}distribute{\sc -3.P[pst]-V2.give-3nsg.P=nmlz.nsg}\\ \rede{The government distributed the money among the villages.} \bg. ka piccha=ci yaŋ haps-u-bi-ŋ-ci-ŋ=ha\\ {\sc 1sg[erg]} child{\sc =nsg} money{\sc } distribute{\sc -3.P[pst]-V2.give-1sg.A-nsg.P-1sg.A=nmlz.nsg}\\ \rede{I distributed the money among the children.} \paragraph*{Scenario-based alternations}\label{scen-based}\quad\newline\vskip-1ex \noindent Not only case marking, but also the verbal person marking can be subject to reference-based alternations. The Yakkha verb agrees with only one object, so that there is the potential for competition between T and G arguments as to which argument will trigger the agreement. The universal tendency for agreement to be triggered by arguments that are speech act participants, animate or topical has already been mentioned by \citet{Givon1976Topic}. This tendency can lead to hierarchical alignment of agreement, understood as agreement that is not determined by syntactic roles but by the referential properties of the arguments \citep[66]{Nichols1992Language}. This is well-studied for monotransitive verbs, but not for three-argument verbs.\footnote{The most prominent example for hierarchical alignment in ditransitives is the Yuman language Jamul Tiipay (\citet[162--163]{Miller2001A-grammar}, discussed e.g. in \citealt[348]{Siewierska2003Person}).} Two verbs of the double object class allow animate/human T arguments, namely \emph{soʔmeʔma} \rede{show} and \emph{cameʔma} \rede{feed}. Etymologically, both verbs are causatives, but they show the same behavior as non-derived verbs. Usually, the verb shows object agreement with G in this frame (see \Next[a]), but when G has third person reference and T is a speech act participant (\textsc{sap}), the verb agrees with T instead of G. The case marking of G also changes to locative, so that the verb now belongs to the indirective frame (see \Next[b]). \ex. \ag. a-ni-ŋa ka u-phoʈo soʔmet-a-ŋ=na\\ {\sc 1sg.poss-}elder.sister{\sc =erg} {\sc 1sg} {\sc 3sg.poss-}photo show{\sc -pst-1sg.P=nmlz.sg}\\ \rede{My elder sister showed me her photo.} (T[3]→G[\textsc{sap}]) \bg. ka nda appa-ama=be soʔmeʔ-nen=na\\ {\sc 1sg[erg]} {\sc 2sg} mother-father{\sc =loc} show{\sc [pst]-1>2=nmlz.sg}\\ \rede{I showed you to my parents.} (T[\textsc{sap}]→G[3]) This alternation is scenario-based, as it only applies in the T[\textsc{sap}]→G[3] constellation. In \Next, both T and G are are speech-act participants, and the agreement remains with the G argument. This scenario is also pragmatically marked, which is why locative marking on G is possible (though not obligatory) here. \exg. uŋ=ŋa ka {\bf nniŋda(=be)} soʔmet-i-g=ha\\ {\sc 3sg=erg} {\sc 1sg} {\sc 2pl(=loc)} show{\sc [3sg.A;pst]-2pl-2=nmlz.nsg}\\ \rede{He showed me to you (plural).} (T[\textsc{sap}]→G[\textsc{sap}]) In some contexts, this may yield more than one interpretation. As it is always the speech-act participant that triggers the agreement, a clause like in \Next is ambiguous. Note that the two verbs differ with respect to the acceptability of the locative on G. The effects of the T[\textsc{sap}]→G[\textsc{sap}] scenario are summarized in \figref{t-sap-table}. \exg. ka nda kiba(*=be) cameʔ-meʔ-nen=na\\ {\sc 1sg[erg]} {\sc 2sg} tiger{\sc (*=loc) } feed{\sc -npst-1>2=nmlz.sg}\\ \rede{I will feed you to the tigerǃ} (T-agr) OR\\ \rede{I will feed the tiger to you!} (G-agr) \todo{I made this into a rcc table. 
Is this OK?} \begin{figure}[htp] \begin{center} \begin{tabular}{rcc} \lsptoprule & G[\textsc{sap}] & G[3]\\ \midrule T[\textsc{sap}] & V-o[G], G-{\sc loc/nom} & {\bf V-o[T], G-{\sc loc}} \emph{soʔmeʔma} \rede{show}\\ & & {\bf V-o[T], G-{\sc nom}} \emph{cameʔma} \rede{feed}\\ % \cline{1-1} \cline{3-3} T[3] & \multicolumn{2}{c}{V-o[G], G-{\sc nom}} \\ % & \multicolumn{2}{c|}{ } \\ % I do not understand the purpose of this line \lspbottomrule \end{tabular}\\ \caption{The effects of the T[\textsc{sap}]→G[3] scenario}\label{t-sap-table} \end{center} \end{figure} The same T[\textsc{sap}]→G[3] scenario may also restrict alternations. The verb \emph{nakma} (stem: \emph{nakt}) \rede{ask, beg} alternates (almost) freely between the double object frame (see \Next) and the indirective frame (see \NNext). It is the only verb that shows this alternation. The argument encoding is conditioned by the question of which argument is central in a given discourse. \ex. \ag. ka nda chemha nak-nen=na\\ {\sc 1sg[erg]} {\sc 2sg} liquor ask{\sc [pst]-1>2=nmlz.sg} \\ \rede{I asked you for liquor.} \bg. ka i=ya=ca n-nakt-a-ŋa-n!\\ {\sc 1sg} what{\sc =nmlz.nsg=add} {\sc neg-}ask{\sc -imp-1sg.P-neg} \\ \rede{Do not ask me for anything!} \source{27\_nrr\_06:25} \ex. \ag. uŋ=ŋa ka=be unipma nakt-u=ha\\ {\sc 3sg=erg} {\sc 1sg=loc} money ask{\sc -3.P[pst]=nmlz.nc} \\ \rede{He asked me for his money.} \bg. uŋ=ŋa appa-ama=be ka nakt-a-ŋ=na\\ {\sc 3sg=erg} mother-father{\sc =loc} {\sc 1sg} ask{\sc -pst-1sg.P=nmlz.sg} \\ \rede{He asked my parents for me (i.e. to marry me).} However, when the T is a speech act participant and the G is not, as in \Last[b], the indirective frame is the only option. Clauses like the one in \Next are ungrammatical. Thus, the particular scenario in which the T is a speech act participant and the G is a third person restricts the alternations in the argument realization of this verb. \exg. *uŋci ka n-nakt-u-n-ci-n\\ {\sc 3nsg} {\sc 1sg} {\sc neg-}ask{\sc -3.P[imp]-neg-nsg.P-neg}\\ Intended: \rede{Do not ask them for me.} The preceding section has shown how the argument realization in three-argument verbs can be conditioned by referential factors. The scenario T[\textsc{sap}]→G[3] leads to an obligatory change in person and case marking for the verbs \emph{soʔmeʔma} \rede{show} and \emph{cameʔma} \rede{feed}, and to a restriction in the alternation possibilities for the verb \emph{nakma} \rede{ask, beg}. Hierarchical alignment, partly combined with inverse marking, is also known from the verbal paradigms of other Tibeto-Burman languages, e.g. from rGyalrong \citep{Nagano1984A-historical}, Rawang \citep{LaPolla2007Hierarchical}, and to some extent from other Kiranti languages, too, like Hayu and Dumi \citep{Michailovsky2003Hayu, Driem1993A-grammar}. In the Yakkha verbal person marking, however, hierarchical alignment as it is found in the three-argument verbs shown above is not found in the monotransitive paradigms.\footnote{Several morphemes in Yakkha verbal person marking are scenario-sensitive, see \sectref{verb-infl}. However, the alignment of the verbal person marking in Yakkha is too heterogenous to be captured by one principle or one hierarchy. It also includes ergative, accusative, tripartite and neutral alignment (cf. also \citet{Witzlacketal2011_Decomposing} for a Kiranti-wide study).}
{ "alphanum_fraction": 0.739840824, "avg_line_length": 103.6893203883, "ext": "tex", "hexsha": "9c0f99c61659250671655094d6fd6331f7f35ba5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "37a7473097d2c8ed7787bfda95096b940d2db6c5", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/66", "max_forks_repo_path": "chapters/10a_ValClasses.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "37a7473097d2c8ed7787bfda95096b940d2db6c5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/66", "max_issues_repo_path": "chapters/10a_ValClasses.tex", "max_line_length": 1585, "max_stars_count": null, "max_stars_repo_head_hexsha": "37a7473097d2c8ed7787bfda95096b940d2db6c5", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/66", "max_stars_repo_path": "chapters/10a_ValClasses.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6528, "size": 21360 }
\documentclass[letterpaper,12pt]{article} % \documentclass[a4paper,12pt]{article} % twocolumn letterpaper 10pt 11pt twoside % for other type sizes, 8, 9, 10, 11, 12, 14pt, 17pt, 20pt % \documentclass[14pt]{extarticle} % also extbook, extletter available % \usepackage{extsizes} %\usepackage{endnotes} % then put \theendnotes where you want them \usepackage{times} \usepackage{xspace} %\usepackage{alltt} \usepackage{fancyvrb} % \begin{Verbatim}[fontsize=\small] % or [fontsize=\footnotesize] %\usepackage{upquote} % affects \verb and verbatim % to get straight quotes, straight single quote, straight double % quotes in verbatim environments %\usepackage{latexsym} % \LaTeX{} for LaTeX; \LaTeXe{} for LaTeX2e %\usepackage{mflogo} % \MF{} for METAFONT; \MP for METAPOST \usepackage{url} % %\url{http://www.xrce.xerox.com/people/beesley}I %\usepackage{lscape} % allows \begin{landscape} ... \end{landscape} %\usepackage{tipa} %\include{ipamacros} % my macros to allow same input for DA and IPA %\usepackage{desalph} %\usepackage{arabtex} % see usepackage{buck} and setcode{buck} below %\usepackage{buck} %\usepackage{mxedruli} %\usepackage{epsfig} %\usepackage{pslatex} % make whole doc. use postscript fonts % parallel columns, see also multicol %\usepackage{parcolumns} %... %\begin{parcolumns}[<options>]{3} %\colchunk{ column 1 text } %\colchunk{ column 2 text } %\colchunk{ column 3 text } %\colplacechunks %... %\end{parcolumns} % for more of these names, see Guide to LaTeX, p. 351 %\providecommand*{\abstractname}{} % in case the style defines one %\renewcommand*{\abstractname}{Transcriber notes} %\renewcommand*{\figurename}{Figure} %\renewcommand*{\tablename}{Table} %\renewcommand*{\bibname}{Bibliography} %\renewcommand*{\refname}{References} \providecommand{\acro}{}\renewcommand{\acro}{\textsc} \providecommand{\defin}{}\renewcommand{\defin}{\textsc} \newcommand{\xmlelmt}{\texttt} \newcommand{\xmlattr}{\texttt} \newcommand{\key}{\textbf} \newcommand{\translit}{\texttt} % forced pagebreak %\newpage %\usepackage{ulem} % \uline{important} underlined text % \uuline{urgent} double-underlined text % \uwave{boat} wavy underline % \sout{wrong} line drawn through word (cross out, strike out) % \xout{removed} marked over with //////. % {\em phasized\/} | In LaTeX, by default, these are underlined; use % \emph{asized} | \normalem or [normalem] to restore italics % \useunder{\uwave}{\bfseries}{\textbf} % use wavy underline in place of bold face % \usepackage{natbib} %\usepackage[authoryear]{natbib} % compatible with \bibliographystyle{plain}, harvard, apalike, chicago, astron, authordate %\citet for "textual" \citet{jon90} -> Jones et al. (1990) %\citet[before][after]{key} e.g. \citet[see][p.~47]{jon90} --> % see Jones et al.(1990, chap. 2) %\citet[chap. 2]{jon90} --> Jones et al. (1990, chap. 2) %\citet[after]{key} % citep for "parenthetical" %\citep{jon90} --> (Jones et al., 1990) %\citep[chap. 2]{jon90} --> (Jones et al., 1990, chap. 2) %\citep[see][]{jon90} --> (see Jones et al., 1990) %\citep[see][chap. 2]{jon90} --> (see Jones et al., 1990, chap. 2) %\citep for "parenthetical" (author's name in parens) %\citep similar % %\citet*{key} list all authors, not just et.al %\citetext{priv.\ comm.} comes out as (priv. comm.) % %just the author or year %\citeauthor{key} comes out as "Jones et al." 
%\citeauthor*{key} comes out as "Jones, Sacco and Vanzetti" %\citeyear{key} comes out as 1990 %\citeyearpar{key} (1990) % %Rare stuff: %use \Citet and \Citep for exceptional forcing of initcap on names %like 'della Robbia' when it appears first in a sentence. % %\citealt like \citet but without parens %\citealp like \citep but without parens % % fancyheadings from The Book (old, obsolete, I think) %\usepackage{fancyheadings} %\pagestyle{fancyplain} % remember the chapter title %\renewcommand{\chaptermark}[1]{\markboth{#1}{}} %\renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}} %\lhead[\fancyplain{}{\small\scshape\thepage}]{\fancyplain{}{\small\scshape\rightmark}} %\rhead[\fancyplain{}{\small\scshape\leftmark}]{\fancyplain{}{\small\scshape\thepage}} %\cfoot{} % new fancyhdr package %\usepackage{fancyhdr} %\pagestyle{fancy} %\fancyhead{} %% L/C/R denote left/center/right header (or footer) elements %% E/O denote even/odd pages %% \leftmark, \rightmark are chapter/section headings generated by the %% book document class %\fancyhead[LE,RO]{\slshape\thepage} %\fancyhead[RE]{\slshape \leftmark} %\fancyhead[LO]{\slshape \rightmark} %\fancyfoot[LO,LE]{\slshape Short Course on Asymptotics} %\fancyfoot[C]{} %\fancyfoot[RO,RE]{\slshape 7/15/2002} % another example %\fancyhead[LE]{\thepage} %\fancyhead[CE]{\bfseries Beesley} %\fancyfoot[CE]{First Draft} %\fancyhead[CO]{\bfseries My Article Title} %\fancyhead[RO]{\thepage} %\fancyfoot[CO]{For Review and Editing Only} %\renewcommand{\footrulewidth}{0.4pt} % \vspace{.5cm} % c, l, r, p{1cm} %\begin{tabular}{} %\hline % & & & \\ %\hline %\end{tabular} % \vspace{.5cm} % bigbox -- puts a box around a float % for {figure}, {table} or {center} \newdimen\boxfigwidth % width of figure box \def\bigbox{\begingroup % Figure out how wide to set the box in \boxfigwidth=\hsize \advance\boxfigwidth by -2\fboxrule \advance\boxfigwidth by -2\fboxsep \setbox4=\vbox\bgroup\hsize\boxfigwidth % Make an invisible hrule so that % the box is exactly this wide \hrule height0pt width\boxfigwidth\smallskip% % Some environments like TABBING and other LIST environments % use this measure of line size - % \LINEWIDTH=\HSIZE-\LEFTMARGIN-\RIGHTMARGIN? \linewidth=\boxfigwidth } \def\endbigbox{\smallskip\egroup\fbox{\box4}\endgroup} % example % \begin{figure} % \begin{bigbox} % \begin{whatever}...\end{whatever} % \caption{} % \label{} % \end{bigbox} % \end{figure} % % N.B. put the caption and label inside the bigbox \usepackage{graphicx} % Sample Graphics inclusion; needs graphicx package %\begin{figure}[ht] %\begin{bigbox} %\centering %\includegraphics{foobar.pdf} # e.g. PNG, PDF or JPG, _not_ EPS %\caption{} %\label{lab:XXX} %\end{bigbox} %\end{figure} %\pagestyle{empty} % to suppress page numbering % turn text upside down %\reflectbox{\textipa{\textlhookp}} % prevent line break: \mbox{...} \hyphenation{hy-po-cri-tical ri-bald} %%%%%%%%%%%%%%%%%%%% title %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \title{Optimization of Networks in OpenFst:\\ Third Draft\\ \emph{Corrections, Clarifications and Examples would be Very Welcome}} \author{Kenneth R.~Beesley} % to override automatic "today" date \date{30 November 2009} %%%%%%%%%%%%%%%%%%%%%% document %%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \maketitle \begin{abstract} The purpose of this paper is to explore the practical possibilities for routine, mathematically safe\footnote{Optimization can cause networks to ``blow-up'' in size, or take an inordinately long time to compute, overwhelming your finite computer resources. 
These blow-up problems are not the subject of this paper, which is concerned with ``mathematically safe'' operations.} optimization of finite-state networks in OpenFst, where ``optimization'' means to reduce the number of states and arcs as much as possible using the current off-the-shelf OpenFst algorithms Determinize(), Minimize(), RmEpsilon() and sometimes Encode() and Decode(). The results of this optimization may not be fully determinized and minimized in the mathematical sense, and that's OK. The point here is just to reduce the size of networks as much as possible, considering the properties of each network, and the limitations of the current OpenFst algorithms.

I'm not an expert in OpenFst, regular-language and transducer theory (especially when weights and semirings are involved), or in C++. Corrections, clarifications and examples would be genuinely very welcome.
\end{abstract}

\section{Motivation}

In the Kleene programming language, which I'm building on top of the OpenFst library, the principal way to define networks (where the term \emph{network} is used herein to cover both acceptors and 2-tape transducers) is via regular expressions, e.g.

\begin{Verbatim}[fontsize=\footnotesize]
$net = [A-Za-z]+s? | [A-Za-z][A-Za-z0-9]* |
       ( ( dog | cat | rat) & (fly | cat) ) ;
\end{Verbatim}

\noindent
Such regular expressions are parsed into Abstract Syntax Trees (ASTs), which are then walked by an interpreter that calls algorithms in the OpenFst library. An individual regular-expression symbol like \texttt{d} is interpreted to produce the following simple network.\footnote{In fact, the labels in OpenFst networks are stored as integers, and in Kleene these integers are Unicode code point values, but I'll use the more readable alphabetic symbols in examples.}

\begin{center}
\includegraphics[scale=0.5]{images/d.jpg}
\end{center}

\noindent
Interpretation of the regular expression \texttt{dog} starts with the building of simple networks for \texttt{d}, \texttt{o} and \texttt{g}, and then combining them via the Concat() algorithm. Interpretation of the entire regular expression may involve creating many basic networks, combining them step by step into successively ``larger''\footnote{The networks might not, of course, actually get larger.} networks via many calls to Concat(), Union(), Intersect(), Closure() and other functions. These operations usually introduce epsilon arcs, new states, and non-determinism that can, in many instances, be eliminated via calls to Determinize(), Minimize() and RmEpsilon(). The purpose of this paper is to explore how and when Determinize(), Minimize() and RmEpsilon() can be invoked automatically during the interpretation of a regular expression.

\section{Preliminaries}

The following statements reflect my understanding, and I'd be most grateful for corrections:

\begin{enumerate}

\item I will distinguish between theoretical/mathematical determinization and the Determinize() algorithm provided in OpenFst. The current Determinize() algorithm in OpenFst has limitations that do not hold for theoretical/mathe\-matical determinization. I will therefore distinguish between \emph{determinize} and \emph{Determinize()}, between \emph{determinization} and \emph{Determinize()ation}, between \emph{determinized} and \emph{Determinize()d}, etc.

\item In particular, the current Determinize() algorithm requires that the argument network be \emph{functional}.\footnote{Definition of Functional: for every input string, there is a single output string.
The input string may match multiple paths in a transducer, but the output must be the same for each path.} Acceptors are always functional, but transducers may be functional or non-functional. Unfortunately, there is no easy or readily-available algorithm at this time for testing functionality. \item We are informed by the OpenFst team that a future version of Determinize() will not be restricted to functional networks. \item The Determinize() algorithm is intended, where possible, to determinize the \emph{input} side of a transducer. This is called input-side \emph{sequentialization} in some traditions. \item The current OpenFst Determinize() algorithm is written to treat weighted acceptors as basic. (This, at least, is what I understood from Johan Schalkwyk.) \item The Determinize() algorithm is the algorithm described by Mohri in his 1997 Computational Linguistic paper.\footnote{\emph{Finite-State Transducers in Language and Speech Processing}} In the weighted automaton (i.e.\@ weighted acceptor) case, the implementation follows exactly Mohri's paper. A weighted transducer is dealt by considering the weighted transducer over semiring K as the weighted automaton over the semiring obtained by taking the cross product of the string semiring and the semiring K. (From Cyril Allauzen, 13 Nov 2008.) \item The Encode() algorithm can be used to reduce transducers to weighted acceptors so that they can be operated on safely by the current Determinize() algorithm. \item Some networks cannot be determinized (a theoretical/mathematical limitation) because the algorithm will never terminate. \begin{enumerate} \item Unweighted \emph{acceptors} can always be determinized and Determinize()d. \item If a transducer or acceptor is acyclic (no loops), it can always be determinized, but it may not always be Determinize()d (using the current OpenFst Determinize() algorithm) because the current Determinize() algorithm has the additional requirement that the network be functional. \item Determinization may not terminate for transducers and for weighted acceptors. (MarkusD - 20 Jan 2009) ``[T]he determinization algorithm does not halt for certain weighted finite-state machines. This is described in Mohri 2004.\footnote{\url{http://www.cs.nyu.edu/~mohri/postscript/fla.pdf}} If the machine contains `sibling states' then they must be `twins'. Otherwise, the machine is not determinizable (under the commonly used semirings). Roughly, two states are sibling states if they are reachable from the start state on different paths that have the same label sequence, and there are cycles at these two states that have similar labels. If these two cycles have the same weight then they are twins. On this machine,\footnote{MarkusD's example is shown in AT\&T text format for acceptors, where each line indicates the SourceState, DestinationState, ArcLabel and Weight.} for example, fstdeterminize doesn't terminate: \begin{Verbatim}[fontsize=\small] 0 1 1 1 0 2 1 2 1 1 2 3 1 3 3 5 2 2 2 4 2 3 4 6 3 \end{Verbatim} \item To be Determinize()d, a transducer must be both acyclic and functional. \item There is, as far as I know, no easy way in OpenFst to test if a network is functional. \end{enumerate} \item But many networks that cannot be determinized can still be Determinize()d if they are properly Encode()d first. The result may not be mathematically determinized, but the number of states and arcs may still be reduced. \item A network must be determinized before it can be minimized. 
A network must be Determinize()d before it can be Minimize()d. \end{enumerate} \section{Encode() and Decode()} OpenFst offers the Encode() algorithm, which offers two interesting options for optimization: encoding just the \emph{labels}, or encoding both \emph{labels and weights}. \subsection{Encode()ing Labels for Optimizing Acyclic Networks} \subsubsection{The Code} Recall that if a network is acyclic and functional, it can be Determinize()d; but recall also that there is no available test in OpenFst for functionality. So to safely optimize acyclic networks, we can first ``Encode() the labels''. This means that each input:output label is reduced to a single integer, effectively reducing the network to an acceptor (weighted or unweighted). To Encode() the labels, one first instantiates an EncodeMapper, specifying kEncodeLabels thus: \begin{Verbatim}[fontsize=\footnotesize] EncodeMapper<Arc> encoder(kEncodeLabels, ENCODE) ; \end{Verbatim} \noindent When fstp is a pointer to an acyclic FST, this EncodeMapper is then used in the following way: \begin{Verbatim}[fontsize=\footnotesize] EncodeMapper<Arc> encoder(kEncodeLabels, ENCODE) ; // for Encode, pass a ptr to encoder Encode(fstp, &encoder) ; // determinize the network in place *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; // for Decode, pass the encoder itself (not a pointer) Decode(fstp, encoder) ; \end{Verbatim} The idiom to determinize a network in place was kindly supplied by Cyril Allauzen, and is used throughout this paper. The off-the-shelf Determinize() algorithm is non-destructive, creating and returning a new network. \subsubsection{Why Encode() Labels for Acyclic Networks?} As explained by Cyril Allauzen, ``The library currently only supports the determinization of functional transducers (if two successful paths have the same input label, they need to also have the same output label). The reason for that is that we use the weighted automata [i.e.\@ weighted acceptor] determinization algorithm [i.e.\@ this is the current OpenFst Determinize() algorithm], viewing the output labels as weights in the String semiring. \subsubsection{Example} Because the current Determinize() algorithm requires that any acylic transducers to be Determinize()d also be functional, it crashes if you try to Determinize() even the following trivial network: \begin{center} \includegraphics[scale=0.5]{images/aab.jpg} \end{center} \noindent This example is not functional because the input string ``a'' has two outputs, ``a'' and ``b''. After the labels are Encode()d, label \texttt{a:a} is reduced to one integer, and label \texttt{a:b} is reduced to a different integer, and the Determinize() algorithm runs on the functional result. After Encode()ing the labels, the result is an acceptor, and acceptors are always functional. After the network is Decode()d, note that the result is not really determinized in the mathematical sense at all. But this encoding-the-labels trick can still help to optimize many acyclic networks, reducing the number of states and arcs. \subsection{Encode()ing Labels and Weights} In the previous subsection, we looked at \emph{a}cyclic networks and optimization; now we're going to look at \emph{cyclic} networks and optimization. If a network is \emph{cyclic} and, in addition, the semiring is idempotent, then it can still be Determinize()d if you first Encode() both the labels \emph{and} the weights. 
This reduces the network to an unweighted acceptor, with each input:output/weight triple reduced to a single integer. \subsubsection{The Code} The code to Encode() both labels and weights looks like this: \begin{Verbatim}[fontsize=\footnotesize] EncodeMapper<Arc> encoder(kEncodeLabels | kEncodeWeights, ENCODE) ; // for Encode, pass a ptr to encoder Encode(fstp, &encoder) ; // determinize in place *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; // for Decode, pass the encoder itself (not a pointer) Decode(fstp, encoder) ; \end{Verbatim} \subsubsection{Why Encode() Labels \emph{and} Weights for Cyclic Networks?} In general, you can't safely determinize cyclic networks, but Encode()ing the labels and weights reduces the network to an unweighted acceptor, which is always determinizable and Determinize()able. The mathematical restriction (explained to me by Andr\'e Kempe) is that this encode-labels-and-weights trick is valid only for networks under \emph{idempotent} semirings, which include the Tropical Semiring. (The Tropical Semiring is the standard/default semiring in OpenFst, and Kleene is currently limited to the Tropical Semiring.) Recall that an idempotent semiring has that property that for any weight w, Plus(w, w) = w. In the Tropical Semiring, the Plus() operation is min(), and obviously min(w, w) = w. The following example network highlights the restriction. \begin{center} \includegraphics[scale=0.5]{images/cyclic.jpg} \end{center} \noindent The start state has two output arcs with the same label and the same weight. When Encode() is invoked for both labels and weights, the two arcs will have exactly the same single-integer label. Then when Determinize() is invoked, the sameness of the two arcs will be recognized, and the two arcs will be conflated to one. After decoding, the original input:output labels and weight will be restored. However, when two arcs are conflated in this way, the weight should be the Plus() of the two original weights. For idempotent semirings, the original weight on both arcs (here 0.5) is the appropriate result weight; but if the Plus() operation were simple addition, for example, the decoded result would have an incorrect weight. \section{Determinize() and Minimize() Twice} The Determinize() and Minimize() algorithms treat the epsilon as a normal symbol. Cyril Allauzen has pointed out that the complexity of RmEpsilon() is quadratic, so you want to reduce the size of a machine first using Determinize() and Minimize() before invoking RmEpsilon(); and then call Determinize() and Minimize() again. Thus, as a general rule, one would want to do the following (pseudo-code): \begin{Verbatim}[fontsize=\footnotesize] Determinize() Minimize() if (!isEpsilonFree()) { RmEpsilon() Determinize() Minimize() } \end{Verbatim} \section{An Algorithm for Optimization} As best I understand things, the general algorithm for mathematically-safe\footnote{Again, I understand that networks can blow up in size, or require an inordinate amount of processing time, overwhelming your computer even if the operations are mathematically safe. 
Kleene will have to provide some way to turn off automatic optimization for cases that blow up.} optimization should be the following: \begin{Verbatim}[fontsize=\footnotesize] if (isUnweighted(fstp) && isAcceptor(fstp)) { // no need to Encode // unweighed acceptors can always be determinized and Determinize()d // determinize in place *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; if (!isEpsilonFree(fstp)) { RmEpsilon(fstp) *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; } } elsif (isAcyclic(fstp)) { // any acyclic network can be determinized, // however, to Determinize() a network, using the current OpenFst // Determinize() algorithm, it must also be Functional. // To get around this restriction (and avoid crashes), // Encode() the labels, reducing the network to an acceptor (which // is always functional) EncodeMapper<Arc> encoder(kEncodeLabels, ENCODE) ; Encode(fstp, &encoder) ; *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; Decode(fstp, encoder) ; if (!isEpsilonFree(fstp)) { RmEpsilon(fstp) ; Encode(fstp, &encoder) ; *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; Decode(fstp, encoder) ; } } elsif (isIdempotent(fstp)) { // The network is cyclic // and the semiring is idempotent, then encode both labels and weights. // The semiring of the FST can be tested in C++ using // fstp->Type(), e.g. // if (fstp->Type() == "standard") EncodeMapper<Arc> encoder(kEncodeLabels | kEncodeWeights, ENCODE) ; Encode(fstp, &encoder) ; *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; Decode(fstp, encoder) ; if (!isEpsilonFree()) { RmEpsilon(fstp) ; Encode(fstp, &encoder) ; *fstp = DeterminizeFst<Arc>(*fstp, DeterminizeFstOptions<Arc>(CacheOptions(true, 0))); Minimize(fstp) ; Decode(fstp, encoder) ; } } else { RmEpsilon(fstp) ; } \end{Verbatim} \noindent The actual C++ code for this algorithm is called \texttt{optimizeInPlace} and is found in \texttt{kleeneopenfst.cc}, which is the C++ bridge between Kleene (written in Java) and the OpenFst library, which is written in C++. \section{Still to be Resolved} \begin{itemize} \item Kleene is currently limited to the default Tropical Semiring. Generalizing it to handle multiple semirings will be a major operation.\footnote{The Lextools family of programming languages, built on the old AT\&T finite-state library by Richard Sproat, were always limited to the default Tropical Semiring.} \item Is there a better way, other than \texttt{fstp->Type()}, to retrieve the semiring or determine if the semiring of a network is idempotent? \item If a network is cyclic and not idempotent, it is always safe to call RmEpsilon()? \item Other questions or problems? \end{itemize} \appendix % causes subsequent sections to be lettered rather than numbered %\section*{} % to suppress lettering A, B, C of appendices \section{Determinization and Minimization at PARC} Some people reading this document will have experience with the Xerox/PARC Finite State Toolkit. 
In the Xerox/PARC implementation, determinization and minimization are invoked routinely on all intermediate and final networks, but this is possible only because \begin{itemize} \item The Xerox/PARC system handles only \emph{un}weighted networks, and \item Each arc label is stored as a single integer (which is a key into a separate table that contains the separate input and output labels for transducers) \item The determinize and minimize operations see only the single integer label on each arc \end{itemize} \noindent Thus the Xerox/PARC networks lack weights, and the labels on arcs are like OpenFst labels after the labels have been Encode()d. As far as the Xerox/PARC determinize and minimize algorithms are concerned, each network is always just an unweighted acceptor. The corollary of this approach is that the Xerox/PARC implementation (at least the publicly available one) doesn't offer input-side sequentialization (which is what the OpenFst Determinize() is supposed to do). \end{document}
{ "alphanum_fraction": 0.7436269511, "avg_line_length": 34.9766803841, "ext": "tex", "hexsha": "1a38a0b81232d37cd515879a4a28f3fa4eec5e55", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-08-24T12:16:26.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-20T03:29:18.000Z", "max_forks_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "cscott/kleene-lang", "max_forks_repo_path": "doc/engineering/notes/optimize/optimize.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "cscott/kleene-lang", "max_issues_repo_path": "doc/engineering/notes/optimize/optimize.tex", "max_line_length": 155, "max_stars_count": 8, "max_stars_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "cscott/kleene-lang", "max_stars_repo_path": "doc/engineering/notes/optimize/optimize.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-08T04:23:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-13T05:56:54.000Z", "num_tokens": 6967, "size": 25498 }
% File: AnalyticalDuctModes.tex
% Created: Fri Feb 11 10:00 PM 2022 E
% Last Change: Fri Feb 11 10:00 PM 2022 E
%
\documentclass[a4paper]{report}
\usepackage{mathtools}
\begin{document}
Starting with equation 2.28 (Wave Equation) in Kousen's paper,
\begin{equation}
\frac{1}{A^2}\frac{D^2\tilde{p}}{Dt^2} - \nabla^2 \tilde{p} =
2 \bar{\rho} \frac{d V_x}{d x} \frac{\partial \tilde{v}_r}{ \partial x}
\label{eqn:KousensWaveEquation}
\end{equation}
let us look at the no-flow case, for which $dV_x/dx = 0$ and the right-hand side is zero,
\begin{align*}
\frac{1}{A^2}\left( \frac{\partial^2 \tilde{p}}{\partial t^2} + \vec{V}\cdot \vec {\nabla} (\tilde{p}) \right) - \nabla^2 \tilde{p} &= 0
\end{align*}
Substituting the definitions for $\nabla$ and $\nabla^2$ in cylindrical coordinates gives,
\begin{align*}
\frac{1}{A^2}\left( \frac{\partial^2 \tilde{p}}{\partial t^2} +
\vec{V}\cdot \left(
\frac{\partial \tilde{p}}{\partial \tilde{r}} +
\frac{1}{\tilde{r}}\frac{\partial \tilde{p}}{\partial \theta} +
\frac{\partial \tilde{p}}{\partial x}
\right) \right) -
\left(
\frac{\partial^2 \tilde{p}}{\partial \tilde{r}^2} +
\frac{1}{\tilde{r}}\frac{\partial \tilde{p}}{\partial \tilde{r}} +
\frac{1}{\tilde{r}^2} \frac{\partial^2 \tilde{p}}{\partial \theta^2} +
\frac{\partial^2 \tilde{p}}{\partial x^2}
\right) &= 0
\end{align*}
Setting $\vec{V} = 0$,
\begin{align*}
\frac{1}{A^2}\left( \frac{\partial^2 \tilde{p}}{\partial t^2} \right) -
\left(
\frac{\partial^2 \tilde{p}}{\partial \tilde{r}^2} +
\frac{1}{\tilde{r}}\frac{\partial \tilde{p}}{\partial \tilde{r}} +
\frac{1}{\tilde{r}^2} \frac{\partial^2 \tilde{p}}{\partial \theta^2} +
\frac{\partial^2 \tilde{p}}{\partial x^2}
\right) &= 0
\end{align*}
Recall that $\tilde{p} = p/\bar{\rho} A^2$.
To re-dimensionalize the equation, this is substituted and both sides are multiplied by $\bar{\rho}A^2$,
\begin{align*}
\frac{1}{A^2}\left( \frac{\partial^2 {p}}{\partial t^2} \right) -
\left(
\frac{\partial^2 p}{\partial r^2} +
\frac{1}{\tilde{r}}\frac{\partial p}{\partial r} +
\frac{1}{\tilde{r}^2} \frac{\partial^2 p}{\partial \theta^2} +
\frac{\partial^2 p}{\partial x^2}
\right) &= 0
\end{align*}
The process of separation of variables (\emph{separatio indeterminatarum}) was first written down and formalized by Johann Bernoulli in a letter to Leibniz.
The method of separation of variables requires an assumed solution as well as initial and boundary conditions.
For a partial differential equation, the assumed solution can be built from solutions to a system of ordinary differential equations that together comprise the partial differential equation.
Since $p$ is a function of four variables, the solution is assumed to be a product of four single-variable functions.
Each factor is assumed to be a combination of complex exponentials (Euler's identity), a common ansatz for linear partial differential equations and boundary conditions.
Defining,
\begin{equation}
p(x,r,\theta,t) = X(x) R(r) \Theta(\theta) T(t)
\end{equation}
where,
\begin{align*}
X(x) &= A_1 e^{ik_x x} + B_1 e^{-ik_x x }\\
\Theta(\theta) &= A_2 e^{i k_{\theta} \theta } + B_2 e^{-ik_{\theta} \theta }\\
T(t) &= A_3 e^{i \omega t } + B_3 e^{-i\omega t }
\end{align*}
The next step is to rewrite the wave equation in terms of $X$, $R$, $\Theta$, and $T$.
To further simplify the result, each term is divided by $p$.
Before the substitution, the derivatives of the assumed solutions need to be evaluated.
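For orientation, and anticipating the next three subsections, each of the exponential factors above turns out to satisfy a simple harmonic ordinary differential equation, so that after dividing by $p$ only constants remain,
\begin{align*}
\frac{1}{T}\frac{d^2 T}{d t^2} = -\omega^2, \qquad
\frac{1}{\Theta}\frac{d^2 \Theta}{d \theta^2} = -k_{\theta}^2, \qquad
\frac{1}{X}\frac{d^2 X}{d x^2} = -k_x^2 ,
\end{align*}
while $R(r)$ is left unspecified for now; its governing equation is derived below.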
\subsubsection{Temporal Derivatives} \begin{align*} \frac{\partial p}{\partial t} &= \frac{\partial }{\partial t} \left( XR\Theta T \right) \\ &= XR\Theta\frac{\partial T}{\partial t} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial p}{\partial t} &= \frac{ 1}{X R \Theta T} \left( XR\Theta\frac{\partial T}{\partial t} \right) \\ &=\frac{ 1}{ T}\frac{\partial T}{\partial t} \end{align*} \begin{align*} \frac{\partial^2 p}{\partial t^2} &= \frac{\partial^2 }{\partial t^2} \left( XR\Theta T \right) \\ &= XR\Theta\frac{\partial^2 T}{\partial t^2} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial^2 p}{\partial t^2} &= \frac{ 1}{X R \Theta T} \left( XR\Theta\frac{\partial^2 T}{\partial t^2} \right) \\ &=\frac{ 1}{ T}\frac{\partial^2 T}{\partial t^2} \end{align*} \begin{align*} \frac{\partial T}{\partial t} &= \frac{\partial}{\partial t} \left( A_3 e^{i \omega t} + B_3 e^{-i \omega t} \right) \\ &= \frac{\partial}{\partial t} \left(A_3 e^{i \omega t} \right) + \frac{\partial}{\partial t} \left(B_3 e^{-i \omega t} \right)\\ &= i \omega A_3 e^{i \omega t} - i \omega B_3 e^{i \omega t} \end{align*} \begin{align*} \frac{\partial^2 T}{\partial t^2} &= \frac{\partial^2}{\partial t^2} \left( i \omega A_3 e^{i \omega t} + i \omega B_3 e^{-i \omega t} \right) \\ &= \frac{\partial^2}{\partial t^2} \left(i \omega A_3 e^{i \omega t} \right) + \frac{\partial^2}{\partial t^2} \left(- i \omega B_3 e^{-i \omega t} \right)\\ &= (i \omega)^2 A_3 e^{i \omega t} - (i \omega)^2 B_3 e^{i \omega t} \end{align*} \begin{align*} \frac{1}{T}\frac{\partial^2 T}{\partial t^2} &= (i\omega)^2 \\ &= -\omega^2 \end{align*} \subsubsection{Radial Derivatives} \begin{align*} \frac{\partial p}{\partial r} &= \frac{\partial }{\partial r} \left( XR\Theta T \right) \\ &= X\Theta T\frac{\partial R}{\partial r} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial p}{\partial r} &= \frac{ 1}{X R \Theta T} \left( X\Theta T\frac{\partial R}{\partial r} \right) \\ &=\frac{ 1}{ R}\frac{\partial R}{\partial r} \end{align*} \begin{align*} \frac{\partial^2 p}{\partial r^2} &= \frac{\partial^2 }{\partial r^2} \left( XR\Theta T \right) \\ &= X\Theta T\frac{\partial^2 R}{\partial r^2} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial^2 p}{\partial r^2} &= \frac{ 1}{X R \Theta T} \left( X\Theta T \frac{\partial^2 R}{\partial r^2} \right) \\ &=\frac{ 1}{ R}\frac{\partial^2 R}{\partial r^2} \end{align*} The radial derivatives will be revisited once the remaining derivatives are evaluated, \subsubsection{Tangential Derivatives} \begin{align*} \frac{\partial p}{\partial \theta } &= \frac{\partial }{\partial t} \left( XR\Theta T \right) \\ &= XRT\frac{\partial \Theta}{\partial \theta} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial p}{\partial \theta} &= \frac{ 1}{X R \Theta T} \left( XR\Theta\frac{\partial T}{\partial \theta} \right) \\ &=\frac{ 1}{ \Theta}\frac{\partial \Theta}{\partial \theta} \end{align*} \begin{align*} \frac{\partial^2 p}{\partial \theta^2} &= \frac{\partial^2 }{\partial \theta^2} \left( XR\Theta T \right) \\ &= XRT\frac{\partial^2 \Theta }{\partial \theta^2} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial^2 p}{\partial \theta^2} &= \frac{ 1}{X R \Theta T} \left( XRT\frac{\partial^2 \Theta}{\partial \theta^2} \right) \\ &=\frac{ 1}{ \Theta}\frac{\partial^2 \Theta}{\partial \theta^2} \end{align*} \begin{align*} \frac{\partial \Theta}{\partial \theta} &= \frac{\partial}{\partial \theta} \left( A_2 e^{i k_{\theta} \theta} + B_2 e^{-i k_{\theta} \theta} \right) \\ &= \frac{\partial}{\partial \theta} \left(A_2 
e^{i k_{\theta} \theta} \right) + \frac{\partial}{\partial \theta} \left(B_2 e^{-i k_{\theta} \theta} \right)\\ &= i k_{\theta} A_2 e^{i k_{\theta} \theta} - i k_{\theta} B_2 e^{i k_{\theta} \theta} \end{align*} \begin{align*} \frac{\partial^2 \Theta }{\partial \theta^2} &= \frac{\partial^2}{\partial \theta^2} \left( i k_{\theta} A_2 e^{i k_{\theta} \theta} - i k_{\theta} B_2 e^{i k_{\theta} \theta} \right) \\ &= \frac{\partial^2}{\partial \theta^2} \left(i k_{\theta} A_2 e^{i k_{\theta} \theta} \right) + \frac{\partial^2}{\partial \theta^2} \left(- i k_{\theta} B_2 e^{-i k_{\theta} \theta} \right)\\ &= (i k_{\theta})^2 A_2 e^{i k_{\theta} \theta } - (i k_{\theta})^2 B_2 e^{i k_{\theta} \theta} \end{align*} \begin{align*} \frac{1}{\Theta}\frac{\partial^2 \Theta}{\partial \theta^2} &= (ik_{\theta})^2 \\ &= -k_{\theta}^2 \end{align*} \subsubsection{Axial Derivatives} \begin{align*} \frac{\partial p}{\partial x} &= \frac{\partial }{\partial x} \left( XR\Theta T \right) \\ &= R\Theta T \frac{\partial X}{\partial x} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial p}{\partial x} &= \frac{ 1}{X R \Theta T} \left( R\Theta\frac{\partial X}{\partial x} \right) \\ &=\frac{ 1}{ X}\frac{\partial X}{\partial x} \end{align*} \begin{align*} \frac{\partial^2 p}{\partial x^2} &= \frac{\partial^2 }{\partial x^2} \left( XR\Theta T \right) \\ &= R\Theta T \frac{\partial^2 X}{\partial x^2} \end{align*} \begin{align*} \frac{1}{p}\frac{\partial^2 p}{\partial x^2} &= \frac{ 1}{X R \Theta T} \left( R\Theta T \frac{\partial^2 X}{\partial x^2} \right) \\ &=\frac{ 1}{ X}\frac{\partial^2 X}{\partial x^2} \end{align*} \begin{align*} \frac{\partial X}{\partial x} &= \frac{\partial}{\partial t} \left( A_3 e^{i k_x t} + B_3 e^{-i \omega t} \right) \\ &= \frac{\partial}{\partial t} \left(A_1 e^{i k_x x} \right) + \frac{\partial}{\partial t} \left(B_1 e^{-i k_x x } \right)\\ &= i k_x A_1 e^{i k_x x } - i k_x B_1 e^{i k_x x} \end{align*} \begin{align*} \frac{\partial^2 X}{\partial x^2} &= \frac{\partial^2}{\partial x^2} \left( i k_x A_1 e^{i k_x x} + i k_x B_1 e^{-i k_x x} \right) \\ &= \frac{\partial^2}{\partial x^2} \left(i k_x A_1 e^{i k_x x} \right) + \frac{\partial^2}{\partial x^2} \left(- i k_x B_1 e^{-i k_x x} \right)\\ &= (i k_x)^2 A_1 e^{i k_x x} - (i k_x)^2 B_1 e^{i k_x x} \end{align*} \begin{align*} \frac{1}{X}\frac{\partial^2 X}{\partial x^2} &= (i k_x)^2 \\ &= -k_x^2 \end{align*} Substituting this back into the wave equation yields , \begin{align*} \frac{1}{A^2}\left( \frac{\partial^2 {p}}{\partial t^2} \right) &= \left( \frac{\partial^2 {p}}{\partial t^2} + \frac{1}{\tilde{r}}\frac{\partial p}{\partial r} + \frac{1}{\tilde{r}^2} \frac{\partial^2 p}{\partial \theta^2} + \frac{\partial^2 p}{\partial x^2} \right) \end{align*} \begin{equation} \frac{1}{A^2} \frac{1}{T}\frac{\partial^2 T}{\partial t^2} = \frac{1}{R}\frac{\partial^2 R}{\partial r^2 } + \frac{1}{r}\frac{1}{R}\frac{\partial R}{\partial r} + \frac{1}{r^2}\frac{1}{\Theta}\frac{\partial \Theta}{\partial \theta} + \frac{1}{X}\frac{\partial^2 X}{\partial x^2} \label{eqn:waveode} \end{equation} Notice that each term is only a function of its associated independent variable. So, if we vary the time, only the term on the left-hand side can vary. However, since none of the terms on the right-hand side depend on time, that means the right-hand side cannot vary, which means that the ratio of time with its second derivative is independent of time. The practical upshot is that each of these terms is constant, which has been shown. 
The wave numbers are the \textit{separation constants} that allow the PDE to be split into four separate ODEs.
Substituting the separation constants into Equation (\ref{eqn:waveode}) gives,
\begin{equation}
-\frac{\omega^2}{A^2} = \frac{1}{R} \left( \frac{\partial^2 R}{\partial r^2 } + \frac{1}{r}\frac{\partial R}{\partial r} \right) - \frac{k_{\theta}^2}{r^2}- k_x^2
\label{eqn:waveode2}
\end{equation}
Note that the dispersion relation states $\omega = k A$, so that $\omega^2/A^2 = k^2$ and
\begin{equation}
\frac{1}{R} \left( \frac{\partial^2 R}{\partial r^2 } + \frac{1}{r}\frac{\partial R}{\partial r} \right) - \frac{k_{\theta}^2}{r^2}- k_x^2 + k^2 = 0
\label{eqn:waveode3}
\end{equation}
The remaining terms are manipulated to follow the same form as \textit{Bessel's Differential Equation},
\begin{equation}
x^2 \frac{d^2 y}{dx^2} + x \frac{dy }{dx } + (x^2 - n^2) y = 0
\label{eqn:besselODE}
\end{equation}
The general solution to Bessel's differential equation is a linear combination of the Bessel functions of the first kind, $J_n(x)$, and of the second kind, $Y_n(x)$ \cite{wolphram:bessel}.
The subscript $n$ refers to the order of Bessel's equation.
\begin{equation}
y(x) = AJ_n(x) + BY_n(x)
\label{eqn:besselsolution}
\end{equation}
By rearranging Equation (\ref{eqn:waveode3}), a comparison can be made to Equation (\ref{eqn:besselODE}) to show that the two equations are of the same form.
The first step is to revisit the radial derivatives that have not been addressed.
As was done for the other derivative terms, the radial derivatives will also be set equal to a separation constant, $-k_r^2$.
\begin{align}
\underbrace{\frac{1}{R} \left( \frac{\partial^2 R}{\partial r^2 } + \frac{1}{r}\frac{\partial R}{\partial r} \right) - \frac{k_{\theta}^2}{r^2}}_{-k_r^2}- k_x^2 + k^2 = 0
\label{eqn:wavenumber_without_kr}
\end{align}
The reader may be curious as to why the tangential separation constant $k_{\theta}$ is included within the definition of the radial separation constant.
Recall the ODE for the tangential direction,
\begin{align*}
\frac{1}{\Theta}\frac{\partial^2 \Theta}{\partial \theta^2} &= - k_{\theta}^2\\
\frac{\partial^2 \Theta}{\partial \theta^2} + k_{\theta}^2\,\Theta &= 0
\end{align*}
whose solutions are of the form,
\begin{align*}
\Theta(\theta) = e^{i k_{\theta} \theta}
\end{align*}
In order to have nontrivial, single-valued solutions, $\Theta(0)$ and $\Theta(2\pi)$ must be equal, and the same must hold for any multiple of $2\pi$ at a fixed $r$.
Since the exponential traces out the unit circle in the complex plane, this is only possible if $k_{\theta}$ is an integer.
Therefore, there is an implied periodic azimuthal boundary condition, i.e.\ $0<\theta\leq 2 \pi$ and $k_{\theta}=m$ with $m$ an integer.
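To make the periodicity argument explicit: imposing $\Theta(\theta) = \Theta(\theta + 2\pi)$ on the exponential solution gives
\begin{align*}
e^{i k_{\theta} (\theta + 2\pi)} = e^{i k_{\theta} \theta}
\quad\Longrightarrow\quad
e^{i 2\pi k_{\theta}} = 1
\quad\Longrightarrow\quad
k_{\theta} = m, \quad m \in \mathbb{Z},
\end{align*}
which is exactly the integer azimuthal mode number $m$ used in the radial equation.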
Continuing with the radial derivatives\ldots
\begin{align*}
-k_r^2 =\frac{1}{R} \left( \frac{\partial^2 R}{\partial r^2 } + \frac{1}{r}\frac{\partial R}{\partial r} \right) - \frac{m^2}{r^2}
\end{align*}
To further simplify, the chain rule is used to perform a change of variables, $x = k_r r$,
\begin{align*}
\frac{\partial R}{\partial r} &= \frac{dR}{dx}\frac{dx}{dr}\\
&= \frac{dR}{dx}\frac{d}{dr}\left( k_r r \right) \\
&= \frac{dR}{dx} k_r
\end{align*}
\begin{align*}
\frac{\partial^2 R}{\partial r^2} &= \frac{d^2R}{dx^2}\left(\frac{dx}{dr}\right)^2 + \frac{dR}{dx}\frac{d^2x}{dr^2}\\
&= \frac{d^2R}{dx^2}\,k_r^2 + \frac{dR}{dx}\cdot 0\\
&= \frac{d^2R}{dx^2}\,k_r^2
\end{align*}
Substituting these derivatives into the radial equation above (after multiplying through by $R$),
\begin{equation}
\left(\frac{d^2R}{dx^2}k_r^2 + \frac{1}{r}\frac{dR}{dx}k_r\right) + \left(k_r^2 - \frac{m^2}{r^2}\right)R = 0
\label{eqn:waveode4}
\end{equation}
Dividing Equation (\ref{eqn:waveode4}) by $k_r^2$,
\begin{equation}
\left(\frac{d^2R}{dx^2} + \frac{1}{k_r r}\frac{dR}{dx}\right) + \left(1 - \frac{m^2}{k_r^2 r^2}\right)R = 0
\label{eqn:waveode5}
\end{equation}
and recognizing $x = k_r r$,
\begin{equation}
\left(\frac{d^2R}{dx^2} + \frac{1}{x}\frac{dR}{dx}\right) + \left(1 - \frac{m^2}{x^2}\right)R = 0
\label{eqn:waveode6}
\end{equation}
Multiplying Equation (\ref{eqn:waveode6}) by $x^2$ gives,
\begin{equation}
x^2\frac{d^2R}{dx^2} + x\frac{dR}{dx} + \left( x^2 - m^2 \right)R = 0
\label{eqn:finalradialode}
\end{equation}
which matches the form of Bessel's equation.
In summary, the wave-number relation for no flow in a hollow duct with hard walls is obtained from Equation (\ref{eqn:wavenumber_without_kr}),
\begin{equation}
k^2 = k_r^2 + k_x^2
\label{eqn:wavenumber_equation}
\end{equation}
\subsubsection{Hard Wall boundary condition}
For a hard (rigid) wall the radial derivative of the pressure must vanish at the duct wall,
\begin{equation}
\frac{\partial P}{\partial r} = \frac{\partial}{\partial r} \left( X\Theta T R \right) = X\Theta T \frac{d R}{d r} = 0 \quad \text{at the wall},
\end{equation}
so the condition reduces to $dR/dr = 0$ evaluated at the wall radius.
\section{Uniform Flow}
To get the same equation but for uniform flow, the same procedure can be followed.
Starting with Equation 2.27 redimensionalized, \begin{align*} \frac{ d^2 \tilde{p}}{d \tilde{r}^2} + \frac{1}{\tilde{r}} \frac{d \tilde{p}}{d \tilde{r}} + \frac{2 \bar{\gamma} \left( \frac{d m_x}{d \tilde{r}} \right)} {\left( k - \bar{\gamma} m_x \right)}\frac{d \tilde{p}}{d \tilde{r}}+ \left[ \left( k - \bar{\gamma} m_x \right)^2 - \frac{m^2}{\tilde{r}^2}- \bar{\gamma}^2 \right] \tilde{p} \end{align*} Let's separate the new terms from the old ones, \begin{align*} \frac{ d^2 \tilde{p}}{d \tilde{r}^2} + \frac{1}{\tilde{r}} \frac{d \tilde{p}}{d \tilde{r}} + \frac{2 \bar{\gamma} \left( \frac{d m_x}{d \tilde{r}} \right)} {\left( k - \bar{\gamma} m_x \right)}\frac{d \tilde{p}}{d \tilde{r}}+ \left[ \left( k - \bar{\gamma} m_x \right)^2 - \frac{m^2}{\tilde{r}^2}- \bar{\gamma} \right] \tilde{p} \end{align*} Recalling the non-dimensional definitions, \begin{align*} \tilde{p} &= \frac{p}{\bar{\rho} A^2} \\ \tilde{r} &= \frac{r}{r_T} \\ \frac{\partial \tilde{p}}{\partial \tilde{r}} &= \frac{ \partial \tilde{p}}{\partial r} \frac{\partial r}{ \partial \tilde{r}} \\ &= \frac{ \partial \tilde{p}}{\partial r} \frac{\partial }{ \partial \tilde{r}} \left( \tilde{r} r_T \right) \\ &= \frac{ \partial \tilde{p}}{\partial r} r_T \\ \frac{\partial^2 \tilde{p}}{\partial \tilde{r}^2} &= \frac{ \partial^2 \tilde{p}}{\partial r^2} (r_T)^2+ \frac{ \partial \tilde{p}}{\partial r} \frac{\partial^2 r}{ \partial \tilde{r}^2} \\ &= \frac{ \partial^2 \tilde{p}}{\partial r^2} (r_T)^2 \end{align*} \begin{align*} \frac{\partial}{\partial r} \left( \frac{p}{\bar{\rho} A^2} \right) &= \frac{\left(\frac{\partial}{\partial r} \left( p\right) \bar{\rho} A^2 - \underbrace{\frac{\partial \bar{\rho}A^2}{\partial r}}_0 p \right)}{\left( \bar{\rho} A^2 \right)^2}\\ &= \frac{1}{\bar{\rho}A^2} \frac{\partial p}{\partial r} \end{align*} \begin{align*} \frac{ d^2 \tilde{p}}{d \tilde{r}^2} + \frac{1}{\tilde{r}} \frac{d \tilde{p}}{d \tilde{r}}- \frac{m^2}{\tilde{r}^2}\tilde{p}- \bar{\gamma}^2 \tilde{p} + \frac{2 \bar{\gamma} \left( \frac{d M_x}{d \tilde{r}} \right)} {\left( k - \bar{\gamma} M_x \right)}\frac{d \tilde{p}}{d \tilde{r}}+ \left( k - \bar{\gamma} M_x \right)^2\tilde{p} \end{align*} If there is only uniform flow, then $dM_x/dr = 0$, \begin{align*} \frac{ d^2 \tilde{p}}{d \tilde{r}^2} + \frac{1}{\tilde{r}} \frac{d \tilde{p}}{d \tilde{r}}- \frac{m^2}{\tilde{r}^2}\tilde{p}- \bar{\gamma}^2 \tilde{p} + \left( k - \bar{\gamma} M_x \right)^2\tilde{p} \end{align*} Re-dimensionalizing, \begin{align*} \frac{1}{\bar{\rho} A^2}\left[ \frac{ d^2 p}{d r} r_T^2+ \frac{r_T}{r} \frac{d p}{d r} r_T - \frac{m^2}{r^2}r_T^2 p - k_x^2r_T^2 p\right] + \left( \frac{\omega }{A}r_T - k_x r_T M_x \right)^2p \end{align*} Expanding the last term and substituting $\omega/A = k$ \begin{align*} \frac{1}{\bar{\rho} A^2}\left[ \frac{ d^2 p}{d r} r_T^2+ \frac{r_T}{r} \frac{d p}{d r} r_T - \frac{m^2}{r^2}r_T^2 p - k_x^2r_T^2 p\right] +\left( r_T^2\left( k^2 - 2 k k_x M_x - k_x^2 M_x^2 \right) \right)p \end{align*} Canceling out $r_T/\bar{\rho}A$ in every term \begin{align*} \frac{ d^2 p}{d r} + \frac{1}{r} \frac{d p}{d r} + \left[ k^2 - 2 k k_x M_x - k_x^2 M_x^2- \frac{m^2}{r^2} - k_x^2\right]p \end{align*} Defining $$-N^2 = k_x^2 M_x^2 - 2 k k_x M_x - k_x^2 $$ $$-N^2 = -(1 - M_x^2)k_x^2 - 2 k k_x M_x $$ $$-N^2 = - \beta^2 k_x^2 - 2 k k_x M_x $$ \begin{align*} \frac{ d^2 p}{d r} + \frac{1}{r} \frac{d p}{d r} + \left[ k^2 - N^2 - \frac{m^2}{r^2} \right]p \end{align*} Let $k_r^2 = k^2 - N^2$ \begin{align*} \frac{ d^2 p}{d r} + \frac{1}{r} \frac{d p}{d r} + \left[ 
k_r^2 - \frac{m^2}{r^2} \right]p
\end{align*}
Looking at the radial wavenumber,
\begin{align*}
k_r^2 &= k^2 - N^2 \\
&= k^2-\beta^2 k_x^2 - 2 k k_x M_x \\
0 &= - \beta ^2 k_x ^2 - \left( 2M_x k \right)k_x +(k^2 - k_r^2)
\end{align*}
where the roots of this quadratic are the axial wavenumbers.
Applying the quadratic formula and simplifying with $\beta^2 = 1 - M_x^2$ gives,
\begin{align*}
k_x &= \frac{-M_x k \pm \sqrt{M_x^2 k^2 + \beta^2\left(k^2 - k_r^2\right)}}{\beta^2} \\
&= \frac{-M_x k \pm \sqrt{k^2 - \beta^2 k_r^2}}{\beta^2}
\end{align*}
\bibliographystyle{plain}
\bibliography{references}
\end{document}
{ "alphanum_fraction": 0.5817190145, "avg_line_length": 30.5574712644, "ext": "tex", "hexsha": "e795dfad7f87b5cf2437f13b4495d1a137c5a985", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "67d8e1729fca8a6ad269583591f6a0a61a274f8d", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "jeffs2696/AnalyticalDuctModes", "max_forks_repo_path": "docs/AnalyticalDuctModeds.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "67d8e1729fca8a6ad269583591f6a0a61a274f8d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "jeffs2696/AnalyticalDuctModes", "max_issues_repo_path": "docs/AnalyticalDuctModeds.tex", "max_line_length": 115, "max_stars_count": null, "max_stars_repo_head_hexsha": "67d8e1729fca8a6ad269583591f6a0a61a274f8d", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "jeffs2696/AnalyticalDuctModes", "max_stars_repo_path": "docs/AnalyticalDuctModeds.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8285, "size": 21268 }
\section{Experimental Results}
\label{sec:experimental-results}

\begin{figure}[b]
\centering
\includegraphics[width=.65\linewidth]{fn-speedup}
\caption{FN Kernel}
\label{figure:fn}
\end{figure}

% FN
% Plot Overview
Figure~\ref{figure:fn} presents the speedup for the FN application.
%
% Plot Analysis
Since FN is CPU-bound, communication causes little interference and the results show a similar behavior for both solutions.
%
% Additional discussion
The observed behavior is due to the problem design itself and the input workload.
The leader process performs an integer division to compute the minimum amount of work to be sent to each worker.
Then, the remainder is added to the last worker, which may result in load imbalance.
This imbalance is very small up to 8 workers, but becomes substantial with more workers.
With 14 workers, however, the workload is well balanced and the overall performance is improved.
%
In general, the results show that \lwmpi scaled well and allowed an easy adaptation of the kernel without introducing overhead as the parallelism is increased.

% GF Graphic
\begin{figure}[t]
\centering
\includegraphics[width=.65\linewidth]{gf-speedup}
\caption{GF Kernel}
\label{figure:gf}
\vspace{-10pt}
\end{figure}

% KM Graphic
\begin{figure}[t]
\centering
\includegraphics[width=.65\linewidth]{km-speedup}
\caption{KM Kernel}
\label{figure:km}
\vspace{-15pt}
\end{figure}

% GF
% Plot Overview
Figure~\ref{figure:gf} presents the speedup for the GF kernel.
%
% Plot Analysis
As can be noticed, \lwmpi presented suboptimal scalability, whereas the default runtime library did not scale at all.
%
% Additional discussion
The small problem sizes may have resulted in insufficient workloads, deteriorating the performance of the Kalray runtime.
At the same time, for \lwmpi this problem seems to be attenuated as the parallelism increases, demonstrating its scalability in these situations as well.
%
% Possible improvement
We believe that using asynchronous communications for both solutions would significantly reduce the bottleneck on the leader process and improve the overall performance.

% KM
% Plot Overview
Figure~\ref{figure:km} shows the speedup for the KM kernel.
This application has higher communication demands than the previous ones, which impacted the results: \lwmpi achieves lower speedups when compared to the Kalray runtime.
%
% Additional discussion
This occurred because the baremetal runtime can handle the irregular workload well, while \lwmpi is limited by the coarse-grained, fixed-size messages of the \portal abstraction in \nanvix.
Thus, small problem sizes do not overcome the overhead imposed by this abstraction, which was designed for dense data transfers.
%
% Possible workaround
Nevertheless, this situation can be settled by a mechanism that dynamically chooses the IPC abstraction that best fits the granularity of the data to be sent (a sketch of such a mechanism is given at the end of this section).
It would be possible to use the \mailbox abstraction to send fine-grained messages and the \portal abstraction for coarse-grained ones.
As a result, we could transfer small messages with low latency and large messages with high bandwidth.
%
% Additional Discussion
Even so, both solutions had similar linear behaviors, showing that \lwmpi was able to keep up with the speedup scalability presented by the Kalray runtime.
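% Possible mechanism (illustrative sketch)
To make the proposed mechanism concrete, the listing below is a minimal sketch of a send routine that picks the IPC abstraction according to the message size.
It is not the actual \lwmpi implementation: the names \texttt{mailbox\_write()} and \texttt{portal\_write()} and the threshold value are illustrative assumptions, not the real \nanvix interfaces.
\begin{verbatim}
/* Illustrative sketch: pick the IPC abstraction by message size.      */
/* mailbox_write(), portal_write() and SMALL_MSG_MAX are placeholders, */
/* not the actual Nanvix API.                                          */
#include <stddef.h>

#define SMALL_MSG_MAX 128 /* bytes; tuning parameter (assumed value) */

int mailbox_write(int out, const void *buf, size_t n); /* low latency    */
int portal_write(int out, const void *buf, size_t n);  /* high bandwidth */

int ipc_send(int out, const void *buf, size_t n)
{
    /* Fine-grained payloads: mailbox (low latency).     */
    /* Coarse-grained payloads: portal (high bandwidth). */
    if (n <= SMALL_MSG_MAX)
        return (mailbox_write(out, buf, n));
    return (portal_write(out, buf, n));
}
\end{verbatim}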
{ "alphanum_fraction": 0.7767360094, "avg_line_length": 35.5520833333, "ext": "tex", "hexsha": "982bbcc28df584f05bae6881d11c27d62c473d5a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d7dd8f7e3424ba6dcc9cc24f6140bbcf920e5608", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joaofel-u/ine410129", "max_forks_repo_path": "Paper (A3)/paper/text/5-experimental-results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d7dd8f7e3424ba6dcc9cc24f6140bbcf920e5608", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joaofel-u/ine410129", "max_issues_repo_path": "Paper (A3)/paper/text/5-experimental-results.tex", "max_line_length": 89, "max_stars_count": null, "max_stars_repo_head_hexsha": "d7dd8f7e3424ba6dcc9cc24f6140bbcf920e5608", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joaofel-u/ine410129", "max_stars_repo_path": "Paper (A3)/paper/text/5-experimental-results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 837, "size": 3413 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % GEANT manual in LaTeX form % % % % Michel Goossens (for translation into LaTeX) % % Version 1.00 % % Last Mod. Jan 24 1991 1300 MG + IB % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \Documentation{M. Maire, F.Carminati} \Submitted{24.09.84}\Revised{26.07.93} \Version{Geant 3.16}\Routid{PHYS001} \Makehead{Introduction to the section PHYS} \section{Summary of the physics processes} The computer simulation of particles traversing an experimental setup has to take into account the interactions of those particles with the material of the detector. {\tt GEANT} is able to simulate the dominant processes which can occur in the energy range {\bf from 10 keV to 10 TeV} for electromagnetic interactions. As far as hadronic interactions are concerned, the range of validity is the one of the hadronic packages used, which usually does not extend below a few tens of MeV. For more information the user is invited to consult the relevant documentation~\cite{bib-GHEI,bib-FLUK,bib-FLU1,bib-FLU2,bib-FLU3,bib-FLU4}. Simulating a given process means: \begin{UL} \item Evaluating the probability of occurrence of the process, by sampling the {\bf total cross-section} of the process. \item Generating the final state after interaction, by sampling the {\bf differential cross-section} of the process. \item In case of (quasi-)continuous processes, e.g. CSDA (Continuous Slowing Down Approximation), energy losses or multiple scattering, computing the mean values of some characteristic quantities. \end{UL} In Table~\ref{phys001-1} below we summarise all the processes currently implemented in {\tt GEANT}, with a reference to the corresponding sections. \begin{table}[hbt] \begin{center} \begin{tabular}{|l|l|l|} \hline & \parbox{5cm}{Computation of total cross-section or energy losses} & \parbox{5cm}{Generation of the final state particles} \\ \hline & & \\ {\bf Processes involving the photon } & & \\ ($e^+,e^-$ ) pair conversion & PHYS 210 & PHYS 211 \\ Compton collision & PHYS 220 & PHYS 221 \\ Photoelectric effect & PHYS 230 & PHYS 231 \\ Photo fission of heavy elements & PHYS 240 & PHYS 240 \\ Rayleigh effect & PHYS 250 & PHYS 251 \\ & & \\ {\bf Processes involving $e^-/e^+ $ } & & \\ Multiple scattering & & PHYS 320 or 325 or 328\\ Ionisation and $\delta$-rays production & PHYS 330 & PHYS 331 or 332 \\ Bremsstrahlung & PHYS 340 & PHYS 341 \\ Annihilation of positron & PHYS 350 & PHYS 351 \\ Generation of \v{C}erenkov light & PHYS 260 & PHYS 260 \\ Synchrotron radiation & PHYS 360 & \\ & & \\ {\bf Processes involving}$\mu^-/ \mu^+$ & & \\ Decay in flight & CONS 310 & PHYS 400 \\ Multiple scattering & & PHYS 320 or 325 \\ Ionisation and $\delta$-rays production & PHYS 430 & PHYS 331 or 332 \\ Ionisation by heavy ions & PHYS 431 & \\ Bremsstrahlung & PHYS 440 & PHYS 441 \\ Direct ($ e^+,e^-$) pair production & PHYS 450 & PHYS 451 \\ Nuclear interaction & PHYS 460 & PHYS 460 \\ Generation of \v{C}erenkov light & PHYS 260 & PHYS 260 \\ & & \\ {\bf Processes involving hadrons} & & \\ Decay in flight & CONS 310 & PHYS 400 \\ Multiple scattering & & PHYS 320 or 325 \\ Ionisation and $\delta$-rays production & PHYS 430 & PHYS 331 or 332 \\ Hadronic interactions & PHYS 500 or 510& PHYS 500 or 510 \\ Generation of \v{C}erenkov light & PHYS 260 & PHYS 260 \\ \hline \end{tabular} \end{center} \label{phys001-1} \caption{Processes currently implemented in {\tt GEANT}} \end{table} \subsection{ Simulated Processes } 
\subsubsection{Hadronic Interactions} To simulate the interactions of hadrons with the nuclei of the matter traversed, two alternatives are provided: \begin{enumerate} \item The generator of the \Rind{FLUKA}~\cite{bib-FLUK,bib-FLU1} hadron shower MonteCarlo and the interface routines to {\tt GEANT}. See {\tt [PHYS520]} for more information. \item The generator of the \Rind{GHEISHA}~\cite{bib-GHEI} hadron shower MonteCarlo and the interface routines to {\tt GEANT}. See {\tt [PHYS510]} for more information. \end{enumerate} The code both of the \Rind{GHEISHA} and of the \Rind{FLUKA} generators is contained in the {\tt GEANT} library. Users should be aware that the routines of these packages do not follow the {\tt GEANT} naming conventions and therefore they can clash with the names of user procedures. \subsubsection{Electromagnetic Processes} By means of systematic fits to the existing data, the cross-sections of the electromagnetic processes are well reproduced (within a few percent) from 10 keV up to 100 GeV, both for light (low Z) and for heavy materials. This feature, together with the use of the interface with one of the hadronic shower generators available, makes {\tt GEANT} useful also for shower simulation in gases. \subsubsection{Muonic interactions} Muonic interactions are simulated up to 10 TeV, making {\tt GEANT} useful for cosmic rays studies. \subsubsection{Ionisation by charged particles} The following alternatives are provided to simulate this process: \begin{itemize} \item Sampling from the appropriate distribution around the mean value of the energy loss ({\tt [PHYS332]}). \item Explicit generation of $\delta$-rays (see {\tt [PHYS330/331/430]}) and restricted fluctuations below the energy threshold for the production of $\delta$-rays. \item Sampling the contribution of single collisions from statistical distributions. This can be used as an alternative to the first one when simulating energy losses in very thin layers (small value of g cm$^{-2}$) (see {\tt [PHYS334]}). \end{itemize} Full Landau fluctuations and generation of $\delta$-rays cannot be used together in order to avoid double counting of the fluctuations. An automatic protection has been introduced in {\tt GEANT} to this effect. See {\tt [PHYS333/332]} and {\tt [BASE040]} for further information. \subsubsection{Multiple Scattering} Two methods are provided: \begin{enumerate} \item Moli\`ere distribution or plural scattering ({\tt [PHYS325], [PHYS328]}). \item Gaussian approximation ({\tt [PHYS320]}). \end{enumerate} \subsubsection{The JMATE data structure} In order to save time during the transport of the particles, relevant energy-dependent quantities are tabulated at the beginning of the run, for all materials, as functions of the kinetic energy of the particle. In particular, the inverse of the total cross-sections of all processes involving photons, electrons and muons and the $dE/dx$ and range tables for electrons, muons and protons are calculated. The actual value of, say, the interaction length for a given process (i.e. the inverse of the macroscopic cross section) is then obtained via a linear interpolation in the tables. The data structure which contains all this information in memory is supported by the link {\tt JMATE} in the \FCind{/GCLINK/} common block. See {\tt [PHYS100]} and {\tt [CONS199]} for a more information on these tables. \subsubsection{Probability of Interaction} The total cross-section of each process is used at tracking time to evaluate the probability of occurrence of the process. 
See {\tt [PHYS010]} for an explanation of the method used. {\bf Note}: The section {\tt PHYS} is closely related to the section {\tt CONS}. Users wishing to have a complete overview of the physics processes included in {\tt GEANT} should read both sections. \subsection{ Control of the physical processes} For most of the individual processes the default option (indicated below) can be changed via data records {\tt [BASE040]}. The processes are controlled via a control variable which is in the common \FCind{/GCKING/}. If not otherwise noted, the meaning of the control variable is the following: \begin{DLtt}{MMM} \item[= 0] The process is completely ignored. \item[= 1] The process is considered and possible secondary particles generating from the interaction are put into the \FCind{/GCKING/} common. If the interacting particle disappears in the interaction, then it is stopped with {\tt ISTOP=1} (common \FCind{/GCTRAK/}) \item[= 2] The process is considered. If secondary particles result from the interaction, they are not generated and their energy is simply added in the variable {\tt DESTEP} (common \FCind{/GCTRAK/}. If the interacting particle disappears in the interaction, the variable {\tt ISTOP} is set to $2$. \end{DLtt} Below are listed the data record keywords, the flag names and values, and the resulting action: \begin{DLtt}{MMMMMMMM} \item[Keyword] Related process \item[DCAY] Decay in flight. The decaying particles stops. The variable {\tt IDCAY} controls this process. \begin{DLtt}{MMMMMMMM} \item[IDCAY =0] No decay in flight. \item[~~~~~~=1] ({\bf D}) Decay in flight with generation of secondaries. \item[~~~~~~=2] Decay in flight without generation of secondaries. \end{DLtt} \item[MULS] Multiple scattering. The variable {\tt IMULS} controls this process. \begin{DLtt}{MMMMMMMM} \item[IMULS =0] No multiple scattering. \item[~~~~~~=1] ({\bf D}) Multiple scattering according to Moli\`ere theory. \item[~~~~~~=2] Same as {\tt 1}. Kept for backward compatibility. \item[~~~~~~=3] Pure Gaussian scattering according to the Rossi formula. \end{DLtt} \item[PFIS] Nuclear fission induced by a photon. The photon stops. The variable {\tt IPFIS} controls this process. \begin{DLtt}{MMMMMMMM} \item[IPFIS =0] ({\bf D}) No photo-fission. \item[~~~~~~=1] Photo-fission with generation of secondaries. \item[~~~~~~=2] Photo-fission without generation of secondaries. \end{DLtt} \item[MUNU] Muon-nucleus interactions. The muon is not stopped. The variable {\tt IMUNU} controls this process. \begin{DLtt}{MMMMMMMM} \item[IMUNU =0] No muon-nucleus interactions. \item[~~~~~~=1] ({\bf D}) Muon-nucleus interactions with generation of secondaries. \item[~~~~~~=2] Muon-nucleus interactions without generation of secondaries. \end{DLtt} \item[LOSS] Continuous energy loss. The variable {\tt ILOSS} controls this process. \begin{DLtt}{MMMMMMMM} \item[ILOSS =0] No continuous energy loss,IDRAY is forced to 0. \item[~~~~~~=1] Continuous energy loss with generation of $\delta$-rays above {\tt DCUTE} (common \FCind{/GCUTS/}) and restricted Landau fluctuations below {\tt DCUTE}. \item[~~~~~~=2] ({\bf D}) Continuous energy loss without generation of $\delta$-rays and full Landau-Vavilov-Gauss fluctuations. In this case the variable {\tt IDRAY} is forced to $0$ to avoid double counting of fluctuations. \item[~~~~~~=3] Same as $1$, kept for backward compatibility. \item[~~~~~~=4] Energy loss without fluctuation. The value obtained from the tables is used directly. \end{DLtt} \item[PHOT] Photoelectric effect. 
The interacting photon is stopped. The variable {\tt IPHOT} controls this process. \begin{DLtt}{MMMMMMMM} \item[IPHOT =0] No photo-electric effect. \item[~~~~~~=1] ({\bf D}) Photo-electric effect with generation of the electron. \item[~~~~~~=2] Photo-electric effect without generation of the electron. \end{DLtt} \item[COMP] Compton scattering. The variable {\tt ICOMP} controls this process. \begin{DLtt}{MMMMMMMM} \item[ICOMP =0] No Compton scattering. \item[~~~~~~=1] ({\bf D}) Compton scattering with generation of \Pem. \item[~~~~~~=2] Compton scattering without generation of \Pem. \end{DLtt} \item[PAIR] Pair production. The interacting $\gamma$ is stopped. The variable {\tt IPAIR} controls this process. \begin{DLtt}{MMMMMMMM} \item[IPAIR =0] No pair production. \item[~~~~~~=1] ({\bf D}) Pair production with generation of \Pem/\Pep. \item[~~~~~~=2] Pair production without generation of \Pem/\Pep. \end{DLtt} \item[BREM] bremsstrahlung. The interacting particle (\Pem, \Pep, $\mu^{+}$, $\mu^{-}$) is not stopped. The variable {\tt IBREM} controls this process. \begin{DLtt}{MMMMMMMM} \item[IBREM =0] No bremsstrahlung. \item[~~~~~~=1] ({\bf D}) bremsstrahlung with generation of $\gamma$. \item[~~~~~~=2] bremsstrahlung without generation of $\gamma$. \end{DLtt} \item[RAYL] Rayleigh effect. The interacting $\gamma$ is not stopped. The variable {\tt IRAYL} controls this process. \begin{DLtt}{MMMMMMMM} \item[IRAYL =0] ({\bf D}) No Rayleigh effect. \item[~~~~~~=1] Rayleigh effect. \end{DLtt} \item[DRAY] $\delta$-ray production. The variable {\tt IDRAY} controls this process. \begin{DLtt}{MMMMMMMM} \item[IDRAY =0] No $\delta$-rays production. \item[~~~~~~=1] ({\bf D}) $\delta$-rays production with generation of \Pem. \item[~~~~~~=2] $\delta$-rays production without generation of \Pem. \end{DLtt} \item[ANNI] Positron annihilation. The \Pep is stopped. The variable {\tt IANNI} controls this process. \begin{DLtt}{MMMMMMMM} \item[IANNI =0] No positron annihilation. \item[~~~~~~=1] ({\bf D}) Positron annihilation with generation of photons. \item[~~~~~~=2] Positron annihilation without generation of photons. \end{DLtt} \item[HADR] Hadronic interactions. The particle is stopped in case of inelastic interaction, while it is not stopped in case of elastic interaction. The variable {\tt IHADR} controls this process. \begin{DLtt}{MMMMMMMM} \item[IHADR =0] No hadronic interactions. \item[~~~~~~=1] ({\bf D}) Hadronic interactions with generation of secondaries. \item[~~~~~~=2] Hadronic interactions without generation of secondaries. \item[~~~~~~$>$2] Can be used in the user code \Rind{GUPHAD} and \Rind{GUHADR} to chose a hadronic package. These values have no effect on the hadronic packages themselves. \end{DLtt} \item[LABS] Light ABSorption. This process is the absorption of light photons (particle type 7) in dielectric materials. It is turned on by default when the generation of \v{C}erenkov light is requested (data record {\tt CKOV}). For more information see {\tt [PHYS260]}. \begin{DLtt}{MMMMMMMM} \item[ILABS =0] No absorption of photons. \item[~~~~~~=1] Absorption of photons with possible detection. \end{DLtt} \item[STRA] This flag turns on the collision sampling method to simulate energy loss in thin materials, particularly gases. For more information see {\tt [PHYS334]}. \begin{DLtt}{MMMMMMMM} \item[ISTRA =0] ({\bf D}) Collision sampling switched off. \item[~~~~~~=1] Collision sampling activated. \end{DLtt} \item[SYNC] Synchrotron radiation in magnetic field. 
\begin{DLtt}{MMMMMMMM}
\item[ISYNC =0] ({\bf D}) The synchrotron radiation is not simulated.
\item[~~~~~~=1] Synchrotron photons are generated at the end of the tracking step.
\item[~~~~~~=2] Photons are not generated; the energy is deposited locally.
\item[~~~~~~=3] Synchrotron photons are generated, distributed along the curved path of the particle.
\end{DLtt}
\end{DLtt}
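As an illustration only, the flags above might be combined in a set of data records such as the following (a minimal sketch, assuming the free-format keyword/value layout of the data records described in {\tt [BASE040]}; the particular values are arbitrary):
\begin{verbatim}
DCAY 1
MULS 1
LOSS 2
DRAY 0
HADR 1
RAYL 1
\end{verbatim}
Note that with {\tt LOSS 2} the value of {\tt IDRAY} is forced to $0$ in any case, as explained above, so the {\tt DRAY 0} record is redundant here.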
{ "alphanum_fraction": 0.6744611118, "avg_line_length": 47.502994012, "ext": "tex", "hexsha": "7cebe41fa9f9b83dc5f0eaa80559dd9630b172da", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "geant/phys001.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "geant/phys001.tex", "max_line_length": 81, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "geant/phys001.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 4118, "size": 15866 }
\documentclass[11pt,a4paper,xcolor=dvipsnames, leqno]{beamer} \usetheme{show_and_tell} \title[Introduction to Mathematical Reasoning. Induction \& Contradiction Proofs]{An Introduction to Mathematical Reasoning with Applications to Induction and Contradiction Proofs} \author[M\textsuperscript{a} Asunci\'on Jim\'enez Cordero] {{{\bf M\textsuperscript{a} Asunci\'on Jim\'enez Cordero\\ {\href{mailto:[email protected]}{\tt [email protected]}}}}} \institute{Show & Tell Karumi} % Opcional \date{} \defbeamertemplate*{title page}{customized}[1][] {\hspace*{-0.3cm} \centering \begin{beamercolorbox}[wd=1.05\textwidth, ht = 1.4cm, rounded = true, shadow = true]{my_palette} \centering \usebeamerfont{title}\smallskip \inserttitle \smallskip \end{beamercolorbox} \insertdate\par } \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begingroup \setbeamertemplate{headline}{} \begin{frame} \titlepage \begin{center} \insertauthor\par \end{center} \begin{center} \scriptsize Departamento de Estad\'istica e Investigaci\'on Operativa\\ and Instituto de Matem\'aticas de la Universidad de Sevilla,\\ Sevilla, Spain \end{center} \vspace*{0.3cm} \begin{center} \includegraphics[scale=0.35]{imus_basico_oct.png} \end{center} \begin{center} \footnotesize{Show \& Tell Karumi} \end{center} \begin{center} \footnotesize{February 1st 2019} \end{center} \end{frame} \endgroup %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame} \begin{columns} \begin{column}{0.5\textwidth} \begin{flushright} \href{https://github.com/asuncionjc}{\includegraphics[scale=0.12]{github_girl.png}} \end{flushright} \end{column} \begin{column}{1\textwidth} \begin{flushleft} \small \href{https://github.com/asuncionjc/Show_and_Tell_Mathematical_Proofs}{\tt https://github.com/asuncionjc/\\ Show\_and\_Tell\_Mathematical\_Proofs} \end{flushleft} \end{column} \end{columns} \begin{center} \visible<2->{ \href{https://www.researchgate.net/profile/Asuncion_Jimenez-Cordero}{\includegraphics[scale=0.35]{researchgate.png}} \hspace*{0.3cm} \href{https://scholar.google.es/citations?user=JegcEYwAAAAJ&hl=es&oi=ao}{\includegraphics[scale=0.12]{google_scholar.png}} } \end{center} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \AtBeginSection[]{ \begin{without_headline} \begin{frame}{Outline} \tableofcontents[currentsection] \end{frame} \end{without_headline} } \begin{without_headline} \begin{frame}{Outline} % and our simple frame \tableofcontents \end{frame} \end{without_headline} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{What is a proof?} \begin{frame} A \texthighlighted{theorem} is a statement that can be shown to be true. \begin{itemize} \item <2->Hypothesis. \item <2->Thesis. \end{itemize} \begin{block}{Example}<3-> \visible<3->{\textbf{Bolzano's theorem:} If $f$ is a continuous function defined on a closed interval $[a, b]$ such that $sign(f(a)) \neq sign(f(b))$.\\ Then there exists a point $c$ in the open interval $(a, b)$ satisfying $f(c) = 0$.} \begin{itemize} \item<4-> \textbf{Hypothesis:} \begin{itemize} \item<5-> $f:[a, b]\rightarrow \mathbb{R}$ is continuous. 
\item<5-> $\mathrm{sign}(f(a)) \neq \mathrm{sign}(f(b))$ \end{itemize} \item<4-> \textbf{Thesis:} \begin{itemize} \item<6-> $\exists c\in (a, b): f(c) = 0$ \end{itemize} \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame} A \texthighlighted{proof} is a chain of logical deductions that establishes the truth of a theorem. \begin{itemize} \item<2-> Hypothesis. \item<2-> Axioms (\emph{Ex: A number is equal to itself, $a = a$}). \item<2-> Previous mathematical results. \end{itemize} \begin{columns} \begin{column}{0.5\textwidth} \vspace*{-0.1cm} \visible<3->{\begin{alertblock}{Mistakes in proofs} \textbf{Conjecture:} 1 = 2.\\ \emph{Assume $a, b$ are two equal positive integers:} \begin{enumerate} \item $a = b$ \item $a^2 = ab$ \item $a^2 - b^2 = ab -b^2$ \item $(a - b)(a+ b) = b(a - b)$ \item $a + b = b$ \item $2b = b$ \item $2 = 1$ \end{enumerate} \end{alertblock}} \end{column} \begin{column}{0.5\textwidth} \visible<4->{\texthighlighted{\Large Where is the error?}} \end{column} \end{columns} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Proof by contradiction} \begin{frame} \begin{enumerate} \item Assume the thesis is false. \item Show that this assumption leads to a contradiction. \item Hence, the thesis is true. \end{enumerate} \begin{alertblock}{Example}<2-> \textbf{Theorem:} $\sqrt{2}$ is irrational. \end{alertblock} \begin{block}{Previous concepts}<3-> \begin{itemize} \item \textbf{Definition:} A real number $r$ is \emph{rational} if there exist integers $p$ and $q$, with no common factors and $q\neq 0$, such that $r = p/q$. A real number that is not rational is called \emph{irrational}. \item \textbf{Theorem:} Let $a$ be an integer such that $a^2$ is even. Then $a$ is also even. \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame} \begin{alertblock}{Example} \textbf{Theorem:} $\sqrt{2}$ is irrational. \begin{enumerate} \item<2-> Assume $\sqrt{2}$ is rational. \item<3-> $\sqrt{2} = \frac{p}{q}$, $p$ and $q$ without common factors. \item<4-> $2 = \frac{p^2}{q^2}\Rightarrow p^2 = 2q^2.$ \item<5-> $p^2$ is even and, by the previous theorem, $p$ is also even. \item<6-> $p = 2s$, for some integer $s$. \item <7->$(2s)^2 = 2q^2 \Rightarrow 4s^2 = 2q^2 \Rightarrow 2s^2 = q^2$. \item<8-> $q^2$ is even, and so is $q$. \end{enumerate} \visible<9->{{\Large \texthighlighted{Contradiction! }}\\ \visible<10->{On the one hand, we assumed $p$ and $q$ have no common factors.\\ On the other hand, if $p$ and $q$ are even, $2$ is a common factor.\\ Therefore, \textbf{$\boldsymbol{\sqrt{2}}$ is irrational}.}} \end{alertblock} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{It's your turn!} \begin{block}{} \textbf{Theorem:} Let $a$ be an integer such that $a^2$ is even. Then $a$ is also even. \end{block} \visible<2->{ \begin{enumerate} \item Assume $a$ is odd. \item $a = 2k + 1$ for some integer $k$. \item $a^2 = (2k + 1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1$. \item $a^2$ is odd. \end{enumerate} \hspace*{0.3cm}{\large \texthighlighted{Contradiction! }} \\ \hspace*{0.3cm}Since, by hypothesis, $a^2$ is even.\\ \hspace*{0.3cm}Therefore, \textbf{$\boldsymbol{a}$ is even}.
} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Another example?} \begin{block}{} \textbf{Theorem:} For every natural number $a>2$ such that $a$ is prime, it holds that $a$ is odd. \end{block} \visible<2->{ \begin{enumerate} \item Assume $a$ is even. \item $a = 2k$, for some integer $k$. Since $a>2$, we have $k\neq 1$. \item $a$ is composite. \end{enumerate} \hspace*{0.3cm}{\large \texthighlighted{Contradiction! }} \\ \hspace*{0.3cm}Since, by hypothesis, $a$ is prime.\\ \hspace*{0.3cm}Therefore, $\boldsymbol{a}$ \textbf{is odd}. } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Proof by induction} \begin{frame} \begin{enumerate} \item Base case ($n=1$). \item Assume the result for all $k< n$ (Induction hypothesis). \item Prove for $n$. \end{enumerate} \begin{alertblock}{Example}<2-> \textbf{Theorem (Gauss):} For all $n\in\mathbb{N}$: \begin{equation*} \sum\limits_{i = 1}^n i = \frac{n(n+1)}{2} \end{equation*} \begin{enumerate} \item<3-> $(n=1) \sum\limits_{i = 1}^1 i = 1 = \frac{1(1+1)}{2}$. \item<4-> (Induction hypothesis) The result is true for all $k<n$. \item<5-> $\sum\limits_{i = 1}^n i = \left( \sum\limits_{i = 1}^{n-1} i \right)+ n = \frac{(n-1)(n-1 + 1)}{2}+ n = \frac{(n-1)n}{2}+ \frac{2n}{2} = \frac{n^2 -n + 2n}{2} = \frac{n^2 + n}{2} = \frac{n(n+1)}{2}$. \end{enumerate} \end{alertblock} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Your turn!} \begin{block}{} \textbf{Theorem (Geometric series):} For all $n\in\mathbb{N}$ and $r\neq 1$: \begin{equation*} \sum\limits_{i = 0}^n r^i = \frac{1 - r^{n+1}}{1 - r} \end{equation*} \end{block} \visible<2->{ \begin{enumerate} \item $(n = 0): \sum\limits_{i = 0}^0 r^i = r^0 = 1 = \frac{1-r}{1-r}$. \item (Induction hypothesis) The result is true for all $k<n$. \item $\sum\limits_{i = 0}^n r^i = \sum\limits_{i = 0}^{n-1} r^i + r^n = \frac{1 - r^n}{1 - r} + r^n = \frac{1 - r^n + r^n(1-r)}{1- r}= \frac{1 - r^n + r^n - r^{n+1}}{1-r} = \frac{1 - r^{n+1}}{1-r}$. \end{enumerate} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Last example} \begin{block}{} \textbf{Theorem:} For all $n\in\mathbb{N}$: \begin{equation*} 11^n - 6 \text{ is divisible by } 5 \end{equation*} \end{block} \visible<2->{ \begin{enumerate} \item $(n = 1): 11^1 - 6 = 5$ which is divisible by $5$. \item (Induction hypothesis) The result is true for all $k<n$; in particular, $11^{n-1} - 6$ is divisible by $5$, i.e., $11^{n-1} = 5m + 6$ for some integer $m$. \item $11^n - 6 = \left(11\cdot11^{n-1} \right) - 6 = 11\cdot (5m + 6) - 6 = 11\cdot 5m +66 - 6 = 11\cdot 5m +60 = 5\cdot (11m + 12)$, which is divisible by $5$. \end{enumerate} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Conclusions} \begin{frame} \begin{itemize} \item Some basic concepts about mathematical results and how to prove them. \item Proof by contradiction. \item Proof by induction.
\end{itemize} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begingroup \setbeamertemplate{headline}{} \begin{frame} \titlepage \begin{center} \insertauthor\par \end{center} \begin{center} \scriptsize Departamento de Estad\'istica e Investigaci\'on Operativa\\ and Instituto de Matem\'aticas de la Universidad de Sevilla,\\ Sevilla, Spain \end{center} \begin{center} \LARGE \texthighlighted{Thank you for your attention!} \end{center} \begin{center} \includegraphics[scale=0.35]{imus_basico_oct.png} \end{center} \begin{center} \footnotesize{Show \& Tell Karumi} \end{center} \begin{center} \footnotesize{February 1st 2019} \end{center} \end{frame} \endgroup \end{document}
{ "alphanum_fraction": 0.6026213316, "avg_line_length": 32.698757764, "ext": "tex", "hexsha": "27ca783216d0c0b10eae6230d4a91df8101d3e12", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "68f2416496590158514ec5c9f76ee6d0486869e6", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "asuncionjc/Show_and_Tell_Mathematical_Proofs", "max_forks_repo_path": "Slides/MAJC_show_and_tell.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68f2416496590158514ec5c9f76ee6d0486869e6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "asuncionjc/Show_and_Tell_Mathematical_Proofs", "max_issues_repo_path": "Slides/MAJC_show_and_tell.tex", "max_line_length": 223, "max_stars_count": 3, "max_stars_repo_head_hexsha": "68f2416496590158514ec5c9f76ee6d0486869e6", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "asuncionjc/Show_and_Tell_Mathematical_Proofs", "max_stars_repo_path": "Slides/MAJC_show_and_tell.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-30T16:56:49.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-29T11:45:50.000Z", "num_tokens": 3577, "size": 10529 }
\section{Resources} \begin{itemize} \item \texttt{-->} \href{tut.html}{Tutorial} \texttt{<--} \item \href{example/index.html}{List of Examples} \item \href{benchmark.html}{Table of Benchmarks} \item \href{legit.html}{Different ways to specify legitimate states and behavior} \item \href{permit.html}{Action constraints and syntax} \item \href{man.html}{Manual Page} %\item \href{example/Coloring.html}{3-Coloring on a Ring} %\item \href{example/TokenPassing.html}{Three Bit Token Ring} %\item \href{example/Orientation.html}{Orientation of Odd-Sized Ring} %\item \href{example/Matching.html}{Maximal Matching using One Bit per Process} \end{itemize} \input{thanks} \input{changes}
{ "alphanum_fraction": 0.7521865889, "avg_line_length": 34.3, "ext": "tex", "hexsha": "12b5fffb2e6659fdfc34ba24b1ca8a851d2b6c84", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aa520f9edf3be7ff458f96e88cd2dab1eec4c505", "max_forks_repo_licenses": [ "0BSD" ], "max_forks_repo_name": "czlynn/protocon", "max_forks_repo_path": "doc/webtex/content.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aa520f9edf3be7ff458f96e88cd2dab1eec4c505", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "0BSD" ], "max_issues_repo_name": "czlynn/protocon", "max_issues_repo_path": "doc/webtex/content.tex", "max_line_length": 81, "max_stars_count": 1, "max_stars_repo_head_hexsha": "aa520f9edf3be7ff458f96e88cd2dab1eec4c505", "max_stars_repo_licenses": [ "0BSD" ], "max_stars_repo_name": "czlynn/protocon", "max_stars_repo_path": "doc/webtex/content.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-02T22:19:43.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-02T22:19:43.000Z", "num_tokens": 193, "size": 686 }
\documentclass{memoir} \usepackage[online,IS]{ruthesis} \usepackage[useregional]{datetime2} \usepackage{xparse}%Latex3 argument parsing %\usepackage{graphicx} \title{Minimal Document for Testing} \author{Joseph T. Foley \and Someone Else} % \author: Use \and as a separator and \thanks{details} for additional info \date{2020}{2}{2} %% TODO: use datetime2 to be smarter about dates % \usepackage[icelandic,english]{babel} % \usepackage[english,icelandic]{babel} %\usepackage[icelandic]{babel} \begin{document} % \frontrequiredpages{} %\ects{30} %% Titlepage the Memoir way \maketitle % \babelensure{icelandic} \selectlanguage{icelandic} \begin{abstract} This is a test abstract in Icelandic. % \icelandicmonthiiname to manually create names \today \end{abstract} \selectlanguage{english} \begin{abstract} This is a test abstract in English. \today \end{abstract} \newpage \chapter{Test Chapter} \newcommand{\varEN}{***English***} \newcommand{\varIS}{***Icelandic***} \NewDocumentCommand{\setVar}{O{#2} m} % ARGS: [optional use #2], {mandatory} { \renewcommand{\varIS}{#1} \renewcommand{\varEN}{#2} } \setVar{Universal Title} \begin{itemize} \item English: \varEN{} \item Icelandic: \varIS{} \end{itemize} blah blah blah %\copyrightpage{}%%RUM: Not mentioned %\signaturepage{}%%RUM: "Signature page (standard format) %\archivesigpage{}%%RUM: Not mentioned, optional, but should be required %\abstractpage{}%%RUM: "Abstract (in English and Icelandic) %\mainmatter{}%%Front matter done, get down to business %\pagestyle{headings}%default %Test %\backcover{} \end{document} %%%%%%%%%%%%%%%%%%%% TeXStudio Magic Comments %%%%%%%%%%%%%%%%%%%%% %% These comments that start with "!TeX" modify the way TeXStudio works %% For details see http://texstudio.sourceforge.net/manual/current/usermanual_en.html Section 4.10 %% %% What encoding is the file in? % !TeX encoding = UTF-8 %% What language should it be spellchecked? % !TeX spellcheck = en_US %% What program should I compile this document with? % !TeX program = pdflatex %%%%%%%%%%%%%%%%%%% Emacs Variables %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
{ "alphanum_fraction": 0.6966394187, "avg_line_length": 25.0227272727, "ext": "tex", "hexsha": "30655eeada9972e53994dbafec375056b4e2061f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b5625b6e5eb31203c44a79d6e526d39c96082661", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "foleyj2/ru-thesis", "max_forks_repo_path": "test.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "b5625b6e5eb31203c44a79d6e526d39c96082661", "max_issues_repo_issues_event_max_datetime": "2022-02-08T10:55:26.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-04T16:12:25.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "foleyj2/ru-thesis", "max_issues_repo_path": "test.tex", "max_line_length": 101, "max_stars_count": null, "max_stars_repo_head_hexsha": "b5625b6e5eb31203c44a79d6e526d39c96082661", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "foleyj2/ru-thesis", "max_stars_repo_path": "test.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 628, "size": 2202 }
\section{Installation} ks.py is a kernel search implementation written in Python, using Gurobi as its back-end MIP solver. In order to use it, you must install Python 3.8 or above, Gurobi 9, and a few other dependencies. Gurobi is the only commercial software used by ks.py. ks.py can be used on any system supported by gurobipy (Gurobi's Python bindings). \subsection{Install Python} The Python interpreter can be freely downloaded from the \href{https://www.python.org/}{official website}. It is mandatory to install the 64 bit version, otherwise Gurobi cannot be linked to the interpreter. If it is possible to install Python 3.8 using your system's package manager, feel free to install it with this method. Ensure that Pip is also installed with Python. Pip is Python's official package manager and third party library installer; it is needed to install the ks.py dependencies. ks.py can be downloaded from \href{https://github.com/FilippoRanza/ks.py}{the official repository}. My advice is to download the most stable version that fits your needs. So first consider downloading the latest stable \href{https://github.com/FilippoRanza/ks.py/releases}{tag}; if this is not enough, download the latest commit on \href{https://github.com/FilippoRanza/ks.py/archive/master.zip}{master}. Avoid the other branches, especially the failing ones, unless you are considering contributing to the project. It is possible to download ks.py anywhere in your file system, but keep in mind that it can be run (without issues) only from its root directory. Once you've downloaded ks.py and placed it into a proper location, it is time to install the ks.py dependencies. This is done using pip. Open your system shell, PowerShell on Windows, Terminal on Unix. Change directory\footnote{Usually in the file manager there is the possibility to \emph{open a terminal here}. Otherwise, in the file manager copy the path, open a shell, write \textbf{cd}, a space, and paste the path.} to where you've downloaded ks.py and there run this command. \begin{lstlisting} pip install -r requirements.txt \end{lstlisting} This may take a while. Be patient. This command can be run as a standard or super user. In the first case these libraries will be installed locally for the current user, in the second globally. This is up to you. \subsubsection{Troubleshooting} This is a good point to test your Python installation (a short verification script is also sketched at the end of this section). A quick way to do this consists in opening a shell and running the command \textbf{python} (on Windows it may be \textbf{py}); at this point you should carefully check the banner printed by the Python interpreter. There you must check that the version is 3.8 or above and that you are using a 64 bit installation. Once done you can safely close the window. It is possible for this procedure to fail or not be 100\% smooth. Some common issues are: \begin{enumerate} \item Pip is not installed: depending on your system and on your chosen installation method it is possible that pip is not installed as a separate command. In this scenario pip is usually installed as a Python submodule. So the command above becomes \begin{lstlisting} python -m pip install -r requirements.txt \end{lstlisting} \item On Windows the command \textbf{python} results in a \emph{command not found}: some installers on Windows install Python as \textbf{py}. \item pip module not found: some installers are more minimalist than others, so it is possible that pip is not installed with Python by default; check your installer's guide again to see how to install pip.
\end{enumerate} \subsection{Install Gurobi} Before installing Gurobi you must obtain a license; check on \href{https://www.gurobi.com/}{the official website}. Once you've obtained a license you can proceed with installing, and activating, Gurobi for your platform. Once done, the trickiest step in this configuration follows: installing gurobipy, the official Python bindings for Gurobi. This step can be done in two ways: \begin{itemize} \item The minimalist: using your system's CLI shell, change directory to the Gurobi installation root\footnote{\href{https://packages.gurobi.com/9.0/README.txt}{Check here}}. There search for the file \emph{setup.py}. Once found, run this command \begin{lstlisting} python setup.py install \end{lstlisting} once this is done (it is pretty fast), gurobipy is installed and ready. \item The easier: install the Anaconda package manager, a third party Python package manager, and follow the \href{https://www.gurobi.com/documentation/9.0/quickstart_mac/ins_the_anaconda_python_di.html}{instructions} provided by Gurobi. \end{itemize} \subsubsection{Troubleshooting} This is a good moment for testing gurobipy (see also the verification sketch at the end of this section). A quick way to test it is to open an interactive Python shell and import gurobipy. Open a Python shell as before and insert: \begin{lstlisting} import gurobipy \end{lstlisting} At this point, if no message is shown, you've correctly installed Python and gurobipy. If the interpreter shows a message saying \emph{ModuleNotFoundError}, this means that gurobipy is not installed. In both cases you can safely close the window.\\ Some common issues are: \begin{enumerate} \item wrong Python version: especially on Unix it is very common to have Python already installed but, except for some Linux distros, this default installation is not Python 3.8; in this scenario it is possible that gurobipy has been installed for another Python version. Possible fixes: \begin{enumerate} \item Change the default Python version to 3.8 \item Explicitly use python3.8 in lieu of python \item If possible remove the old Python installation (do this only if you are 110\% sure) \end{enumerate} \item wrong Python build: on Windows Python is compiled for both 32 and 64 bit. It is a common mistake to install the 32 bit version. \item gurobipy is installed but it does not work: check that Gurobi is correctly activated using the \textbf{grbgetkey} command. \end{enumerate}
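The checks described in the two troubleshooting sections above (Python version, 64 bit build, and the gurobipy import) can also be bundled into a small script. The snippet below is only a minimal sketch of such a check, not part of ks.py itself, and the file name \emph{check\_setup.py} is just an example.

\begin{lstlisting}
# check_setup.py -- minimal environment check (illustrative only)
import struct
import sys

# Python must be 3.8+ and a 64 bit build, as required by Gurobi.
assert sys.version_info >= (3, 8), "Python 3.8 or above is required"
assert struct.calcsize("P") * 8 == 64, "a 64 bit Python build is required"

# gurobipy must be importable; otherwise install/activate Gurobi first.
try:
    import gurobipy
except ModuleNotFoundError:
    sys.exit("gurobipy is not installed for this interpreter")

print("Python", sys.version.split()[0], "and gurobipy look fine")
\end{lstlisting}

Run it with \textbf{python check\_setup.py}: if it prints the final message, the interpreter and the bindings it checked are the ones ks.py will use.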
{ "alphanum_fraction": 0.68723373, "avg_line_length": 81.0357142857, "ext": "tex", "hexsha": "40746622cf23955db2d8123d3faef5cd11af871c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "12242b32643198bc0703258bda3f171d2a7ad682", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "Optimization-Algorithms/User-Guide", "max_forks_repo_path": "installation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "12242b32643198bc0703258bda3f171d2a7ad682", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "Optimization-Algorithms/User-Guide", "max_issues_repo_path": "installation.tex", "max_line_length": 256, "max_stars_count": null, "max_stars_repo_head_hexsha": "12242b32643198bc0703258bda3f171d2a7ad682", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "Optimization-Algorithms/User-Guide", "max_stars_repo_path": "installation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1479, "size": 6807 }
%============================================================================== \chapter{Introduction} \label{sec:intro} %============================================================================== The introduction usually gives a few pages of introduction to the whole subject, maybe even starting with the Greeks. For more information on \LaTeX{} and the packages that are available see for example the books of Kopka~\citep{kopka04} and Goossens et al.~\citep{goossens04}. A lot of useful information on particle physics can be found in the \enquote{Particle Data Book}~\citep{pdg2010}. I have resisted the temptation to put a lot of definitions into the file \texttt{thesis\_defs.sty}, as everyone has their own taste as to what scheme they want to use for names. However, a few examples are included to help you get started: \begin{itemize} \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt} \item cross-sections are measured in \si{\pb} and integrated luminosity in \si{\invpb}; \item the \KoS is an interesting particle; \item the missing transverse momentum, \pTmiss, is often called missing transverse energy, even though it is calculated using a vector sum. \end{itemize} Note that the examples of units assume that you are using the \textsf{siunitx} package. It is also probably a good idea to include a few well formatted references in the thesis skeleton. More detailed suggestions on what citation types to use can be found in the thesis guide~\citep{thesis-guide}: \begin{itemize} \item articles in refereed journals~\citep{pdg2010,Aad:2010ey}; \item a book~\citep{Halzen:1984mc}; \item a PhD thesis~\citep{tlodd:2012} and a Diplom thesis~\citep{mergelmeyer:2011}; \item a collection of articles~\citep{lhc:vol1}; \item a conference note~\citep{ATLAS-CONF-2011-008}; \item a preprint~\citep{atlas:perf:2009} (you can also use \texttt{@online} or \texttt{@booklet} for such things); \item something that is only available online~\citep{thesis-guide}. \end{itemize} Note that astronomy publications use citation commands defined by \textsf{natbib}. You should choose which one you want to use: \begin{itemize} \item \textbackslash citep: a citation~\citep{pdg2010,Aad:2010ey}; \item \textbackslash citet: a citation~\citet{pdg2010,Aad:2010ey}; \item \textbackslash citealt: a citation~\citealt{pdg2010,Aad:2010ey}. \end{itemize} At the end of the introduction it is normal to say briefly what comes in the following chapters. The lines at the end of this file are used by AUCTeX to specify which is the master \LaTeX{} file, so that you can compile your thesis directly within \texttt{emacs}. % Print the bibliography at the end of the chapter. \printbibliography[heading=subbibliography] %%% Local Variables: %%% mode: latex %%% TeX-master: "../mythesis" %%% End:
{ "alphanum_fraction": 0.7290622763, "avg_line_length": 41.7014925373, "ext": "tex", "hexsha": "d9a22ee42e4e90a32f35e41d903738567ba82574", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "cmp0xff/Masterarbeit", "max_forks_repo_path": "ubonn-thesis-current/thesis_skel/thesis_astro_intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "cmp0xff/Masterarbeit", "max_issues_repo_path": "ubonn-thesis-current/thesis_skel/thesis_astro_intro.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "cmp0xff/Masterarbeit", "max_stars_repo_path": "ubonn-thesis-current/thesis_skel/thesis_astro_intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 762, "size": 2794 }
\documentclass[output=paper]{langscibook} \author{Mahamane L. Abdoulaye\affiliation{Université Abdou Moumouni, Niamey}} \title{Reflexive constructions in Hausa} \abstract{This contribution describes reflexive constructions in Hausa (Chadic, Niger, Nigeria). The reflexive pronouns are based on the word \textit{kâi} ‘head, self’, in a possessive construction with a person affix that is coreferential with the clause subject (or sometimes with a preceding direct object or applied object). Subject-coreferential direct objects or applied objects are almost always expressed as reflexive pronouns (with the partial exception of the direct objects of some mental/sensation verbs). Subject-coreferential possessive NPs can optionally be expressed as reflexive pronouns but with an emphasis on the possessive relation. Subject-coreferential locative, benefactive, and instrumental/associative NPs are normally expressed as non-reflexive pronouns but they can also be optionally expressed as reflexive pronouns. The chapter also describes three different constructions that are related to the typical reflexive construction and which may be relevant for an account of its development.} \IfFileExists{../localcommands.tex}{%hack to check whether this is being compiled as part of a collection or standalone \input{../localpackages} \input{../localcommands} \input{../localhyphenation} \bibliography{localbibliography} %\togglepaper[5] }{} \begin{document} \maketitle \section{Introduction}\label{sec:Abdoulaye:1} Hausa (Chadic, Niger, Nigeria) generally requires a distinctive marking for the coreference between a subject NP and another NP in the minimal clause, in particular when the second NP is a direct object, an applied object, and, optionally, an adnominal possessive pronoun, or the object of certain prepositions. This distinctive marking, the reflexive pronoun, is built on the noun \textit{kâi} ‘head, self’ combined in a possessive construction with a person suffix referring to the antecedent (e.g., \textit{kâ-n-shì} ‘himself’, lit. ‘self-of.\textsc{m-3sg.m}’). An example is given in \REF{ex:Abdoulaye:1}: \ea%1 \label{ex:Abdoulaye:1} \gll Yaa bugè kânshì.\\ \textsc{3sg.m.cpl} hit \textsc{refl.3sg.m}\\ \glt `He hit himself.’ \z In sentence (\ref{ex:Abdoulaye:1}), the person/tense/aspect marker \textit{yaa} (or “subject pronoun” in Hausa linguistics) is coreferential with the person suffix \nobreakdash-\textit{shi}, which is embedded in a possessive construction with the noun \textit{kâi} `head, self’, forming the reflexive pronoun \textit{kânshì} `himself’. According to \citet[529]{Newman2000}, reflexive pronouns based on a word (ultimately) meaning `head’ are widespread among Chadic languages. This chapter describes the reflexive construction in Hausa, drawing heavily on \citet{Newman2000}, who gives the most detailed and exhaustive account of the construction in the language. The chapter also relies on the translation of the questionnaire sentences submitted to the judgment of informants (40 years old and up, mostly Katsinanci dialect speakers) as well as on data from published sources or collected otherwise, as indicated. The chapter also uses sentences constructed by the author, which are then checked with other native speakers. The chapter is structured as follows. \sectref{sec:Abdoulaye:2} gives the overview of the pronominal system in Hausa.
\sectref{sec:Abdoulaye:3}--\ref{sec:Abdoulaye:4} describe, respectively, the coreference patterns between the subject and the direct object and those between the subject and other syntactic functions. \sectref{sec:Abdoulaye:5} describes the coreference patterns between non-subject NPs. \sectref{sec:Abdoulaye:6} describes two types of self-intensifiers in Hausa. Finally, \sectref{sec:Abdoulaye:7} discusses the word \textit{kâi} in its usage as ‘self, oneself’ in compounds and fixed expressions. \section{Overview of Hausa personal pronouns}\label{sec:Abdoulaye:2} Hausa distinguishes various sets of pronouns depending on their syntactic function: the independent pronoun set (with a long final vowel or with two syllables), the object set with a reduced form (monosyllabic, and with a short final vowel), and the subject pronoun sets, which combine (and are sometimes fused) with the tense/aspect markers. Some of the pronoun sets are illustrated in Table~\ref{tab:Abdoulaye:1} (see \citealt[72ff]{Caron1991}; \citealt[476ff]{Newman2000} for more details). \begin{table}[ht] \centering \begin{tabular}{p{2cm}p{2cm}p{2cm}p{2cm}p{2cm}} \lsptoprule Person & Independent pronouns & Direct object pronouns & Completive subject pronouns & Future subject pronouns \\ \hline 1\textsc{sg} & nii & ni/nì & naa & zaa nì/zân\\ 2\textsc{sg.m} & kai & ka/kà & kaa & zaa kà\\ 2\textsc{sg.f} & kee & ki/kì & kin & zaa kì\\ 3\textsc{sg.m} & shii & shi/shì (ya/yà) & yaa & zaa shì/zâi\\ 3\textsc{sg.f} & ita & ta/tà & taa & zaa tà\\ 1\textsc{pl} & muu & mu/mù & mun & zaa mù\\ 2\textsc{pl} & kuu & ku/kù & kun & zaa kù\\ 3\textsc{pl} & suu & su/sù & sun & zaa sù\\ Impersonal & -- & -- & an & zaa à\\ \lspbottomrule \end{tabular} \caption{Some Hausa pronominal paradigms}\label{tab:Abdoulaye:1} \end{table} The independent pronouns appear in isolation, in topicalization, in nominal emphasis (e.g. \textit{ita} \textit{Maaɍìyaa} ‘as for Maria’), or as objects of some prepositions (e.g. \textit{dà} \textit{ita} ‘with her/it’). Direct object pronouns immediately following a verb assume a reduced form with a low or a high tone, as indicated in Table~\ref{tab:Abdoulaye:1} (the forms \textit{shi} vs. \textit{ya} for the 3\textsuperscript{rd} person masculine singular are free variants). Besides the regular 1\textsuperscript{st}, 2\textsuperscript{nd}, and 3\textsuperscript{rd} person, the subject pronouns also have an impersonal form, with usages similar to French \textit{on}, and for which there are no corresponding independent or direct object forms, as indicated. Since the subject pronouns are often morphologically fused with the tense/aspect markers, they are generally obligatory, whether or not a noun subject is specified in the clause. However, possessive pronouns are the pronouns most relevant for the structure of the reflexive markers, in particular the adnominal ‘Noun-of-Pronoun’ possessive constructions, which can have both a possessive and a reflexive meaning with the noun \textit{kâi} ‘head, self’, as seen in Table~\ref{tab:Abdoulaye:2} for the Katsinanci dialect.
‘head that.of.\textsc{m-3sg.m}’) & kâ-n-shì/kâ-n-sà ‘his head, himself’\\ \cmidrule{3-3} kâi na Abdù ‘head that.of.\textsc{m} Abdu’ & & kâi-na-s ‘his head’\\ \cmidrule{2-3} & kâi naa-yà ‘his head’ & kâ-n-yà ‘his head’\\ \cmidrule{3-3} & & kâi-nâ-i ‘his head’\\ \cmidrule{1-3} kâi na Maaɍìyaa ‘head that.of.M Maria’ & kâi naa-tà ‘her head’ (lit. ‘head that.of.M-3SG.F’) & kâ-n-tà ‘her head, herself’\\ %\multirow{4}{*}{\parbox{3.5cm}{kâi na Abdù \\‘head that.of.\textsc{m} Abdu’}} & \multirow{2}{*}{\parbox{4.5cm}{kâi naa-shì/naa-sà ‘his head’ \\ (lit. ‘head that.of.\textsc{m-3sg.m}’)}} & {kâ-n-shì/kâ-n-sà ‘his head, himself’}\\ % & & kâi-na-s ‘his head’\\ % \multirow{2}{*}{kâi naa-yà ‘his head’} & kâ-n-yà ‘his head’\\ % & & kâi-nâ-i ‘his head’\\ %%‘her head’ (lit. ‘head that.of.\textsc{m-3sg.f}’} & kâ-n-tà ‘her head, herself’\\ % kâi na Maaɍìyaa ‘head that.of.M Maria’ & \tabincell{l}{kâi naa-tà ‘her head’\\(lit. ‘head that.of.\textsc{m-3sg.f}’)} & kâ-n-tà ‘her head, herself’\\ \lspbottomrule \end{tabularx} \caption{Attributive possessive constructions in Hausa (3\textsuperscript{rd} person singular, Katsinanci dialect)} \label{tab:Abdoulaye:2} \end{table} To better show the structure of the possessive constructions in Hausa, the first column of Table~\ref{tab:Abdoulaye:2} gives the full ‘Noun-of-Noun’ constructions, where a masculine singular possessee noun (\textit{kâi} ‘head’) combines with a masculine and a feminine possessor noun (\textit{Abdù} and \textit{Maaɍìyaa}, respectively). In this column, the nouns are syntactically linked by a pronoun that refers and agrees in gender and number with the possessee noun \textit{kâi} (with a feminine possessee noun, the linking pronoun would be \textit{ta} ‘that.of.\textsc{f}’, as in \textit{mootàa} \textit{ta} \textit{Abdù} ‘the car of Abdu’, lit. ‘car that.of.F Abdu’; all plural possessee nouns use the pronoun \textit{na}; also, the ‘Noun-of-Noun’ constructions have reduced versions \textit{kâ-n} \textit{Abdù} ‘head of Abdu’/\textit{mootà-ɍ} \textit{Abdù} ‘car of Abdu’ which do not concern us here). In the second column, the noun \textit{Abdù} is replaced with a possessive pronoun, either \textit{shì}/\textit{sà} or \textit{yà} ‘\textsc{sg.m}’(cf. Table~\ref{tab:Abdoulaye:1}). In the full ‘Noun-of-Pronoun’ constructions of the second column, a possessive pronoun replaces the possessive noun (lit. ‘head of him/her’). These constructions are reduced in the third column in two ways: If the linking pronoun is reduced (\textit{na} > \textit{\nobreakdash-n}), then the derived form is ambiguous between a possessive and a reflexive form, as indicated. If, on the contrary, it is the possessive pronoun that is reduced (\textit{shì/sà} > \textit{\nobreakdash-s}) then only the possessive meaning is possible. When the variant \textit{yà} is used, as seen in the second row of the second column, again for many speakers, the resulting reduced forms do not have a reflexive use in Katsinanci dialect, no matter the reduction pattern followed (the western dialects, which only have the \textit{kâinâi} form, also use it as reflexive pronoun; see \citealt[74]{Caron1991}; see also the discussion in \sectref{sec:Abdoulaye:7}). With the 3\textsuperscript{rd} person feminine singular pronoun \textit{tà} (in the last row of Table~\ref{tab:Abdoulaye:2}), only the linking pronoun reduction is possible and the form is ambiguous between a possessive and a reflexive form. It may be noted that the reduced forms are more frequent than the full forms. 
The reflexive forms in Table~\ref{tab:Abdoulaye:2} are clearly “Head” reflexives in \citeauthor{Faltz1985}'s (\citeyear{Faltz1985}:~32f, 44) typology, given their composite nature incorporating a head noun, a linking pronoun, and a possessive pronoun. Nonetheless, they will be referred to as “reflexive pronouns”, following a usage now established in Hausa literature (see also \citealt[74]{Caron1991}, \citealt[522]{Newman2000}; \citealt[413]{Jaggar2001}; but see \citealt[117]{Wolff1993} for a different label). Following a recent proposal (\citealt{Will2019}, see also \citealt[117]{Wolff1993}), I assume that the meaning of \textit{kâi} as ‘self’ (instead of ‘head’) is the meaning relevant to the reflexive pronouns (see the discussion in \sectref{sec:Abdoulaye:7}). Also, to simplify the data presentation, the reflexive pronouns will be glossed globally as ‘\textsc{refl}’ plus the person features (e.g., \textit{kânshì} ‘\textsc{refl.3sg.m}’, instead of \textit{kâ-n-shì} ‘self-of.\textsc{m-3sg.m}’). Finally, although Table~\ref{tab:Abdoulaye:2} focuses on the 3rd person, the pronouns for all persons in Table~\ref{tab:Abdoulaye:1} have corresponding reflexive pronouns, as we will see in the data throughout the chapter. The next section looks at the subject/object coreference. \section{Subject and direct object coreference} \label{sec:Abdoulaye:3} In conformity with the general tendencies (see \citealt[16]{Haspelmath2020a} and references therein), sentences in Hausa with coreferring subject and direct object require -- with a few exceptions -- a distinctive reflexive marking. The following subsections present the basic uses of the reflexive pronouns, the contrast between exact and inclusive coreference, the contrast between extroverted and introverted verbs, and the contrast between body-part and whole-body actions. \subsection{Basic uses in subject-object coreference}\label{sec:Abdoulaye:3.1} Nearly all transitive verbs in Hausa require the reflexive form of the direct object when it is coreferential with the subject. This is illustrated in (\ref{ex:Abdoulaye:2}): \ea%2 \label{ex:Abdoulaye:2} \ea\label{ex:Abdoulaye:2a} \gll Taa yàbi kântà.\\ \textsc{3sg.f.cpl} praise \textsc{refl.3sg.f}\\ \glt `She praised herself.’ \ex \label{ex:Abdoulaye:2b} \gll Ta-nàa yàbo-n kântà.\\ \textsc{3sg.f-ipfv} praise-of.\textsc{m} \textsc{refl.3sg.f}\\ \glt`She is praising herself.’ \ex \label{ex:Abdoulaye:2c} \gll Mutàanê-n sun kashè kânsù.\\ people-\textsc{def} \textsc{3pl.cpl} kill \textsc{refl.3pl}\\ \glt `The men killed themselves.’ \ex \label{ex:Abdoulaye:2d} \gll Yaa reenà kânshì.\\ \textsc{3sg.m.cpl} belittle \textsc{refl.3sg.m}\\ \glt `He lost confidence in himself/renounced his ambitions.’ \ex \label{ex:Abdoulaye:2e} \gll Naa ga kâinaa cikin maduubii.\\ \textsc{1sg.cpl} see \textsc{refl.1sg} in mirror\\ \glt `I saw myself in the mirror.’ \z \z The sentences in (\ref{ex:Abdoulaye:2}) illustrate basic direct object structures. Notably, most Hausa researchers consider that \textit{kântà} in the imperfective sentence (\ref{ex:Abdoulaye:2b}), where it appears formally as the “possessor” of the verbal noun \textit{yàboo} ‘praising’, is the sentence direct object (it can be focused or questioned like the object of the basic verb \textit{yàbi} ‘praise’ in \REF{ex:Abdoulaye:2a}, but unlike true adnominal possessive nouns like \textit{Abdù} in \textit{gidan} \textit{Abdù} ‘the house of Abdu’). 
Except for the verb \textit{ga/gan/ganii} ‘see’ in (\ref{ex:Abdoulaye:2e}), the reflexive pronouns in sentences (\ref{ex:Abdoulaye:2}) are obligatory. In sentence (\ref{ex:Abdoulaye:2c}), like in its English equivalent, the men could have killed themselves deliberately or by accident, separately or together (mutuality would require the reciprocal marking \textit{juunaa} ‘each other’). When a non-reflexive pronoun is used as direct object, then a disjoint reference interpretation is obligatory. This is illustrated in (\ref{ex:Abdoulaye:3}): \ea%3 \label{ex:Abdoulaye:3} \ea \label{ex:Abdoulaye:3a} \gll Taa\textsubscript{1} yàbee tà\textsubscript{2}\\ \textsc{3sg.f.cpl} praise \textsc{3sg.f}\\ \glt `She praised her.’ \ex \label{ex:Abdoulaye:3b} \gll Mutàanê-n\textsubscript{1} sun kashèe sù\textsubscript{2}\\ people-\textsc{def} \textsc{3pl.cpl} kill \textsc{3pl}\\ \glt `The men killed them.’ \z \z Sentences (\ref{ex:Abdoulaye:3a}--\ref{ex:Abdoulaye:3b}) correspond to sentences (\ref{ex:Abdoulaye:2a}) and (\ref{ex:Abdoulaye:2c}), respectively. One may note that the reflexive pronoun, being morphosyntactically a noun, behaves like regular nouns in triggering the pre-nominal form of the verb (hence the contrast between \textit{yàbi} and \textit{yàbee} ‘praise’; see \citealt{Newman2000}:~627 for a complete description). Beside typical direct objects, the reflexive pronouns also occur in atypical direct object positions, such as in double object constructions, or as object of complex predicates, as seen in (\ref{ex:Abdoulaye:4}--\ref{ex:Abdoulaye:5}): \ea%4 \label{ex:Abdoulaye:4} \ea \label{ex:Abdoulaye:4a} \gll Taa hanà kântà kwaanaa.\\ \textsc{3sg.f.cpl} deny \textsc{refl.3sg.f} sleep\\ \glt `She prevented herself from sleeping.’ \ex \label{ex:Abdoulaye:4b} \gll Yaa biyaa kânshì Nairàa goomà.\\ \textsc{3sg.m.cpl} pay \textsc{refl.3sg.m} Naira ten\\ \glt `Ali paid himself 10 Nairas.’ \z \z \ea%5 \label{ex:Abdoulaye:5} \ea \label{ex:Abdoulaye:5a} \gll Abdù yaa mayaɍ\_dà kânshì waawaa.\\ Abdu \textsc{3sg.m.cpl} return.\textsc{caus} \textsc{refl.3sg.m} idiot\\ \glt `Abdu turned himself into an idiot.’ \ex \label{ex:Abdoulaye:5b} \gll Abdù yaa maidà kânshì waawaa.\\ Abdu \textsc{3sg.m.cpl} return.\textsc{caus} \textsc{refl.3sg.m} idiot\\ \glt `Abdu turned himself into an idiot.’ \z \z In sentences (\ref{ex:Abdoulaye:4a}--\ref{ex:Abdoulaye:4b}), the reflexive pronouns are dative/deprivative arguments (\textit{hanà} basically means ‘deny’) and such arguments, when present, are the true direct objects of the verbs, not the theme arguments, which are placed away from the verb. Example \REF{ex:Abdoulaye:5a} illustrates a complex causative predicate, made up of the basic verb \textit{mayà} ‘replace, repeat’ and the particle \textit{dà} in a close-knit syntax. The two parts can in fact merge into one word, as shown in the equivalent sentence (\ref{ex:Abdoulaye:5b}). As reported in \citet[524]{Newman2000}, a reflexive pronoun can alternate with a coreferential non-reflexive pronoun in direct object position with verbs he characterized as “mental/sensation” verbs.
This is illustrated in (\ref{ex:Abdoulaye:6})--(\ref{ex:Abdoulaye:7}): \ea%6 \label{ex:Abdoulaye:6} \ea \label{ex:Abdoulaye:6a} \gll Naa ganee nì cikin maduubii.\\ \textsc{1sg.cpl} see \textsc{1sg} in mirror\\ \glt `I saw myself in the mirror.’ \ex \label{ex:Abdoulaye:6b} \gll Naa ga kâinaa cikin maduubii.\\ \textsc{1sg.cpl} see \textsc{refl.1sg} in mirror\\ \glt `I saw myself in the mirror.’ \z \z \ea%7 \label{ex:Abdoulaye:7} \ea \label{ex:Abdoulaye:7a} \gll Sai Bàlki\textsubscript{1} ta gan tà\textsubscript{1/2} cikin fîm.\\ then Balki \textsc{3sg.f.rp} see \textsc{3sg.f} in film\\ \glt `Then/suddenly, Balki saw herself in the movie.’ (cf. Sai Bàlki ta ga kântà cikin fîm.) \ex \label{ex:Abdoulaye:7b} \gll Yâara\textsubscript{1} sun jii sù\textsubscript{1/2} cikin ɍeediyòo.\\ children \textsc{3pl.cpl} hear \textsc{3pl} in radio\\ \glt `The children heard themselves on the radio.’ (cf. Yâara sun ji kânsù cikin ɍeediyòo.) \z \z In (\ref{ex:Abdoulaye:6a})--(\ref{ex:Abdoulaye:6b}), in the 1\textsuperscript{st} person, a non-reflexive pronoun can alternate with a reflexive pronoun with the same interpretation. For the 3\textsuperscript{rd} person in (\ref{ex:Abdoulaye:7a})--(\ref{ex:Abdoulaye:7b}), a non-reflexive pronoun can refer to the subject or to some other participant, giving rise to a disjoint reference interpretation. The alternative sentences given with reflexive pronouns are naturally unambiguous. There are, however, some strong restrictions on the alternation. For example, \citet[524]{Newman2000} lists 13 verbs allowing the alternation. Secondly, subject-coreference with a non-reflexive pronoun is more acceptable in the 1\textsuperscript{st} and 2\textsuperscript{nd} person than in the 3\textsuperscript{rd} person. For example, in Katsinanci dialect, the coreferential 3\textsuperscript{rd} person non-reflexive pronoun is restricted to about six verbs: \textit{ganii} ‘see’, \textit{jii} ‘hear, feel’, \textit{soo} ‘want’, \textit{sàamu} ‘find (oneself in a situation)’, \textit{gaanèe} ‘recognize’, and \textit{san} ‘be aware (of one’s own inclinations)’. Also, as hinted at in \citet[524]{Newman2000}, the subject-coreferential 3\textsuperscript{rd} person pronoun is restricted to the Completive (with an anterior value) and the perfective aspect. This is illustrated in (\ref{ex:Abdoulaye:8}):
These restrictions are in accordance with the general tendency whereby the 3\textsuperscript{rd} person requires the reflexive marking more than the 1\textsuperscript{st} and 2\textsuperscript{nd} person (for a discussion see \citealt[43]{Haspelmath2008} and references cited there).\footnote{The intransitive motion verbs \textit{jee} ‘go’ and \textit{zoo} ‘come’ can immediately be followed by a pronoun agreeing with the subject, a pronoun known as the Chadic “intransitive copy pronoun” (the pronoun is more common in other Chadic languages; e.g., \textit{sun} \textit{jee} \textit{sù} \textit{makaɍantaa}, lit. ‘they went they to school’, see \citealt[407]{Jaggar2001}; \citealt[479]{Newman2000} and references cited there). In another variant of the phenomenon, a possessive pronoun agreeing with the subject is adjoined to nominalized intransitive motion and stance verbs (e.g., \textit{yaa} \textit{koomàawa-ɍ{}-shì makaɍantaa}, lit. ‘he.CPL returning-of-him [i.e., he returned] to school’). Reflexive pronouns are not possible in both cases.} \subsection{Contrast between exact and inclusive coreference} \label{sec:Abdoulaye:3.2} As reported in \citet[524]{Newman2000}, Hausa marks the contrast between exact coreference, e.g., between a singular subject and an agreeing singular reflexive pronoun, and inclusive coreference between a singular subject and a plural reflexive pronoun. This is illustrated in (\ref{ex:Abdoulaye:9}): \ea%9 \label{ex:Abdoulaye:9} \ea \label{ex:Abdoulaye:9a} \gll Màccê-n\textsubscript{1} taa yàbi kânsù\textsubscript{1+x} \\ woman-\textsc{def} \textsc{3sg.f.cpl} praise \textsc{refl.3pl}\\ \glt `The woman praised herself and the others in her group.’ \ex \label{ex:Abdoulaye:9b} \gll Yaa\textsubscript{1} kaarè kânsù\textsubscript{1+x} dàgà muugù-n zàrgii.\\ \textsc{3sg.m.cpl} protect \textsc{refl.3pl} from serious-of.\textsc{m} charge\\ \glt `He defended himself and the others in his group against a serious charge.’ \z \z Beside the direct object position, \citet[524]{Newman2000} shows that the inclusive reflexive pronoun is also possible in the applied object position (see \sectref{sec:Abdoulaye:4.1} below). \subsection{Contrast between extroverted and introverted verbs} \label{sec:Abdoulaye:3.3} Reflexive marking in Hausa is apparently sensitive to the contrast between extroverted and introverted verbs (on this contrast see \citealt[44]{Haspelmath2008} and references cited there). With the extroverted verbs, defined as verbs expressing socially antagonistic actions, such as in Hausa \textit{cìiji} ‘bite’, \textit{hàlbi} ‘shoot’, etc., reflexive marking is obligatory in case of coreference. This is illustrated in (\ref{ex:Abdoulaye:10}): \ea%10 \label{ex:Abdoulaye:10} \ea \label{ex:Abdoulaye:10a} \gll Kàree yaa cìiji kânshì.\\ dog \textsc{3sg.m.cpl} bite \textsc{refl.3sg.m}\\ \glt `The dog bit itself.’ \ex \label{ex:Abdoulaye:10b} \gll Yaarinyàa taa tsàni kântà.\\ girl \textsc{3sg.f.cpl} hate \textsc{refl.3sg.f}\\ \glt `The girl hates herself.’ \ex \label{ex:Abdoulaye:10c} \gll Ɗan\_sìyaasàa yaa sòoki kânshì.\\ politician \textsc{3sg.m.cpl} criticize \textsc{refl.3sg.m}\\ \glt `The politician criticized himself.’ \ex \label{ex:Abdoulaye:10d} \gll Soojà yaa hàlbi kânshì.\\ soldier \textsc{3sg.m.cpl} shoot \textsc{refl.3sg.m}\\ \glt `The soldier shot himself.’ \z \z Beside the obligatory reflexive marking in all sentences (\ref{ex:Abdoulaye:10}), one can also note that extroverted sentences can have a simple ‘Subject + Verb + Object’ structure. 
By contrast, introverted verbs, defined as verbs expressing body-care actions and the like, may not appear in a simple ‘Subject + Verb + Object’ structure in their autopathic use. This is illustrated in (\ref{ex:Abdoulaye:11}): \ea%11 \label{ex:Abdoulaye:11} \ea \label{ex:Abdoulaye:11a} \gll Yaaròo ya-nàa [yi-n] wankaa.\\ boy \textsc{3sg.m-ipfv} do-of.\textsc{m} wash\\ \glt `The boy was washing himself.’ \ex \label{ex:Abdoulaye:11b} \gll Yaarinyàa taa yi wankaa.\\ girl \textsc{3sg.f.cpl} do wash\\ \glt `The girl washed.’ \ex \label{ex:Abdoulaye:11c} \gll Yaa yi askìi.\\ \textsc{3sg.m.cpl} do haircut\\ \glt `He had a haircut (at the barber).’ Or: ‘He did a haircut (to himself).’ \ex \label{ex:Abdoulaye:11d} \gll Abdù yaa sâa kaayaa.\\ Abdu \textsc{3sg.m.cpl} put.on clothes\\ \glt `Abdu got dressed (dressed himself).’ \ex \label{ex:Abdoulaye:11e} \gll Abdù yaa shiryàa.\\ Abdu \textsc{3sg.m.cpl} prepare\\ \glt `Abdu got ready.’ \z \z Sentence (\ref{ex:Abdoulaye:11a}) is in the imperfective aspect, but the predicate \textit{wankaa} ‘wash, bathe, shower’ is more like an action noun that is the direct object of an understood generic verb \textit{yi} ‘do’ (see \citealt[281]{Newman2000}; \citealt[171]{Jaggar2001}). Indeed, the underlying \textit{yi} ‘do’ verb is obligatory when the sentence is in the Completive, as seen in (11b\nobreakdash-c) (in fact even in the imperfective, \textit{yi} is acceptable in the negative, e.g. \textit{bâi} \textit{yîn} \textit{wankaa} ‘he doesn’t wash’ or if \textit{wankaa} is modified, e.g., \textit{mun} \textit{iskè} \textit{yanàa} \textit{yî\nobreakdash-n} \textit{wani} \textit{irìn} \textit{wankaa} ‘we find him washing himself in a peculiar way’). In (\ref{ex:Abdoulaye:11d}) the sentence does have the structure ‘Subject + Verb + Object’ but the object is not coreferential with the subject. Finally, in (\ref{ex:Abdoulaye:11e}) the sentence is intransitive. In all cases, a reflexive pronoun is not possible. It is possible, however, to express the introverted action with a reflexive pronoun in the applied object position, as seen in the following (for more on the applied object, see \sectref{sec:Abdoulaye:4.1}): \ea%12 \label{ex:Abdoulaye:12} \ea \label{ex:Abdoulaye:12a} \gll Yaaròo ya-nàa mà kânshì wankaa. \\ child \textsc{3sg.m-ipfv} \textsc{appl} \textsc{refl.3sg.m} wash\\ \glt `The boy is washing by himself/on his own.’ (= Yaaròo yanàa wankaa dà kânshì) \ex \label{ex:Abdoulaye:12b} \gll Yaa yi mà kânshì askìi. \\ \textsc{3sg.m.cpl} do \textsc{appl} \textsc{refl.3sg.m} haircut\\ \glt `He did a haircut by himself.’ (= Yaa yi askìi dà kânshì) \z \z Sentences (\ref{ex:Abdoulaye:12}) are used in contexts where it is assumed that the subject referent ordinarily cannot carry out the action but, as it happens, they did (for example a child may be too young to perform the action alone). These sentences, as indicated, are semantically equivalent to the ‘by himself’ emphatic sentences discussed later in \sectref{sec:Abdoulaye:6.1}, but formally they involve a bona fide reflexive pronoun in a verbal argument position, as we will see in \sectref{sec:Abdoulaye:4.1}. To summarize, it can be said that overall Hausa clearly marks the contrast between extroverted and introverted verbs, and that only the former regularly require the reflexive pronoun in autopathic contexts.
\subsection{Contrast between body-part and whole-body actions} \label{sec:Abdoulaye:3.4} Actions on specified body-parts are expressed in Hausa in a simple ‘Subject + Verb + Object’ structure, as seen in (\ref{ex:Abdoulaye:13}). \ea%13 \label{ex:Abdoulaye:13} \ea \label{ex:Abdoulaye:13a} \gll Yaa askè geemèe/ geemè-n-shì.\\ \textsc{3sg.m.cpl} shave beard beard-of.\textsc{m-3sg.m}\\ \glt `He shaved (himself).’ Or: ‘He had his beard shaved (at the barber).’ \ex \label{ex:Abdoulaye:13b} \gll Yaa wankè kâi/ kâ-n-shì.\\ \textsc{3sg.m.cpl} wash head/ head-of.\textsc{m-3sg.m}\\ \glt `He cleaned his head.’ \ex \label{ex:Abdoulaye:13c} \gll Yaa wankè jìkii/ jìki-n-shì.\\ \textsc{3sg.m.cpl} wash body body-of.\textsc{m-3sg.m}\\ \glt `He did a quick toilet.’ (Lit. ‘he cleaned his body’) \ex \label{ex:Abdoulaye:13d} \gll Yaa shaacè kâi/ kâ-n-shì.\\ \textsc{3sg.m.cpl} comb head/ head-of.\textsc{m-3sg.m}\\ \glt `He combed his head [hair].’ \z \z In sentences (\ref{ex:Abdoulaye:13}), simple verbs are followed by their direct objects expressing a body-part. There is hence a clear contrast with whole-body autopathic actions, which are expressed with the verb \textit{yi} ‘do’ plus a nominal (a verbal or an action noun) specifying the action, as seen in (\ref{ex:Abdoulaye:11})--(\ref{ex:Abdoulaye:12}) above (one may consider sentence (\ref{ex:Abdoulaye:11c}) to describe an action viewed holistically although it concerns the head only, in contrast to sentence (\ref{ex:Abdoulaye:13a}) with a specified body-part \textit{geemèe} ‘beard’). A possessive pronoun referring to the subject can be adjoined to the body-part noun in sentences (\ref{ex:Abdoulaye:13}), as indicated, although this is wholly unnecessary in normal contexts. One may note that even with the possessive \textit{kânshì} ‘his head’, sentences (\ref{ex:Abdoulaye:13b}) and (\ref{ex:Abdoulaye:13d}) are not really ambiguous, i.e., they do not have the reflexive meaning ‘he washed himself’ or ‘he combed himself’, respectively.\footnote{Sentence \ref{ex:Abdoulaye:13b}, with \textit{kânshì}, can take the reflexive meaning only in the context of a ceremonial cleansing. For example, in a marriage, a groom is ceremonially “washed” normally by female relatives (see \textit{sun} \textit{wankè} \textit{angòo} ‘they washed/cleansed the groom’). But a groom can also choose to retire aside and throw the ceremonial water on himself and, in that case, sentence (\ref{ex:Abdoulaye:13b}) with \textit{kânshì} ‘himself’ can be used to describe the situation. (\ref{ex:Abdoulaye:13b}), still with \textit{kânshì}, can also be used in the sense ‘he cleared himself (of some accusations).’} Sentence (\ref{ex:Abdoulaye:13c}) illustrates an expression \textit{wankè} \textit{jìkii} ‘have a quick toilet’ which, despite using the noun \textit{jìkii} ‘body’, in fact refers to the cleaning of the limbs and face. Similarly, in sentence (\ref{ex:Abdoulaye:13d}) the hair is combed. To conclude this section, one can say that in Hausa the use of a reflexive pronoun is obligatory for a direct object coreferential with the subject, except with a few mental/sensation verbs. Hausa also does not allow a reflexive pronoun in subject function. \section{Coreference between the subject and various semantic roles} \label{sec:Abdoulaye:4} Beside the direct object position, reflexive pronouns can also appear in positions not directly governed by the main verb. This section reviews the applied nominal position, the possessive NP, and the objects of various prepositions. 
The section also looks at long distance coreference cases. \subsection{Recipients and other \textit{mà/wà}{}-marked applied nominals}\label{sec:Abdoulaye:4.1} The applied nominal is the direct object of the applicative marker \textit{mà/wà}, a free particle that stands in a close-knit syntactic relation with the verb (see \citealt{Tuller1984}, \citealt{Abdoulaye1996}, \citealt{Newman2000}:~280). The applied object assumes a variety of semantic roles, chiefly the recipient role, but also the benefactive, malefactive, locative, and possessor roles, and other minor unspecified roles (most of these roles also have their proper, i.e., non-applied, morphosyntax, as discussed later in this section). Applied nominals that are coreferential with the subject are most naturally expressed as reflexive pronouns, as seen in (\ref{ex:Abdoulaye:14}): \ea%14 \label{ex:Abdoulaye:14} \ea \label{ex:Abdoulaye:14a} \gll John yaa bàa (wà) kânshì shaawaɍàa. \\ John \textsc{3sg.m.cpl} give \textsc{appl} \textsc{refl.3sg.m} advice\\ \glt `John advised himself/changed his mind.’ \ex \label{ex:Abdoulaye:14b} \gll Sun aikoo mà kânsù wàsiiƙàa.\\ \textsc{3pl.cpl} send \textsc{appl} \textsc{refl.3pl} letter\\ \glt `They sent a letter to themselves.’ \ex \label{ex:Abdoulaye:14c} \gll Yaarinyàa taa dafàa mà kântà àbinci.\\ girl \textsc{3sg.f.cpl} cook \textsc{appl} \textsc{refl.3sg.f} food\\ \glt `The girl cooked for herself.’ \ex \label{ex:Abdoulaye:14d} \gll Yaa zoo yaa ganaɍ mà kânshì àlamàɍî-n.\\ \textsc{3sg.m.cpl} come \textsc{3sg.m.cpl} see \textsc{appl} \textsc{refl.3sg.m} situation{}-\textsc{def}\\ \glt `He came and saw the situation for himself.’ \z \z Sentences (\ref{ex:Abdoulaye:14a}--\ref{ex:Abdoulaye:14c}) illustrate recipient and benefactive nominals expressed as reflexive pronouns following the applied marker \textit{mà/wà} (the applied marker is normally omitted with the verb \textit{bâa} ‘give’, as seen in \ref{ex:Abdoulaye:14a}). Sentence (\ref{ex:Abdoulaye:14d}) shows that a mental/sensation verb, \textit{ganii} ‘see’, requires a reflexive applied object pronoun under subject coreference (by contrast, we have seen in the discussion of (\ref{ex:Abdoulaye:6}--\ref{ex:Abdoulaye:7}) that mental/sensation verbs can allow a non-reflexive subject-coreferential direct object pronoun). 
When the non-reflexive pronoun is used in the applied object position, then a disjoint reference reading is normally obligatory, as seen next in (\ref{ex:Abdoulaye:15}), unless there is a partial coreference between a singular subject and a plural applied object pronoun, as illustrated in (\ref{ex:Abdoulaye:16}): \ea%15 \label{ex:Abdoulaye:15} \ea \label{ex:Abdoulaye:15a} \gll John\textsubscript{1} yaa baa shì\textsubscript{*1/2} shaawaɍàa.\\ John \textsc{3sg.m.cpl} give \textsc{3sg.m} advice\\ \glt `John advised him.’ \ex \label{ex:Abdoulaye:15b} \gll Sun\textsubscript{1} aikoo mà-sù\textsubscript{*1/2} wàsiiƙàa.\\ \textsc{3pl.cpl} send \textsc{appl-3pl} letter\\ \glt `They sent them a letter.’ \ex \label{ex:Abdoulaye:15c} \gll *Naa jaawoo ma-nì wàhalàa.\\ \textsc{1sg.cpl} draw \textsc{appl-1sg} troubles\\ \glt `I invited troubles on myself.’ \z \z \ea%16 \label{ex:Abdoulaye:16} \ea \label{ex:Abdoulaye:16a} \gll Naa\textsubscript{1} \textit{bâa} \textit{kânmù}\textsubscript{1+x}\textit{/} \textit{baa} \textit{mù}\textsubscript{1+x} wàhalàa.\\ \textsc{1sg.cpl} give \textsc{refl.1pl} give \textsc{1pl} troubles\\ \glt `I (uselessly) tired us.’ \ex \label{ex:Abdoulaye:16b} \gll Kaa\textsubscript{1} jaawoo \textit{mà} \textit{kânkù}\textsubscript{1+x}\textit{/} \textit{ma-kù}\textsubscript{1+x} wàhalàa.\\ \textsc{2sg.m.cpl} draw \textsc{appl} \textsc{refl.2pl} \textsc{appl-2pl} troubles\\ \glt `You invited troubles on you and your associates.’ \ex \label{ex:Abdoulaye:16c} \gll Yaa\textsubscript{1} jaawoo \textit{mà} \textit{kânsù}\textsubscript{1+x}\textit{/} \textit{ma-sù}\textsubscript{?1+x/2}\textit{\textsubscript{} }wàhalàa.\\ \textsc{3sg.m.cpl} draw \textsc{appl} \textsc{refl.3pl} \textsc{appl-3pl} troubles\\ \glt `He invited troubles on himself and his associates.’ OR: `He invited troubles on them.’ \z \z Sentences (\ref{ex:Abdoulaye:15a}--\ref{ex:Abdoulaye:15c}) show that a non-reflexive pronoun in the applied position, despite matching agreement features, cannot be coreferential with the subject. Sentence (\ref{ex:Abdoulaye:15c}) in particular shows that the non-reflexive pronoun is not possible even for the 1\textsuperscript{st} person (the same is true for the 2\textsuperscript{nd} person as well). But in plural pronoun constructions, as illustrated in (\ref{ex:Abdoulaye:16}a-b), the 1\textsuperscript{st} and 2\textsuperscript{nd} person may allow a non-reflexive subject-coreferential pronoun in the applied position, while for the 3\textsuperscript{rd} person the reflexive pronoun is strongly preferred by speakers, as seen in (\ref{ex:Abdoulaye:16c}). 
\subsection{Possessive NPs} \label{sec:Abdoulaye:4.2} When a possessive NP is coreferential with the subject, Hausa requires a simple possessive pronoun in basic, pragmatically neutral sentences, as illustrated in (\ref{ex:Abdoulaye:17}): \ea%17 \label{ex:Abdoulaye:17} \ea \label{ex:Abdoulaye:17a} \gll Taa\textsubscript{1} ɗàuki laimà-ɍ{}-tà\textsubscript{1/2}.\\ \textsc{3sg.f.cpl} take umbrella-of.\textsc{f-3sg.f}\\ \glt `She took her umbrella.’ \ex \label{ex:Abdoulaye:17b} \gll John\textsubscript{1} ya-nàa kaɍàntà littaafì-n-shì\textsubscript{1/2}.\\ John \textsc{3sg.m-ipfv} read book-of.\textsc{m-3sg.m}\\ \glt `John is reading his book.’ \ex \label{ex:Abdoulaye:17c} \gll Maatâ-n\textsubscript{1} sun shaarè ɗaakì-n-sù\textsubscript{1/2}.\\ women-\textsc{def} \textsc{3pl.cpl} sweep room-of.\textsc{m-3pl}\\ \glt `The women swept their rooms.’ \z \z As shown in (\ref{ex:Abdoulaye:17}), the simple possessive pronoun can be coreferential with the subject or not. Nonetheless, and as \citet[525]{Newman2000} notes, the coreference between the subject and the possessive pronoun can also be expressed as a reflexive pronoun, but with a marked emphasis, as seen in (\ref{ex:Abdoulaye:18}): \ea%18 \label{ex:Abdoulaye:18} \ea \label{ex:Abdoulaye:18a} \gll Sun ginà gida-n-sù.\\ \textsc{3pl.cpl} build house-of.\textsc{m-3pl}\\ \glt `They built their house.’ \ex \label{ex:Abdoulaye:18b} \gll Sun ginà \textit{gida-n} \textit{kânsù}/ \textit{gidaa} \textit{na} \textit{kânsù/} gida-n-sù na kânsù.\\ \textsc{3pl.cpl} build house-of.\textsc{m} \textsc{refl.3pl} \textsc{refl.3pl} house-of.\textsc{m-3pl} one.of.\textsc{m} \textsc{refl.3pl}\\ \glt `They built their own house.’ \ex \label{ex:Abdoulaye:18c} \gll {Ùbaa-naa} {na} {kâinaa!} {(cf. *ùba-n kâinaa/*ùbaa na kâinaa)}\\ {father-of.\textsc{m.1sg}} {one.of.\textsc{m}} \textsc{refl.1sg} {}\\ \glt `Hey you my dear [for me alone] “uncle”!’ \z \z Sentence (\ref{ex:Abdoulaye:18a}), with a non-reflexive pronoun, has a pragmatically neutral interpretation, just like sentences (\ref{ex:Abdoulaye:17}). By contrast, sentence (\ref{ex:Abdoulaye:18b}) has a reflexive pronoun in a reduced, a full, or a double possessive construction. In all three options, sentence (\ref{ex:Abdoulaye:18b}) contrasts with sentence (\ref{ex:Abdoulaye:18a}) by being more emphatic and, naturally, the more profuse the formal means used, the greater the emphasis. Indeed in appropriate contexts, the emphasis can even imply an exclusive use by the possessor of the possessed object, beyond the state of possession itself. In particular, the double possessive appositional construction, i.e., the 3\textsuperscript{rd} option in (\ref{ex:Abdoulaye:18b}), is the one that mostly implies the exclusive use of the possessed object by the possessor. So, sentence (\ref{ex:Abdoulaye:18c}) expresses - jokingly – the exclusive use meaning and the shorter reflexive constructions cannot be used, as indicated (the expression is used to affectionately greet a familiar – but unrelated - senior person; the senior person greeted can in fact reply \textit{ɗìyaa-taa} \textit{ta} \textit{kâinaa} ‘my dear own “niece”, i.e., other kin relations can be used, but always between unrelated people). To summarize, Hausa likely does not have genuine reflexive adnominal possessives and sentence (\ref{ex:Abdoulaye:18b}) can be compared to English sentences with the emphatic possession marker \textit{own} (see \citealt{Haspelmath2008}:~51 for discussion). 
\subsection{Locatives} \label{sec:Abdoulaye:4.3} Hausa uses basic and derived prepositions to express static locative relations. The derived prepositions are generally homophonous with locational nouns that are formally heads of possessive constructions taking as “possessor” the NP expressing the location ground (see \textit{baya\nobreakdash-n} \textit{iccèe} ‘behind the tree’, lit. ‘back-of.M tree’). Most of these possessive constructions have grammaticalized towards a prepositional phrase structure and no longer have the behavioral properties typical of true possessive constructions (see \citealt{Abdoulaye2018}:~48f). When the location ground NP is coreferential with the subject, a non-reflexive pronoun must be used. This is illustrated in (\ref{ex:Abdoulaye:19}): \ea%19 \label{ex:Abdoulaye:19} \ea \label{ex:Abdoulaye:19a} \gll Ta\textsubscript{1} mayaɍ\_dà yaaròo baaya-n-tà\textsubscript{1}/ *baaya-n kântà\textsubscript{1}.\\ \textsc{3sg.f.rp} return.\textsc{caus} child back-of.\textsc{m-3sg.f} back-of.\textsc{m} \textsc{refl.3sg.f}\\ \glt `She moved the child behind her.’ \ex \label{ex:Abdoulaye:19b} \gll Ka\textsubscript{1}{}-nàa\_dà aikìi gàba-n-kà\textsubscript{1} / *gàba-n kânkà\textsubscript{1}.\\ \textsc{2sg.m}-have work front-of.\textsc{m-2sg.m} { } front-of.\textsc{m} \textsc{refl.2sg.m}\\ \glt `You have much work to do [in front of you].’ \z \z These sentences show that a locative ground NP coreferential with the subject cannot be a reflexive pronoun. There is hence a contrast between locative phrases based on the possessive construction and genuine possessive constructions, which at least optionally admit an emphatic reflexive pronoun. The locative phrases based on the possessive constructions also contrast with locative phrases based on simple prepositions which sometimes allow a reflexive pronoun, as noted by \citet[522f]{Newman2000}. This is illustrated in (\ref{ex:Abdoulaye:20})--(\ref{ex:Abdoulaye:21}): \ea%20 \label{ex:Abdoulaye:20} \ea \label{ex:Abdoulaye:20a} \gll Ta\textsubscript{1} ga wani macìijii kusa \textit{gàree} \textit{tà}\textsubscript{1/2}/ *\textit{gà} \textit{kântà}\textsubscript{1}.\\ \textsc{3sg.f.rp} see one snake near on \textsc{3sg.f} on \textsc{refl.3sg.f}\\ \glt `She saw a snake beside her/herself.’ \ex \label{ex:Abdoulaye:20b} \gll John\textsubscript{1} ya ajè littaafìi neesà dà shì\textsubscript{1/2}/ *kânshì\textsubscript{1}.\\ John \textsc{3sg.m.rp} put.down book away to \textsc{3sg.m} \textsc{refl.3sg.m}\\ \glt `John put a book away from him.’ \z \z \ea%21 \label{ex:Abdoulaye:21} \ea \label{ex:Abdoulaye:21a} \gll Taa\textsubscript{1} shaafà fentìi \textit{gàree} \textit{tà\textsubscript{1/2}}/ \textit{gà} \textit{kântà}\textsubscript{1}.\\ \textsc{3sg.f.cpl} rub paint on \textsc{3sg.f} on \textsc{refl.3sg.f}\\ \glt `She rubbed paint on her/herself.’ \ex \label{ex:Abdoulaye:21b} \gll Sun\textsubscript{1} jaawoo bàɍgoo bisà suu\textsubscript{1/2}/ kânsù\textsubscript{1}.\\ \textsc{3pl.cpl} draw blanket on \textsc{3pl} \textsc{refl.3pl}\\ \glt `They pulled the blanket over them/themselves.’ \z \z In sentences (\ref{ex:Abdoulaye:20}--\ref{ex:Abdoulaye:21}), the particles \textit{gà} ‘on’ (\textit{gàree} before pronoun), \textit{dà} ‘with, and, to’ are basic prepositions (without an evident source). \textit{Bisà} ‘on, on top of’ is derived from the noun \textit{bisà} ‘top, sky’ (see \textit{bisà-n-shì} ‘its top part’ or ‘on it’), but it can be used without possessive marking and behaves like basic prepositions.
Sentences (\ref{ex:Abdoulaye:20}) require a non-reflexive pronoun even when subject-coreference is intended, as indicated by the ungrammaticality of a reflexive pronoun. This may be due to the fact that the sentences express a non-contact locative relation. Although this needs to be investigated more, one can see that in sentences (\ref{ex:Abdoulaye:21}), which express a contact location, a locative NP, which is coreferential with the subject, can be a reflexive or a non-reflexive pronoun. However, in sentences (\ref{ex:Abdoulaye:21}) a non-reflexive pronoun is still the most natural option. \subsection{Benefactives with preposition \textit{don} ‘for’}\label{sec:Abdoulaye:4.4} \sectref{sec:Abdoulaye:4.1} showed that benefactive NPs can be expressed as applied nominals. They can also be expressed as objects of the preposition \textit{don} ‘for, for the sake of’. Under subject-coreference, the benefactive argument is most naturally expressed as a reflexive pronoun, although the non-reflexive pronoun is also possible. This is illustrated in the following (see also \citealt[524f]{Newman2000}): \ea%22 \label{ex:Abdoulaye:22} \ea \label{ex:Abdoulaye:22a} \gll Taa\textsubscript{1} sàyi littaafìi don kântà\textsubscript{1}/ ita\textsubscript{1/2}.\\ \textsc{3sg.f.cpl} buy book for \textsc{refl.3sg.f} \textsc{3sg.f}\\ \glt `She bought a book for herself/for her.’ \ex \label{ex:Abdoulaye:22b} \gll Yaaròo\textsubscript{1} yaa dafà àbinci don kânshì\textsubscript{1}/ shii\textsubscript{1/2}.\\ boy \textsc{3sg.m.cpl} cook food for \textsc{refl.3sg.m} \textsc{3sg.m}\\ \glt `The boy cooked food for himself/for him.’ \ex \label{ex:Abdoulaye:22c} \gll Naa ginà gidaa don kâinaa/ nii.\\ \textsc{1sg.cpl} build house for \textsc{refl.1sg} \textsc{1sg}\\ \glt `I built a house for myself/for me.’ \ex \label{ex:Abdoulaye:22d} \gll (To) don kânkà!/ Don kânshì!/ Don kânsù!\\ OK for \textsc{refl.2sg.m} for \textsc{refl.3sg.m} for \textsc{refl.3pl}\\ \glt `OK, (that’s) your problem!/His problem!/Their problem!’ \z \z In sentences (\ref{ex:Abdoulaye:22a}--\ref{ex:Abdoulaye:22c}) the reflexive pronoun is preferred, even for (\ref{ex:Abdoulaye:22c}) with a 1\textsuperscript{st} person pronoun. When a non-reflexive 3\textsuperscript{rd} person pronoun is used, it is naturally ambiguous between subject-coreference and disjoint reference, as indicated. Examples (\ref{ex:Abdoulaye:22d}) show that the benefactive phrase with the reflexive pronoun can be used as an idiomatic expression (which can be used by a speaker after hearing someone rejecting a sound advice). In this expression, the reflexive pronoun cannot be replaced with a non-reflexive pronoun (i.e., \textit{don} \textit{kuu} would mean ‘for you’, not ‘that’s your problem’). \subsection{Instrumental, associative and other oblique NPs}\label{sec:Abdoulaye:4.5} In \sectref{sec:Abdoulaye:3.1} (see discussion of sentence \ref{ex:Abdoulaye:4}) we saw that causative `Verb\nobreakdash-\textit{dà}’ constructions take true direct objects, which are expressed as reflexive pronouns in subject-coreference contexts. However, \textit{dà} is a multipurpose free particle which, in its basic functions, marks the comitative and the instrumental relations (it also marks ‘and’-conjunction, a function that does not concern us here). In these basic functions, \textit{dà}, like other oblique markers, can optionally take a reflexive complement. 
This is illustrated in (\ref{ex:Abdoulaye:23}): \ea%23 \label{ex:Abdoulaye:23} \ea \label{ex:Abdoulaye:23a} \gll Naa gamàa da nii/ kâina.\\ \textsc{1sg.cpl} include with \textsc{1sg}/ \textsc{refl.1sg}\\ \glt `I included myself.’ \ex \label{ex:Abdoulaye:23b} \gll Balki\textsubscript{1} taa gamàa dà ita\textsubscript{1/2}/ kânta\textsubscript{1}.\\ Balki \textsc{3sg.f.cpl} include with \textsc{3sg.f}/ \textsc{refl.3sg.f}\\ \glt `Balki included her/herself.’ \ex \label{ex:Abdoulaye:23c} \gll Balki\textsubscript{1} taa yi shaawaɍàa gàme dà ita\textsubscript{1/2}/ kânta\textsubscript{1}.\\ Balki \textsc{3sg.f.cpl} do advice about with \textsc{3sg.f} \textsc{refl.3sg.f}\\ \glt `Balki made a proposal concerning her/herself.’ \z \z It may be noted that in (\ref{ex:Abdoulaye:23a}--\ref{ex:Abdoulaye:23b}), the reflexive pronoun is the best option in case of subject-coreference. When a non-reflexive 3\textsuperscript{rd} person pronoun is used, as in (\ref{ex:Abdoulaye:23b}--\ref{ex:Abdoulaye:23c}), it can be coreferential with the subject or refer to another participant. It may also be noted that the reflexive pronouns in (\ref{ex:Abdoulaye:23}) are not emphatic pronouns and one must distinguish them from the adverbial self-intensifier constructions, which are also built with \textit{dà}{}-phrases (see \sectref{sec:Abdoulaye:6.1}). \subsection{Long-distance coreference}\label{sec:Abdoulaye:4.6} When a higher subject is coreferential with an NP in the lower clause, a non-reflexive pronoun is obligatorily used when the second NP is a subject, a direct object, an applied object, or a prepositional object. In fact, the only cases of long-distance reflexives concern a position inside the adnominal possessive construction or a long-distance coreference mediated by an understood lower subject in a non-finite clause. This is illustrated in the following (sentence \ref{ex:Abdoulaye:25b} adapted from \citealt{Newman2000}:~523): \ea%24 \label{ex:Abdoulaye:24} \ea \label{ex:Abdoulaye:24a} \gll Taa\textsubscript{1} azà [(*kântà\textsubscript{1}) ta\textsubscript{1/2}{}-nàa\_dà ìsàssun kuɗii].\\ \textsc{3sg.f.cpl} think \textsc{refl.3sg.f} \textsc{3sg.f}-have enough money\\ \glt `She thought that she had enough money.’ \ex \label{ex:Abdoulaye:24b} \gll Yaa\textsubscript{1} soo Bintà\textsubscript{2} tà \textit{zàaɓee} \textit{shì\textsubscript{1/3}}/ \textit{*zàaɓi} \textit{kânshì}\textsubscript{1}/ zàaɓi kântà\textsubscript{2}.\\ \textsc{3sg.m.cpl} want B. \textsc{3sg.f.sbjv} choose \textsc{3sg.m} choose \textsc{refl.3sg.m} choose \textsc{refl.3sg.f}\\ \glt `He wanted that Binta choose him/*himself/herself.’ \z \z \ea%25 \label{ex:Abdoulaye:25} \ea \label{ex:Abdoulaye:25a} \gll Yaa\textsubscript{1} soo Bintà\textsubscript{2} tà sàyi hòoto-n shì\textsubscript{1/3}/ kânshì\textsubscript{1}.\\ \textsc{3sg.m.cpl} want B. \textsc{3sg.f.sbjv} buy photo-of.\textsc{m} \textsc{3sg.m} \textsc{refl.3sg.m}\\ \glt `He wanted that Binta buy his picture/his own picture.’ \ex \label{ex:Abdoulaye:25b} \gll Abdù\textsubscript{1} yaa tàmbàyi Bintà\textsubscript{2} [hanyà-ɍ [kaarè kânshì\textsubscript{1}/ kântà\textsubscript{2}]].\\ Abdu \textsc{3sg.m.cpl} ask B. way-of.\textsc{f} protect \textsc{refl.3sg.m} \textsc{refl.3sg.f}\\ \glt `Abdu asked Binta how to protect himself/herself.’ \ex \label{ex:Abdoulaye:25c} \gll Abdù\textsubscript{1} yaa tàmbàyi Bintà\textsubscript{2} [hanyà-ɍ [kaarèe shì\textsubscript{1/3}/ tà\textsubscript{2/3}]].\\ Abdu \textsc{3sg.m.cpl} ask B.
way-of.\textsc{f} protect \textsc{3sg.m}/ \textsc{3sg.f}\\ \glt `Abdu asked Binta how to protect himself/herself/him/her.’ \z \z In sentences (\ref{ex:Abdoulaye:24a}-\ref{ex:Abdoulaye:24b}), the coreferential lower subject (pronoun \textit{ta}\nobreakdash- `3SG.F’) and direct object (pronoun \textit{shi} `3SG.M’), respectively, cannot be expressed as reflexive pronouns. By contrast, the coreferential adnominal possessive argument can be a reflexive pronoun but with an emphatic meaning, as seen in (\ref{ex:Abdoulaye:25a}). In sentence (\ref{ex:Abdoulaye:25b}), the main verb is followed by two object NPs. The second NP (in first brackets) contains a possessive construction with \textit{hanyàa} ‘way’ as head and an adnominal non-finite clause (inner brackets). The direct object of the non-finite clause, when coded as a reflexive pronoun, can refer to main subject (\textit{Abdù}) or the main direct object (\textit{Bintà}). In this case, the referent of the main subject or the main direct object would, respectively, be understood to be the agent of the verb \textit{kaarè} ‘protect’. When simple pronouns are used as direct objects of \textit{kaarè}, as seen in (\ref{ex:Abdoulaye:25c}), then these pronouns can refer to Abdu, Binta, or someone else. If the pronoun refers to Abdu, then Abdu cannot be the understood agent of verb \textit{kaarè}, and similarly with Binta. In other words, sentence (\ref{ex:Abdoulaye:25b}) may not illustrate a genuine long-distance coreference (see the discussion in \citealt[14, note 15]{Haspelmath2020a}). \section{Coreference between non-subject arguments} \label{sec:Abdoulaye:5} In Hausa, the coreference between non-subject arguments is most naturally expressed with non-reflexive pronouns or, alternatively, with a reflexive pronoun. The coreference relation can take place between a direct object, an applied object, or a prepositional object on the one hand, and an adnominal possessive pronoun or a prepositional object, on the other hand. This is illustrated in the following (see also \citealt[523]{Newman2000} for similar data): \ea%26 \label{ex:Abdoulaye:26} \ea \label{ex:Abdoulaye:26a} \gll Yaa\textsubscript{1} nuunàa mà Màaɍi\textsubscript{2} \textit{hòoto-n-tà}\textsubscript{2/3}/ \textit{hòoto-n} \textit{kântà}\textsubscript{2}. \\ \textsc{3sg.m.cpl} show \textsc{appl} \textsc{m}. photo-of.\textsc{m-3sg.f} photo-of.\textsc{m} \textsc{refl.3sg.f}\\ \glt `He showed Mary her picture/a picture of herself (her own picture).’ \ex \label{ex:Abdoulaye:26b} \gll Muusaa\textsubscript{1} yaa yii wà Abdù\textsubscript{2} zancee gàme dà shii\textsubscript{1/2/3}/ kânshì\textsubscript{1/2}).\\ Musa \textsc{3sg.m.cpl} do \textsc{appl} A. talk about with \textsc{3sg.m} \textsc{refl.3sg.m}\\ \glt `Musa spoke with Abdu about himself.’ \z \z Sentence (\ref{ex:Abdoulaye:26a}), with the reflexive pronoun \textit{kântà}, implies that the photo likely pictures Mary, whereas this reading is not obligatory with the non-reflexive pronoun \textit{tà}. In (\ref{ex:Abdoulaye:26b}), the (non-emphatic) reflexive pronoun \textit{kânshì} can only refer to either of the nouns, i.e. \textit{Muusaa} or \textit{Abdù.} The non-reflexive pronoun \textit{shii} can refer to either noun or a third understood participant. Sentence (\ref{ex:Abdoulaye:26b}) shows that Hausa reflexive pronouns are not exclusively subject-oriented. 
\section{Self-intensifiers} \label{sec:Abdoulaye:6} We have already seen in \sectref{sec:Abdoulaye:4.2} that adnominal possessive reflexive pronouns can put emphasis on the possessive relation (see \textit{mootàɍ} \textit{kânshì} ‘his own car’). \citet{Newman2000} discusses at length two other emphatic constructions in Hausa that are related to the reflexive constructions and which are referred to in typological studies as the adverbial and the adnominal self-intensifiers (see \citealt[43]{KoenigSiemund1999}). This section is largely based on Newman’s account, although I will use the general terminology. The section presents the two types of constructions, in turn. \subsection{Adverbial self-intensifiers} \label{sec:Abdoulaye:6.1} According to \citet[526]{Newman2000}, what he calls “pseudoemphatic” reflexives are prepositional phrases with the preposition \textit{dà} ‘with, and, to, etc.’ followed by an (apparent) reflexive pronoun which is coreferential with the sentence subject. Semantically, they emphasize the fact that the subject referent did an action or underwent a process on their own, by themselves. This is illustrated in (\ref{ex:Abdoulaye:27}--\ref{ex:Abdoulaye:28}): \ea%27 \label{ex:Abdoulaye:27} \ea \label{ex:Abdoulaye:27a} \gll Yâaraa sun koomàa gidaa dà kâ-n-sù.\\ children \textsc{3pl.cpl} return home with self-of.\textsc{m-3pl}\\ \glt `The children returned home by themselves.’ \ex \label{ex:Abdoulaye:27b} \gll Wutaa taa mutù dà kâ-n-tà.\\ fire \textsc{3sg.f.cpl} die with self-of.\textsc{m-3sg.f}\\ \glt `The fire died out on its own.’ \z \z \ea%28 \label{ex:Abdoulaye:28} \ea \label{ex:Abdoulaye:28a} \gll Yâaraa dà kâ-n-sù su-kà koomàa gidaa.\\ children with self-of.\textsc{m-3pl} \textsc{3pl-rp} return home\\ \glt `The children returned home all by themselves.’ \ex \label{ex:Abdoulaye:28b} \gll Yâaraa sun koomàa gidaa \textit{dà} \textit{gudù}/ \textit{dà} \textit{tàimako-n} \textit{mutàanee}.\\ children \textsc{3pl.cpl} return home with running with help-of.\textsc{m} people\\ \glt `The children returned home running/with help from others.’ \ex \label{ex:Abdoulaye:28c} \gll tàimako-n kâi (dà kâi)\\ help-of.\textsc{m} self with self\\ \glt `self-help (all by oneself)’ \z \z \citet{Newman2000} calls the reflexive-like forms in (\ref{ex:Abdoulaye:27}) “pseudoemphatic” because he believes they are bona fide reflexive pronouns in an adjunct structural position which are coreferential with the subject. He notes that they typically appear near or at the end of the sentence. He also notes that they can be focus-fronted, just like any other clause constituent, as seen in (\ref{ex:Abdoulaye:28a}). Furthermore, (\ref{ex:Abdoulaye:28b}) shows that they can alternate with manner phrases introduced with the same preposition \textit{dà} ‘with, and, to’. Nonetheless, it is clear that the reflexive pronouns in (\ref{ex:Abdoulaye:27}--\ref{ex:Abdoulaye:28}) signal emphasis and should be characterized accordingly. They are indeed used in contexts where a speaker believes the hearer does not expect the subject referent to be able to carry out the action on their own. However, one may not consider them to be true reflexive pronouns. Indeed, example (\ref{ex:Abdoulaye:28c}) shows that \textit{kâi} meaning ‘self’ can appear without an adnominal possessive pronoun, i.e., a coreference with an antecedent noun is not required to mark the emphasis.
These forms are very likely the Hausa instantiation of the adverbial self-intensifiers and can be glossed literally as ‘with self-of-pronoun’, marking more precisely the emphatic meaning ‘with (just) the self, all alone’ (see \citealt[44]{KoenigSiemund2000} who refer to this use of the intensifiers as the exclusive ‘alone’ use; for more on \textit{kâi} as ‘self’ see next section). Sentence (\ref{ex:Abdoulaye:28a}), without the intensifier, would have no implication on how the children returned home. \citet[529]{Newman2000} also notes that for an even greater emphasis, the intensifier can combine with true reflexive pronouns, as seen in (\ref{ex:Abdoulaye:29}): \ea%29 \label{ex:Abdoulaye:29} \ea \label{ex:Abdoulaye:29a} \gll Bintà taa zàrgi kântà dà kâ-n-tà.\\ Binta \textsc{3sg.f.cpl} accuse \textsc{refl.3sg.f} with self-of.\textsc{m-3sg.f}\\ \glt `Binta charged herself knowingly, deliberately.’ \ex \label{ex:Abdoulaye:29b} \gll Sun ƙaaràa wà kânsù kuɗii (suu) dà kâ-n-sù.\\ \textsc{3pl.cpl} augment \textsc{appl} \textsc{refl.3pl} money \textsc{3pl} with self-of.\textsc{m-3pl}\\ \glt `They raised their pay all by themselves, deliberately.’ \z \z Sentences (\ref{ex:Abdoulaye:29a}--\ref{ex:Abdoulaye:29b}) have, respectively, a direct object and an applied object reflexive pronoun combined with the emphatic \textit{dà\nobreakdash-}phrase, here underlining the deliberate aspect of the action. As \citet[527]{Newman2000} notes, an independent pronoun can optionally precede the \textit{dà\nobreakdash-}phrase, as seen in sentence (\ref{ex:Abdoulaye:29b}). In such cases, \citeauthor{Newman2000} proposes that the \textit{dà\nobreakdash-}phrase is not an independent sentence constituent but is simply adjoined to the pronoun. This construction then comes close to the second type of emphatic reflexive pronouns, which \citeauthor{Newman2000} also believes are adnominal adjunctions, and which are presented next. \subsection{The adnominal self-intensifiers} \label{sec:Abdoulaye:6.2} Indeed, according to \citet{Newman2000}, the genuine reflexive-like emphatic pronouns are not sentence-level constituents, that is, they do not fulfill a semantic or syntactic role in the clause. Instead, they always appear in apposition next to a noun or pronoun. Functionally, they seem to signal a scalar ‘even X’/‘X himself’ emphasis or contrast. This is illustrated in the following (see also \citealt[527]{Newman2000}): \ea%30 \label{ex:Abdoulaye:30} \ea \label{ex:Abdoulaye:30a} \gll Bellò (shii) kânshì yaa san bâi\_dà gaskiyaa.\\ Bello \textsc{3sg.m} \textsc{emp.3sg.m} \textsc{3sg.m.cpl} know \textsc{neg.3sg.m}.have truth\\ \glt `[Even] Bello himself knows he is wrong.’ \ex \label{ex:Abdoulaye:30b} \gll Sun ruusà makaɍantâ-ɍ (ita) kântà.\\ \textsc{3pl.cpl} break.up school-\textsc{def} \textsc{3sg.f} \textsc{emp.3sg.f}\\ \glt `They destroyed the school itself.’ \ex \label{ex:Abdoulaye:30c} \gll Ɗàalìbâ-n duk su-kà gudù, àmmaa maalàmî-n shii kânshì ya tsayàa.\\ students-\textsc{def} all \textsc{3pl-pf} run but teacher-\textsc{def} \textsc{3sg.m} \textsc{emp.3sg.m} \textsc{3sg.m.rp} stay\\ \glt `The students all ran away, but the teacher himself stood.’ \z \z In (\ref{ex:Abdoulaye:30a}--b), the self-intensifier follows the modified noun, with an optional (but preferred) pronoun between the two. The pronoun becomes obligatory if the modified noun is omitted or positioned after (or away from) the intensifier (e.g., \textit{shii} \textit{kânshì} ‘he himself’, \textit{shii} \textit{kânshì} \textit{Bellò} ‘Bello himself’). 
Consequently, one can easily formally distinguish the adverbial self-intensifier (see \sectref{sec:Abdoulaye:6.1}) from the adnominal self-intensifier, no matter their position in the sentence (see discussion of \ref{ex:Abdoulaye:31}--\ref{ex:Abdoulaye:32} below). Semantically, the adnominal self-intensifiers seem to primarily signal emphasis and, secondarily, contrast, but both in the background of a scalar context. For example, sentence (\ref{ex:Abdoulaye:30a}) expresses a clear scalar emphasis: i.e., adversaries and all other people, as expected, think Bello is wrong; however, and quite unexpectedly, Bello, too, knows he is wrong. As for sentence (\ref{ex:Abdoulaye:30b}), while it can be used in contexts where no other building was destroyed, it nonetheless supposes an understood scalar background, i.e., if a school can be destroyed, then other less important buildings might as well. This account is then similar to the one given in a number of studies, such as \citet{EdmondsonPlank1978, Primus1992, Kibrik1995}, as cited in \citet[47--48]{KoenigSiemund2000}. \citeauthor{KoenigSiemund2000}, however, reject this type of account, citing as evidence English data on which sentence (\ref{ex:Abdoulaye:30c}) is modeled. They would argue that in (\ref{ex:Abdoulaye:30c}), it is well expected that the referent of the marked noun (\textit{maalàmîn} ‘the teacher’) is the one not afraid to face danger. Nonetheless, for Hausa, it can also be noted that sentence (\ref{ex:Abdoulaye:30c}), like sentences (\ref{ex:Abdoulaye:30a})--(\ref{ex:Abdoulaye:30b}), still has a scalar context: the marked noun refers to an entity situated at the higher end of a scale. The only difference is that sentence (\ref{ex:Abdoulaye:30c}) expresses a contrast (between the scaled entities ‘students’ and ‘teacher’; see also sentence (\ref{ex:Abdoulaye:32b}) below). That the adnominal self-intensifiers may express both emphasis and contrast should not be surprising, since in general focus studies, too, the same formal means can signal various pragmatic situations (such as when a cleft construction is claimed to signal new information focus, contrastive focus, and exhaustive listing focus). Nonetheless, this preliminary account may not extend to other languages like English, or even crosslinguistically, where the uses of the self-intensifiers are more diverse (see \citealt[224]{KoenigGast2006}) than appears to be the case in Hausa (at least pending further data). Adnominal self-intensifiers can be reinforced in a number of ways, for extra emphasis. They can also have idiomatic uses.
This is illustrated in (\ref{ex:Abdoulaye:31})--(\ref{ex:Abdoulaye:32}): \ea%31 \label{ex:Abdoulaye:31} \ea \label{ex:Abdoulaye:31a} \gll Bellò shii dà kâ-n-shì yaa san gaskiyaa.\\ Bello \textsc{3sg.m} with self-of.\textsc{m-3sg.m} \textsc{3sg.m.cpl} know truth\\ \glt `Bello, really he himself, knows the truth.’ \ex \label{ex:Abdoulaye:31b} \gll Bello shii kân\_kânshì yaa san gaskiyaa.\\ Bello \textsc{3sg.m} \textsc{emp-emp.3sg.m} \textsc{3sg.m.cpl} know truth\\ \glt `Bello, really he himself, knows the truth.’ \z \z \ea%32 \label{ex:Abdoulaye:32} \ea \label{ex:Abdoulaye:32a} \gll Wâyyoo mu(u) kânmù!\\ alas \textsc{1pl} \textsc{emp.1pl}\\ \glt `Alas, poor us!’ \ex \label{ex:Abdoulaye:32b} \gll Kee \textit{kânkì}/ \textit{dà} \textit{kâ-n-kì} zaa\_kì kunnà wutaa à nân!\\ \textsc{2sg.f} \textsc{emp.2sg.f} with self-of.\textsc{m-2sg.f} \textsc{fut-2sg.f} light fire at here\\ \glt `How come you [who should know better] would light a fire in this place!’ \z \z In (\ref{ex:Abdoulaye:31a}), the subject noun \textit{Bellò} is followed by a reinforced adnominal self-intensifier \textit{shii} \textit{dà} \textit{kânshì,} which clearly contains the adverbial intensifier \textit{dà} \textit{kânshì} (see \sectref{sec:Abdoulaye:6.1}). The pronoun \textit{shii} is obligatory hence, the noun \textit{Bellò} cannot be followed by just \textit{dà} \textit{kânshì.} Semantically, the modified noun in (\ref{ex:Abdoulaye:31a}) is emphasized, as indicated. Sentence (\ref{ex:Abdoulaye:31b}) shows that adnominal self-intensifiers can be partially repeated (or, more likely, reduplicated prefixally), for an even greater emphasis. The partial repetition/reduplication device seems not to be available to the adverbial self-intensifiers (in fact to no other reflexive or reflexive-like construction). I will follow \citet[527]{Newman2000} in separating out the two formal types of self-intensifiers and globally gloss the adnominal self-intensifiers as ‘EMP’, plus the person features (see also discussion of sentences \ref{ex:Abdoulaye:38} below). Nonetheless, as reported by other researchers (see \citealt[117]{Wolff1993}), it seems that speakers have come to make the two types of self-intensifiers overlap (see sentence \ref{ex:Abdoulaye:31a}, \ref{ex:Abdoulaye:32b}, but also sentence \ref{ex:Abdoulaye:38b} below with its double meaning). Sentences (\ref{ex:Abdoulaye:32}) show that adnominal self-intensifiers can partake in fixed or idiomatic expressions (sentences like \ref{ex:Abdoulaye:32b} are generally used for scolding, i.e., the referent of the pronoun \textit{kèe} ‘2SG.F’, in contrast to all other relevant people, should know that fire should not be lit at the place). In conclusion, Hausa uses forms akin to reflexive pronouns as adverbial and adnominal intensifiers to mark, respectively, the ‘by himself’-action emphasis and the scalar ‘even X’/’X himself’ emphasis or contrast. \section{The meanings of \textit{kâi} ‘head, self’}\label{sec:Abdoulaye:7} In Hausa, as in many other languages in the area,\footnote{See for example \citet[39]{BernardWhite-Kaba1994} for Zarma.} the word for `head’ has many derived meanings, including: `intelligence’, `consciousness’, `mind’, `person’, and `self, oneself’ (see \citealt{Will2019} for a review). 
Indeed, in Hausa the noun \textit{kâi} ‘self, oneself’, independently from the reflexive pronouns in Table~3, can appear alone in many nominal compounds, semi-fixed verbal expressions, and even proverbs.\footnote{Some \textit{kâi}{}-based proverbs one can find in dictionaries and the internet are: \textit{iyà} \textit{ruwa} \textit{fit} \textit{dà} \textit{kâi} ‘saving oneself is the measure of one’s swimming skills’, lit. ‘swimming [is] saving self’ (a proverb used to mean one should first test oneself before claiming an expertise; a variant of which is: \textit{koowaa} \textit{ya} \textit{fid} \textit{dà} \textit{kâi} \textit{naa\nobreakdash-sà} \textit{shii} \textit{nèe} \textit{gwànii} ‘whoever saves himself is the expert’, using a full ‘self that.of.M\nobreakdash-3SG.M’ possessive construction.); \textit{yàbon} \textit{kâi} \textit{jaahilcìi} ‘bragging is shallowness’, lit. ‘praise of self [is] ignorance’; \textit{girman} \textit{kâi} \textit{rawànin} \textit{tsìyaa} ‘pride is destructive’, lit. ‘big-ness of self/head [is] turban of poverty’; \textit{anàa} \textit{ta} \textit{kâi} \textit{bâa} \textit{a} \textit{ta} \textit{kaayaa} ‘one should attend to the most urgent issue first’, lit. ‘while saving the self, one does not care about properties’. The proverbs usually shed the functional words, like copulas (see \citealt[164f]{Newman2000}), the light verb \textit{yi} ‘do’ (see \citealt[171]{Jaggar2001}, \citealt[281]{Newman2000}), or even reduce phonological material (cf. \textit{ruwa} above vs. the full form \textit{ruwaa} ‘water’).} Some of the \textit{kâi}{}-based compounds and idiomatic expressions are illustrated in (\ref{ex:Abdoulaye:33}). \ea%33 \label{ex:Abdoulaye:33} \ea \label{ex:Abdoulaye:33a} \gll àbu-n kâi/ (àbù) na kâi\\ thing-of.\textsc{m} self thing one.of.\textsc{m} self\\ \glt `property, wealth, own item’ \ex \label{ex:Abdoulaye:33b} \gll kiishì-n kâi\\ jealousy-of.\textsc{m} self\\ \glt `self-protection’ \ex \label{ex:Abdoulaye:33c} \gll sô-n kâi\\ loving-of.\textsc{m} self\\ \glt `selfishness’ \ex \label{ex:Abdoulaye:33d} \gll yii ta kâi\\ do one.of.\textsc{f} self\\ \glt `save oneself’ \z \z The expressions in (\ref{ex:Abdoulaye:33a})--(\ref{ex:Abdoulaye:33c}) are compound nouns which, like any noun, can be used independently from any previously mentioned referent (for example as subject in \textit{sôn} \textit{kâi} \textit{yaa} \textit{yi} \textit{yawàa} \textit{gidan} \textit{nàn} ‘there is too much selfishness in this house’, for the compound in (\ref{ex:Abdoulaye:33c}); for a crosslinguistic investigation of the reflexive compounds, see \citealt{Koenig2003}). Sentence (\ref{ex:Abdoulaye:33d}) presents an idiomatic expression. Compounds based on \textit{kâi} ‘self’, both with predictable or less predictable meanings, are numerous. Some frequent examples cited in the dictionaries are: \textit{ɓatàn} \textit{kâi} ‘confusion’, lit. ‘loss of self’; \textit{incìn} \textit{kâi} ‘independence, autonomy’; \textit{sanìn} \textit{ciiwòn} \textit{kâi} ‘self-care’, lit. ‘knowing of pain of self’ (cf. also \textit{ciiwòn} \textit{kâi} ‘headache’); \textit{girman} \textit{kâi} ‘pride, vanity’, lit. ‘big-ness of self’ (though this may also be ‘big-ness of head’); \textit{jîn} \textit{kâi}, ‘pride, vanity’ lit. ‘feeling of self’; \textit{sâa} \textit{kâi} ‘volunteerism’, lit. ‘putting self’ (cf. \textit{aikìn} \textit{sâa} \textit{kâi} ‘voluntary work’); etc. 
These expressions and compounds can sometimes keep their idiomatic reading even when \textit{kâi} is adjoined a possessive pronoun (e.g., \textit{kâ-n-shì} ‘self-of-3SG.M’) referring to the sentence subject. This is illustrated in (\ref{ex:Abdoulaye:34})--(\ref{ex:Abdoulaye:35}): \ea%34 \label{ex:Abdoulaye:34} \ea \label{ex:Abdoulaye:34a} \gll Yaara su-kà yi ta kâ-n-sù.\\ children \textsc{3pl-rp} do one.of.\textsc{f} self-of.\textsc{m-3pl}\\ \glt `The children bolted away/escaped threat.’ OR `The children did their own [chair].’ (i.e., ‘they made one [chair] for themselves’) \ex \label{ex:Abdoulaye:34b} \gll Koo-waa yà yi ta kâ-n-shì!\\ even-who \textsc{3sg.m.sbjv} do one.of.\textsc{f} self-of.\textsc{m-3sg.m}\\ \glt `Every man for himself!’ (cf. Fr. \textit{sauve-qui-peut!}’); OR `May every one make his own [chair].’ ‘May every one follow his own way.’ \z \z \ea%35 \label{ex:Abdoulaye:35} \ea \label{ex:Abdoulaye:35a} \gll Abdù yaa nuunà irì-n [kiishì-n kâ]-n-shì.\\ Abdu \textsc{3sg.m.cpl} show type-of.\textsc{m} protection-of.\textsc{m} self-of.\textsc{m-3sg.m}\\ \glt `Abdu displayed his art of self-protection.’ \ex \label{ex:Abdoulaye:35b} \gll Abdù, à yi kiishì-n kâi/ *kâ-n-kà!\\ Abdu \textsc{imprs.sbjv} do protection-of.\textsc{m} self/ self-of.\textsc{m-2sg.m}\\ \glt `Abdu, you should protect yourself.’ \ex \label{ex:Abdoulaye:35c} \gll Abdù, kà yi kiishì-n kâ-n-kà!\\ Abdu, \textsc{2sg.m.sbjv} do protection-of.\textsc{m} self-of.\textsc{m-2sg.m}\\ \glt `Abdu, you should protect yourself.’ \z \z Sentences (\ref{ex:Abdoulaye:34}) illustrate the expression \textit{yi} \textit{ta} \textit{kâi} ‘save self’ given in \REF{ex:Abdoulaye:33d}. In both sentences (\ref{ex:Abdoulaye:34a}--\ref{ex:Abdoulaye:34b}) the idiomatic meaning is still recoverable even though \textit{kâi} is adjoined a possessive pronoun referring to the subject. The sentences however are ambiguous, with possible true reflexive readings, as indicated. Sentence (\ref{ex:Abdoulaye:35a}) shows that the compound \textit{kiishìn} \textit{kâi} ‘self-protection’, too, can take an adnominal possessive pronoun (see also \textit{irìn} \textit{[kiishìn} \textit{kâ]n} \textit{Abdù} ‘Abdu’s way in self-protection’, with an adnominal possessive noun). The compound structure is also clear in (\ref{ex:Abdoulaye:35b}) where an impersonal subject-pronoun occurs with a specified referent, yet the sentence cannot license an adnominal possessive pronoun. However, with a matching 2\textsuperscript{nd} person subject-pronoun, as in (\ref{ex:Abdoulaye:35c}), an adnominal possessive pronoun is possible and one gets a typical reflexive construction, no matter how one might analyze the sequence \textit{kiishì-n} \textit{kâ-n-kà} (as a compound ‘self-protection of you’, or as a reflexive pronoun ‘protection of yourself’). 
The typical reflexive reading is more easily available when the compound or fixed expression has a transparent meaning, as seen in the following case (examples adapted from \citealt[523]{Newman2000}): \ea%36 \label{ex:Abdoulaye:36} \ea \label{ex:Abdoulaye:36a} \gll Abdù yaa tàmbàyi Bintà hanyà-ɍ kaarè kâi.\\ Abdu \textsc{3sg.m.cpl} ask Binta way-of.\textsc{f} protect self\\ \glt `Abdu asked Binta about how to protect oneself [way of self-protection].’ \ex \label{ex:Abdoulaye:36b} \gll Abdù yaa faɗàa wà Bintà hanyà-ɍ kaarè kânshì/ kântà.\\ Abdu \textsc{3sg.m.cpl} tell \textsc{appl} Binta way-of.\textsc{f} protect \textsc{refl.3sg.m} \textsc{refl.3sg.f}\\ \glt `Abdu told Binta about how to protect himself/herself.’ \z \z In (\ref{ex:Abdoulaye:36a}), with the bare expression \textit{kaarè} \textit{kâi} ‘self-protection’, the person that needs to protect themselves can be Abdu, Binta, or some other person, while in (\ref{ex:Abdoulaye:36b}), with a reflexive pronoun, either Abdu (with \textit{kânshì}) or Binta (with \textit{kântà}) is referred to, in a typical reflexive construction. Other semantically transparent \textit{kâi}{}-based compounds and expressions are: \textit{kaa\_dà} \textit{kâi} ‘falling all by oneself [self-defeat]’; \textit{kashè} \textit{kâi} ‘suicide’ (lit. ‘kill self’, cf. \textit{kisà\nobreakdash-n} \textit{kâi} ‘murder’, lit. ‘killing\nobreakdash-of head/person’); \textit{bìncìken} \textit{kâi} ‘self-exploration’; \textit{àmfàanin} \textit{kâi} ‘self-benefit’ (i.e., doing something for one’s own sake); \textit{tàimakon} \textit{kâi} ‘self-help’, etc. Some of these can be reinforced with the ‘by himself’ adverbial intensifiers seen in \sectref{sec:Abdoulaye:6.1}: \textit{bìncìken} \textit{kâi} \textit{dà} \textit{kâi} lit. ‘self-exploration by self’, \textit{tàimakon} \textit{kâi} \textit{dà} \textit{kâi} lit. ‘self-help by self’ (see also \citealt[523]{Newman2000}). As suggested already in \sectref{sec:Abdoulaye:6.1}, these reinforced compounds show that both \textit{dà} \textit{kâi} and \textit{dà} \textit{kânshì} can mark the ‘by himself’ emphasis. Finally, there is at least one case where \textit{kâi} ‘self’ appears embedded in typical reflexive constructions, i.e., when the plural form \textit{kaawunàa} ‘selves’ is used, as seen in the following (sentence \ref{ex:Abdoulaye:37a} from a radio broadcast and \ref{ex:Abdoulaye:37b} from \citealt[383]{Jaggar2001}; see also \citealt[45]{Abdoulaye2018}): \ea%37 \label{ex:Abdoulaye:37} \ea \label{ex:Abdoulaye:37a} \gll ...na aamulàa dà tsaftàa dà kuma kaarè kaawunà-n-mù dàgà cî-n naamà-n ɓeeràayee...\\ one.of.\textsc{m} practice with hygiene and also protect selves-of.\textsc{pl-1pl} from eating-of.\textsc{m} meat-of.\textsc{m} rodents\\ \glt `[appeals made to us] for practicing hygiene and protecting [restraining] ourselves from eating rodents...’ \ex \label{ex:Abdoulaye:37b} \gll Zaa\_mù wankè kaawunà-n-mù dàgà zàrgi-n dà a-kèe ma-nà.\\ \textsc{fut-1pl} clear selves-of.\textsc{pl-1pl} from charge{}-of.\textsc{m} that \textsc{imprs.ri} \textsc{appl-1pl}\\ \glt `We will clear ourselves of the accusation against us.’ \ex \label{ex:Abdoulaye:37c} \gll Ɗaya baayan ɗaya, su-kà zwaagè kaawunà-n-sù dàgà haɍakà-ɍ.\\ one after one \textsc{3pl-rp} extract selves-of.\textsc{pl-3pl} from matter-\textsc{def}\\ \glt `One by one, they extracted themselves from the matter.’ \z \z Sentences (\ref{ex:Abdoulaye:37}), with the plural form \textit{kaawunàa} ‘selves’, have a special semantics.
Indeed, they tend to imply individualized actions by many people. This is clear in sentences (\ref{ex:Abdoulaye:37a}) and (\ref{ex:Abdoulaye:37c}), where it is understood that people performed the action separately and at various times. According to \citet[485]{Newman2000}, the building of the reflexive pronouns uses only the singular \textit{kâi} and this claim would be true if indeed it applies only to the reflexive pronouns that solely mark coreference between arguments, that is, without an added semantics or an emphasis. Indeed, if the regular reflexive pronoun \textit{kânmù} ‘ourselves’ (lit. ‘our-self’) is used in (\ref{ex:Abdoulaye:37a})--(\ref{ex:Abdoulaye:37b}), as is possible, then the sentences would not have the individualized actions reading. Although most Hausa researchers assume that the reflexive pronouns are directly based on the meaning ‘head’ (see \citealt[74]{Caron1991}, ;\citealt[529]{Newman2000}; \citealt[413]{Jaggar2001}; \citealt[147f]{Pawlak2014}; for a general proposal in this regard see \citealt[32f,109f]{Faltz1985}), a few sources have instead explicitly linked the reflexive pronouns with \textit{kâi} meaning ‘self’ (e.g., \citealt[117]{Wolff1993}; \citealt[161]{Will2019}). The data presented in this section show indeed that the meaning of ‘self’ may be relevant for an account of the development of the typical reflexive pronouns. Self-intensifier forms, too, are sometimes evoked as possible source of reflexive pronouns (see \citealt[44]{KoenigSiemund2000}; \citealt[105f]{Schladt2000}; and \citealt[22]{Haspelmath2020a} for discussions) and this proposal may be relevant for Hausa as well. We have seen in \sectref{sec:Abdoulaye:6} that Hausa has two types of self-intensifiers. There is some evidence in Katsinanci dialect that adnominal self-intensifiers are formally closer to typical reflexive pronouns than adverbial self-intensifiers. Indeed, adnominal self-intensifiers and reflexive pronouns tend to have less flexibility in their choice of the 3\textsuperscript{rd} person masculine singular pronoun variants, as given in Table~\ref{tab:Abdoulaye:2}, and so contrast with adverbial self-intensifiers and the \textit{kai} ‘self’ found in compounds and idiomatic expressions, as seen in (\ref{ex:Abdoulaye:38}): \ea%38 \label{ex:Abdoulaye:38} \ea \label{ex:Abdoulaye:38a} \gll Koo-waa yà yi ta kâ-n-shì/ kâ-n-yà/ kâi-nâ-i!\\ even-who \textsc{3sg.m.sbjv} do one.of.\textsc{f} self-of.\textsc{m-3sg.m} self-of.\textsc{m-3sg.m} self-of.\textsc{m-3sg.m}\\ \glt `Every man for himself!’ (cf. 
sentence \ref{ex:Abdoulaye:34b} above) \ex \label{ex:Abdoulaye:38b} \gll Bello yaa jee makaɍantâ-ɍ dà kâ-n-shì/ kâ-n-yà/ kâi-nâ-i.\\ Bello \textsc{3sg.m.cpl} go school-\textsc{def} with self-of.\textsc{m-3sg.m} self-of.\textsc{m-3sg.m} self-of.\textsc{m-3sg.m}\\ \glt`Bello went to the school by himself.’ (Also: ‘Bello himself went to the school.’) \ex \label{ex:Abdoulaye:38c} \gll Bello yaa ga kânshì/ ?kânyà/ ?kâinâi cikin maduubii.\\ Bello \textsc{3sg.m.cpl} see \textsc{refl.3sg.m} \textsc{refl.3sg.m} \textsc{refl.3sg.m} in mirror\\ \glt `Bello saw himself in the mirror.’ \ex \label{ex:Abdoulaye:38d} \gll Bello shii kânshì/ ?kânyà/ *kâinai yaa san gaskiyaa.\\ Bello \textsc{3sg.m} \textsc{emp.3sg.m} \textsc{emp.3sg.m} \textsc{emp.3sg.m} \textsc{3sg.m.cpl} know truth\\ \glt `Bello himself knows the truth.’ \ex \label{ex:Abdoulaye:38e} \gll Bello shii kân\_kânshì/ *kân\_kânyà/ *kân\_kâinai yaa san gaskiyaa.\\ Bello \textsc{3sg.m} \textsc{emp-emp.3sg.m}/ \textsc{emp-emp.3sg.m} \textsc{emp-emp.3sg.m} \textsc{3sg.m.cpl} know truth\\ \glt `Bello, really he himself, knows the truth.’ \z \z As shown in Table~\ref{tab:Abdoulaye:2}, Katsinanci dialect has four reduced variants for the 3\textsuperscript{rd} person masculine singular possessive pronoun, three of which are relevant for our discussion here (the \textit{kâi-na-s} ‘his head’ variant is marginal even for typical possessive constructions). All speakers consulted agree without hesitation that the three variants are grammatical with \textit{kâi} ‘self’, as seen in (\ref{ex:Abdoulaye:38a}), and with the adverbial self-intensifiers, as seen in sentence (\ref{ex:Abdoulaye:38b}). This result, together with the fact that \textit{dà} \textit{kâi}, lit. ‘by self’, can alone mark emphasis (e.g., \textit{bìncìken} \textit{kâi} \textit{dà} \textit{kâi} lit. ‘self-exploration by self’), supports analyzing the ‘by himself’ emphatic constructions as having the literal comitative meaning ‘with (just) his self’, i.e. ‘alone’. By contrast, speakers are less firm in their judgments with the reflexive pronouns and the adnominal self-intensifiers. All speakers consulted immediately favor the form \textit{kânshì} for both constructions, as seen in (\ref{ex:Abdoulaye:38c})--(\ref{ex:Abdoulaye:38d}), respectively. Most consulted speakers tolerate \textit{kânyà} for both constructions. By contrast, \textit{kâinâi} is acceptable for the reflexive pronouns but is rejected by most speakers for the adnominal self-intensifiers. Finally, for all consulted speakers, in sentence (\ref{ex:Abdoulaye:38e}), the adnominal intensifier reinforced with partial repetition/reduplication (see sentence \ref{ex:Abdoulaye:31b} above) can only have the \textit{kânshì} form. \section{Conclusion} \label{sec:Abdoulaye:8} This contribution has shown that Hausa distinctively marks coreference between the subject and another NP in the same minimal clause using reflexive pronouns formally based on the possessive construction ‘\textit{kâi} + \nobreakdash-n + Pronoun’, lit. ‘self + of + Pronoun’, where the pronoun is coreferential with the clause subject (or sometimes with a preceding direct object or applied object). Subject-coreferential direct objects are almost always expressed as reflexive pronouns (with the exception of the direct objects of some mental and sensation verbs). Subject-coreferential applied objects are also always expressed as reflexive pronouns, except for the 1\textsuperscript{st} and 2\textsuperscript{nd} persons, where a non-reflexive pronoun is possible. 
Subject-coreferential locative NPs are always expressed as simple pronouns with prepositions derived from location nouns, but they can also be reflexive pronouns with simple, non-derived prepositions. Similarly, prepositional phrases with \textit{dà} ‘with, and’ basically accept simple pronouns, but they also allow the reflexive pronouns, particularly in the 3\textsuperscript{rd} person. Subject-coreferential possessive NPs can optionally be expressed as reflexive pronouns but they then have a special ‘own’-emphasis on the possessive relation. The chapter also described three different constructions that are related to the typical reflexive constructions: compounds and semi-fixed expressions involving \textit{kâi} ‘self’, adverbial self-intensifiers marking the ‘by himself’ emphasis, and adnominal self-intensifiers marking the scalar ‘even X’/’X himself’ emphasis and contrast. These three constructions may be relevant for an account of the origin of the typical reflexive pronouns in Hausa. \section*{Notes and abbreviations} The data discussed in this paper are based on Katsinanci dialect. Katsinanci was the dialect of precolonial Katsina State, the territory of which today straddles the border between the Republic of Niger (towns of Maradi and Tessaoua) and the Federal Republic of Nigeria (town of Katsina). It is in a central position between the two main Hausa dialectal clusters, the western and the eastern dialects, but it shares more features with the western dialects (see \citealt[7]{Wolff1993}; \citealt[1]{Newman2000}). The transcription in this chapter follows the Hausa orthography, with some changes. Long vowels are represented as double letters, low tone as grave accent and falling tone as circumflex accent. High tone is unmarked. The symbol `ɍ{}' represents an alveolar trill distinct from the flap `r'. Final ‘ɍ’ generally assimilates to the following consonant. Written `f' is pronounced [h] (or [hw] before [a]) in Katsinanci and other western dialects. The abbreviations are: \begin{tabularx}{.45\textwidth}[t]{lQ} 1, 2, 3 & 1st, 2nd, 3rd person\\ \textsc{appl} & applicative\\ \textsc{cpl} & completive\\ \textsc{def} & definiteness\\ \textsc{emp} & emphasis\\ \textsc{f} & feminine\\ \textsc{fut} & future\\ \textsc{imprs} & impersonal\\ \textsc{ipfv} & imperfective\\ \textsc{m} & masculine\\ \textsc{neg} & negative\\ \textsc{np} & noun phrase\\ \textsc{pl} & plural\\ \textsc{refl} & reflexive\\ \textsc{ri} & relative imperfective\\ \textsc{rp} & relative perfective\\ \textsc{sg} & singular\\ \textsc{sbjv} & subjunctive\\ \end{tabularx} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figures/Hausa1.png} \caption{Hausa language primary area (from \citealt{Newman2000})} \label{fig:Abdoulaye:1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figures/Hausa2.png} \caption{Hausa dialectal areas \textmd{(line = Niger/Nigeria border; from \citealt{Wolff1993})}} \label{fig:Abdoulaye:2} \end{figure} %%please move the includegraphics inside the {figure} environment %%\includegraphics[width=\textwidth]{figures/Hausareflexivesabdoulaye20200526-img001.jpg} %%please move the includegraphics inside the {figure} environment %%\includegraphics[width=\textwidth]{figures/Hausareflexivesabdoulaye20200526-img002.jpg} %\section*{Acknowledgements} {\sloppy\printbibliography[heading=subbibliography,notkeyword=this]} \end{document}
{ "alphanum_fraction": 0.729344794, "avg_line_length": 88.9868819374, "ext": "tex", "hexsha": "5cac035ddbb118f315579dbd9f9d0b5c06f071f2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/284", "max_forks_repo_path": "chapters/04_Abdoulaye_Hausa.tex", "max_issues_count": 1593, "max_issues_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_issues_repo_issues_event_max_datetime": "2021-06-30T00:15:49.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-21T14:46:34.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/284", "max_issues_repo_path": "chapters/04_Abdoulaye_Hausa.tex", "max_line_length": 2803, "max_stars_count": null, "max_stars_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/284", "max_stars_repo_path": "chapters/04_Abdoulaye_Hausa.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 30470, "size": 88186 }
% Latex template: https://github.com/mqTeXUsers/Macquarie-University-Beamer-Theme % Slide Masters: % Title % Text % 2 column % Full-image % Bibliography % Closing \documentclass[aspectratio=43, 11pt]{beamer} % Aspect ratio % https://tex.stackexchange.com/a/14339/5483 % Possible values: 1610, 169, 149, 54, 43 and 32. % 169 = 16:9 \PassOptionsToPackage{table}{xcolor} %https://tex.stackexchange.com/a/5365/5483 \usetheme{macquarie} \usepackage{multicol} % https://tex.stackexchange.com/a/396018/5483 \usepackage{xurl} \usepackage[british]{babel} % Set language % \usepackage[utf8x]{inputenc} % Set encoding \usepackage{colortbl} \mode<presentation> % Set options { \usetheme{default} % Set theme \usecolortheme{default} % Set colors \usefonttheme{default} % Set font theme \setbeamertemplate{caption}[numbered] % Set caption to be numbered } % Uncomment this to have the outline at the beginning of each section highlighted. %\AtBeginSection[] %{ % \begin{frame}{Outline} % \tableofcontents[currentsection] % \end{frame} %} \usepackage{graphicx} % For including figures \usepackage{booktabs} % For table rules \usepackage{hyperref} % For cross-referencing \usepackage{enumitem} % https://tex.stackexchange.com/a/2292/5483 %https://tex.stackexchange.com/a/371844/5483 \setbeamerfont{bibliography entry author}{size=\tiny} \setbeamerfont{bibliography entry title}{size=\tiny} \setbeamerfont{bibliography entry location}{size=\tiny} \setbeamerfont{bibliography entry note}{size=\tiny} \setbeamerfont{bibliography item}{size=\tiny} %https://tex.stackexchange.com/q/333587/5483 %TODO SHAWN REPLACE OSF URL %\setbeamertemplate{footline}{\strut~\texttt{https://github.com/MQ-FOAR705/MQ-FOAR705-Week1}\hfill\insertframenumber~/~\inserttotalframenumber\strut~~~} \title{FOAR705 Week 2} % Presentation title \author{Brian Ballsun-Stanton | Shawn A Ross | Kathryn Elliot} % Presentation author \institute{Faculty of Arts} % Author affiliation \date{Friday 09 August 2019} % Today's date \begin{document} % Title page % This page includes the informations defined earlier including title, author/s, affiliation/s and the date % \begin{frame}[noframenumbering] \maketitle % \end{frame} \begin{frame}{Today's Plan} \tableofcontents \end{frame} \section{The Learning Journal} \begin{frame}{Learning Journals} \begin{itemize}[label=\textbullet] \item To serve as a Laboratory Notebook -- a record of: \begin{itemize}[label=\textbullet] \item Thoughts \item Intentions \item Results \end{itemize} \item Mechanism for showing your work \item Reminders of common mistakes and solutions \item Demonstration of growth over time \end{itemize} \end{frame} \begin{frame}{Grading Overview} Percentages: \begin{itemize}[label=\textbullet] \item Exercise Documentation (70) \item Committing your work (10) \item Error Reflection and Solution (20) \end{itemize} Dates: \begin{itemize}[label=\textbullet] %(Weeks 4, 6, 8, 13) \item Spreadsheets Learning Journal (with all exercises complete) due before class, 23 August \item Shell Scripts learning Journal due (with all exercises) before class, 6 September \item Open Refine learning Journal due (with all exercises) before class, 4 October \item R Learning Journal Due (up to dplyr and tidyr) before class, 8 November \end{itemize} \end{frame} \begin{frame}{Learning Journal pattern} All technical work outside of class, and all Carpentry exercises should be recorded in a document in cloudstor. For each discrete action taken. (Exercise, part of exercise, command run, code changed, code added, etc...) 
\begin{itemize}[label=\textbullet] \item The intention of the action: ``What do you intend to be the result of the action you are about to take'' \item The specifics of the action taken: Timestamp, commands or actions \item Results: note success or failure, screenshot or copy-paste or summary, error states \item Marginal notes for improvements on what to do or how to think about the idea more effectively \end{itemize} \textbf{Documentation of errors is critical for success} % , , and the results, along with any marginal notes for improvement or updating your mental model of what should have had happened % All technical work outside of class should be recorded in a laboratory notebook (document kept with the code or on cloudstor) which documents the intention of the action, the specifics of the action taken, and the results, along with any marginal notes for improvement or updating your mental model of what should have had happened. This documentation includes: an answer to the specific objective: "What do you intend to be the result of the action you are about to take", the action to be taken containing timestamp, and commands or actions performed, and the result, documenting what happened, success or failure in relation to the objective, and error states. Documentation of errors (each in its own entry) and their remediation are strongly encouraged. \end{frame} \begin{frame}{Example} \begin{figure}[H] \centering \includegraphics[height=.75\textheight]{figures/anaScreenshot.png} \caption{Screenshot from a Software Carpentry Learning Journal} \label{fig:screenshotAna} \end{figure} \end{frame} \begin{frame}{What is an HD?} \begin{itemize}[label=\textbullet] \item Errors are documented along with steps to recover, just like any other action in the lab notebook. \item All entries have an objective articulated before they are run. \item Commands and code are clearly indicated for future reference. \item Results are clearly documented, and contain a minimal amount of self-reflective feedback to inform the next attempt. \end{itemize} \end{frame} \begin{frame}{Committing code and exercises} For Carpentries exercises or other experiments (code not directly related to your proof of concept), make a repository (public or private) inside the MQ-FOAR705 organisation on Github. Commit code or outputs (reorganised sheets, etc) to that repository. An HD: \begin{itemize}[label=\textbullet] \item Commit messages are clear and have useful descriptions, not just summary lines. \item Files and directories are organised consistently in a fashion which allows for easy command line navigation and sorting (no spaces or other problematic characters). \item All work, not just technical, is committed to an appropriate repository. \end{itemize} % , , and the results, along with any marginal notes for improvement or updating your mental model of what should have had happened % All technical work outside of class should be recorded in a laboratory notebook (document kept with the code or on cloudstor) which documents the intention of the action, the specifics of the action taken, and the results, along with any marginal notes for improvement or updating your mental model of what should have had happened. 
This documentation includes: an answer to the specific objective: "What do you intend to be the result of the action you are about to take", the action to be taken containing timestamp, and commands or actions performed, and the result, documenting what happened, success or failure in relation to the objective, and error states. Documentation of errors (each in its own entry) and their remediation are strongly encouraged. \end{frame} \begin{frame}{Error Reflection and Solutions} Document your errors and how you've found solutions. An HD: \begin{itemize}[label=\textbullet] \item Using the error documentation in the labratory notebook as the basis, a library of common errors, solutions, and ways to find solutions is built for future reference. \item These solutions contain links to good internet resources, and useful rules of thumb for where best to find assistance. \end{itemize} \end{frame} \section{Data Carpentry} \begin{frame}{Shared notes} Shared notes document for the entire unit. Put questions, thoughts or observations from readings or prior class onto page 2. The person who isn't lecturing will try to answer them inline. \begin{figure}[H] \centering \includegraphics[height=.6\textheight]{figures/cloudstorqr.png} \caption{Cloudstor: \url{http://bit.ly/2YW1Owm}} \label{fig:programmed} \end{figure} \end{frame} \begin{frame}{Introduction} Guiding question for this episode: What are basic principles for using spreadsheets for good data organisation? \end{frame} \begin{frame}{Introduction - Exercise} In cloudstor shared document: \begin{itemize}[label=\textbullet] \item How many people have used spreadsheets in their research? \item How many people have accidentally done something that made them frustrated or sad? \end{itemize} \end{frame} \begin{frame}{Formatting data tables in Spreadsheets} \begin{columns} \begin{column}{0.5\textwidth} \begin{figure}[H] \centering \includegraphics[height=.6\textheight]{figures/multiple-info.png} \caption{Data Carpentry: combined info (CC-BY)} \label{fig:combined} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure}[H] \begin{center} \includegraphics[height=.6\textheight]{figures/single-info.png} \caption{Data Carpentry: single info (CC-BY)} \label{fig:single} \end{center} \end{figure} \end{column} \end{columns} \end{frame} \begin{frame}{Exercise} We’re going to take a messy version of the SAFI data and describe how we would clean it up. \begin{itemize}[label=\textbullet] \item Download the messy data. \item Open up the data in a spreadsheet program. \item Notice that there are two tabs. Two researchers conducted the interviews, one in Mozambique and the other in Tanzania. They both structured their data tables in a different way. Now, you’re the person in charge of this project and you want to be able to start analyzing the data. \item With the person next to you, identify what is wrong with this spreadsheet. Discuss the steps you would need to take to clean up the two tabs, and to put them all together in one spreadsheet. \item Document your group's thoughts in the cloudstor shared document. \end{itemize} \textbf{Important} Do not forget our first piece of advice, to create a new file (or tab) for the cleaned data, never modify your original (raw) data. After you go through this exercise, we’ll discuss as a group what was wrong with this data and how you would fix it. \end{frame} \begin{frame}{Metadata} ``Data about data'' Exercise (maybe): Download a clean version of this dataset and open the file with your spreadsheet program. 
This data has many more variables that were not included in the messy spreadsheet and is formatted according to tidy data principles. Discuss this data with a partner and make a list of some of the types of metadata that should be recorded about this dataset. It may be helpful to start by asking yourself, “What is not immediately obvious to me about this data? What questions would I need to know the answers to in order to analyze and interpret this data?” \end{frame} \begin{frame}{Git clients} \begin{itemize}[label=\textbullet] \item Web interface (demonstrate now) \item desktop.github.com client \item other clients \end{itemize} Exercise: commit the results (a text file containing your thoughts and the original data) of the cleanup exercise to your own repository on Github now. \end{frame} \begin{frame}{Homework} Before class next week, in your learning journal, finish reading to ``Formatting Problems'' and document the two exercises we've (hopefully) done today in your Github repository. Also in your learning journal, find an example of each problem in data produced by your discipline. (Bonus points if you can find these problems in published datasets in your discipline). \end{frame} \begin{frame}{Breaktime} 5 minute break \end{frame} \section{Research design 101} \begin{frame}{Articulating research design} Robust research requires articulation of an explicit research design \begin{itemize}[label=\textbullet] \item Deductive vs. Inductive vs. Abductive \item Idiographic vs. Nomothetic \end{itemize} Each type is valuable, but you must recognise what you are doing and not conflate them. \end{frame} \section{Proof of Concept scoping 101} \begin{frame}{What is scoping?} To ensure that a product actually solves user problems, software development begins with `Business Analysis' (BA), where user requirements are enumerated. But how do we gather requirements and act on them to produce a solution? It is surprisingly difficult to learn what clients (even yourself!) really need and want \end{frame} \begin{frame}{Approach \#1: Ask what people want (opinions)} \begin{figure}[Top-down-planning] \centering \includegraphics[height=.75\textheight]{figures/Archaeologists-standards.png} \caption{Archaeologists contemplate data standards (FAIMS Stocktaking, 2012)} \label{fig:standards} \end{figure} \end{frame} \begin{frame}{Approach \#2: Ask or observe what people do (facts)} Borrowed from 'Lean startup' methodology. All materials are on Cloudstor or available from \url{https://www.strategyzer.com/} (free registration required) The overview document is the 'Value Proposition canvas', but see also the 'Mission Model canvas' for an indication of how the approach can be applied outside tech industry settings. 
\end{frame} \begin{frame}{Understand your client} See the 'Customer Profile' worksheet \begin{itemize}[label=\textbullet] \item Identify 'jobs' (see 'A day in the life worksheet') \item Identify 'pains' (see 'Customer Pains trigger questions') \item Identify 'gains' (see 'Customer Gains trigger questions') \end{itemize} \end{frame} \begin{frame}{Ideate a solution} After you have completed the 'Customer Profile' worksheet, look at the 'Value Map' worksheet \begin{itemize}[label=\textbullet] \item Identify 'pain relievers' that map to pains (see 'Pain Relievers trigger questions') \item Identify 'gains creators' that map to gains (see 'Gain Creator trigger questions') \item Finally, you can articulate 'Products and Services' - what you are going to build \end{itemize} \end{frame} \section{Project management 101} \begin{frame}{Developing ideas towards a solution} So you have great ideas, what next? \end{frame} \begin{frame}{Approach \#1: Top-down design (`Waterfall')} \begin{figure}[Waterfall] \centering \includegraphics[height=.75\textheight]{figures/waterfall.png} \caption{Traditional and linear approach} \cite{Parody2018-if} \label{fig:6} \end{figure} \end{frame} \begin{frame}{Approach \#2: Iterative design with course corrections (`Agile')} \begin{figure}[Agile] \centering \includegraphics[height=.75\textheight]{figures/agile.png} \caption{Design-test-repeat approach to PM} \cite{Parody2018-if} \label{fig:7} \end{figure} \end{frame} \begin{frame}{Manifesto for Agile Software Development} We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: \begin{itemize}[label=\textbullet] \item \textbf{Individuals and interactions} over processes and tools \item \textbf{Working software} over comprehensive documentation \item \textbf{Customer collaboration} over contract negotiation \item \textbf{Responding to change} over following a plan \end{itemize} That is, while there is value in the items on the right, we value the items on the left more. \cite{Atlassian2019-xl} \end{frame} \begin{frame}{Tool \#1: Gantt chart} \begin{figure}[Gantt] \centering \includegraphics[height=.75\textheight]{figures/gantt.png} \caption{A Gantt Chart template (on Cloudstor)} \label{fig:8} \end{figure} \end{frame} \begin{frame}{Tool \#2: Kanban board} \begin{figure}[Kanban] \centering \includegraphics[height=.75\textheight]{figures/kanban.png} \caption{Schematic Kanban board \cite{Atlassian2019-bo}} \label{fig:9} \end{figure} \end{frame} \begin{frame}{A Kanban board should} \begin{itemize}[label=\textbullet] \item Visualise your work \item Limit work in progress \end{itemize} Kanban boards often have columns like: Backlog (wish list), To do, In progress, Done, with the 'To do' and 'In progress' columns having work limits (e.g., 3-5 tasks). Trello is a popular application for Kanban. Atlassian has Kanban learning materials online \cite{Atlassian2019-bo}. \end{frame} % Outline % This page includes the outline (Table of content) of the presentation. All sections and subsections will appear in the outline by default. % \begin{frame}{The context of Research Data Management} % \tableofcontents % \end{frame} % % The following is the most frequently used slide types in beamer % % The slide structure is as follows: % % % %\begin{frame}{<slide-title>} % % <content> % %\end{frame} % \section{Code of Conduct} % \begin{frame}{Unit Code of Conduct} % This class is using a great deal of material from The Carpentries. 
All interactions related to this class, inside and outside, abide by The Carpentries Code of Conduct. % Report code of conduct violations to Shawn, Brian, or [email protected]. % \url{https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html} % In summary, we want to emphasise: % \begin{itemize}[label=\textbullet] % \item Use welcoming and inclusive language % \item Be respectful of different viewpoints and experiences % \item Gracefully accept constructive criticism % \item Focus on what is best for the community % \item Show courtesy and respect towards other community members % \end{itemize} % \end{frame} % \section{Expectations} % \begin{frame}{Is the content 'too hard'?} % `I still have my concerns about how over-technical this course is given it is now meant to be taken by students from across the entire Faculty from diverse backgrounds and with diverse interests...I suspect will cause students anxiety and maybe lead to drop out.' % \begin{itemize}[label=\textbullet] % \item Before we start, what was your reaction to reading the Unit description? % \item Do you agree with the quote above? % \end{itemize} % \end{frame} % \begin{frame}{Expectations and workload} % You are undertaking an Masters of Research at a top one-percent university (QS ranking 125 in Arts and Humanities, 202 in Social Sciences). Expectations and workload higher than what you are accustomed to. % \begin{itemize}[label=\textbullet] % \item Expect a workload of six hours per week outside of class to earn a DN or HD. % \item Avoid missing classes. If you do, expect to spend four hours to catch up. % \item If you want to continue to a PhD you need to maintain a DN or HD average. % \item Both of us have taught overseas and are engaged with international trends in research technology. This unit has been calibrated to the international environment. % \item Considering the academic job market, competition is fierce. % \item Most of you will not get academic jobs, so transferable skills are crucial. % \item It is our job to prepare you for this environment, and yours to make yourself competitive. % \end{itemize} % \end{frame} % \begin{frame}{Assessment} % \begin{itemize}[label=\textbullet] % \item Proof of Concept % \item Original Software Publication % \item Lightning talk % \item Learning journal % \end{itemize} % \end{frame} % \section{Don't panic!} % \begin{frame}{Data Carpentry: a proven approach} % `Building communities teaching universal data literacy' % `Data Carpentry trains researchers in the core data skills for efficient, shareable, and reproducible research practices. We run accessible, inclusive training workshops; teach openly available, high-quality, domain-tailored lessons; and foster an active, inclusive, diverse instructor community that promotes and models reproducible research as a community norm.' \cite{Teal2016-gy} % `Since 1998, Software Carpentry has been teaching researchers the computing skills they need to get more done in less time and with less pain. Our volunteer instructors have run hundreds of events for more than 34,000 researchers since 2012.' \cite{Duckles2018-fu} % \end{frame} % \begin{frame}{Data Carpentry: widely used worldwide in HASS} % Carpentries training is used all over the world to teach digital literacy and computational thinking to Humanities and Social Sciences students and researchers. 
% \begin{itemize}[label=\textbullet] % \item Digital Humanities at Oxford Summer School % \item CODATA-RDA School of Research Data Science % \item Australian Research Data Cloud training % \item THATCamps (e.g., at Sydney ResBaz 2019) % \end{itemize} % \end{frame} % \begin{frame}{Data Carpentry: used at Macquarie} % Other MRes students at this university have successfully undergone DC training: % \begin{itemize}[label=\textbullet] % \item BIOL703 Research Skills for Biology % \item No excess attrition, high student satisfaction, good feedback % \item Nominated for a Vice-Chancellor's Learning and Teaching award % \item Is the background or needs of Arts students that different from ecology, biology, environmental sciences, and related fields? % \end{itemize} % \end{frame} % \begin{frame}{Previous HASS MRes students have thrived} % \url{https://www.youtube.com/watch?v=r9jpe9_2z3c} % \end{frame} % \section{What, and why?} % \begin{frame}{Digital literacy: creators, not consumers} % \begin{figure}[H] % \centering % \includegraphics[height=.6\textheight]{figures/2011-ProgOrBeProgged-248x340.jpg} % \caption{Program or be Programmed, Douglas Rushkoff} % \label{fig:programmed} % \end{figure} % See also: \url{https://impossiblehq.com/an-unexpected-ass-kicking/} % %insert 'Program or be programmed' book cover image, and link to 'An unexpected ass kicking' %https://rushkoff.com/books/program-or-be-programmed/ % %https://impossiblehq.com/an-unexpected-ass-kicking/ % \end{frame} % \begin{frame}{Computational thinking: what can you do with a computer?} % \begin{figure}[H] % \centering % \includegraphics[height=.6\textheight]{figures/ctc-w2b.jpg} % \caption{'To flourish in today's world, computational thinking has to be a fundamental part of the way people think and understand the world.' \cite{Center_for_Computational_Thinking2012-tt}} % \label{fig:ctc} % \end{figure} % %insert https://www.cs.cmu.edu/~CompThink/images/ctc-w2b.jpg % %caption: 'To flourish in today's world, computational thinking has to be a fundamental part of the way people think and understand the world.' https://www.cs.cmu.edu/~CompThink/ % \end{frame} % \begin{frame}{Tools and approaches} % Only within these frameworks can you use available tools and approaches - but we will introduce you to a range of them, customised to the disciplinary mix in the class. % \begin{itemize}[label=\textbullet] % \item Research design and project management % \item Data management planning % \item Data capture % \item Data analysis and collaboration % \item Data archiving and dissemination % \end{itemize} % \end{frame} % \section{Tools and Communication} % \begin{frame}{Discussion on which tools we will use as a class} % \begin{itemize}[label=\textbullet] % \item Chat/coordination/project management software % \item Typesetting software % \item Version control online repository % \item File sharing mechanisms % \item Backup mechanisms % \end{itemize} % \end{frame} % \begin{frame}{Coordination outside of class} % \begin{itemize}[label=\textbullet] % \item Hacky-hour/study groups: \url{https://science.mozilla.org/programs/studygroups} % \item Consultation Hours: Friday 12:45-1:45pm (AHH Level 2 lobby) and 4:15-5:15pm, campus hub (before and after seminar) % \item \url{https://twitter.com/Rusers_MQ} % \end{itemize} % \end{frame} % \section{Moving on to Data Carpentry} % \begin{frame}{Pre-Carpentry survey} % At the start and end of every carpentries workshop, we poll participants. 
% \url{https://bit.ly/FOAR705-pre} % \begin{figure}[H] % \centering % \includegraphics[height=.6\textheight]{figures/qr.jpeg} % \caption{\url{https://mqedu.qualtrics.com/jfe/form/SV_5v6iQJSBZDNhq4d?workshop=FOAR705-2019}} % \label{fig:foarqr} % \end{figure} % \end{frame} % \begin{frame}{Sticky notes} % We use sticky notes during our workshops (and thus during our classes) to indicate progress or needs for assistance. % We also use them as minute cards for feedback and the end of each session. % \end{frame} % \begin{frame}{Starting the workshop} % \begin{itemize} % \item \url{https://datacarpentry.org/socialsci-workshop/} % \item \url{https://datacarpentry.org/spreadsheets-socialsci/setup.html} % \item \url{https://datacarpentry.org/openrefine-socialsci/setup.html} % \item \url{https://datacarpentry.org/r-socialsci/setup.html} % \end{itemize} % \end{frame} % % \bibliographystyle{apalike} % % Adding the option 'allowframebreaks' allows the contents of the slide to be expanded in more than one slide. % % The "1" comes from the outer theme" % \section{Minute cards!} % \begin{frame}{Feedback time} % On your green sticky, write one thing we did well today. % On your red sticky, write one thing we could improve upon for next time. Be specific. % \end{frame} \section{References} \begin{multicols}{2}[] \bibliography{references} \bibliographystyle{apalike} \end{multicols} % \begin{frame}[allowframebreaks]{References} % \bibliography{references} % \bibliographystyle{apalike} % \end{frame} \begin{frame}{Thank you!} % This presentation is available at: % \texttt{https://osf.io/...} Source code for this presentation is available at: \url{https://github.com/MQ-FOAR705/MQ-FOAR705-Week2} This work is licensed under a Creative Commons Attribution 4.0 International License. \end{frame} \end{document}
{ "alphanum_fraction": 0.7355998349, "avg_line_length": 40.2552870091, "ext": "tex", "hexsha": "47a5c6ddf1d1fcc8febaef66ffa9ae3226ea373b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cf333d4594ae8cc0fcb8f20810b41fc961403ac5", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "MQ-FOAR705/MQ-FOAR705-Week2", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cf333d4594ae8cc0fcb8f20810b41fc961403ac5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "MQ-FOAR705/MQ-FOAR705-Week2", "max_issues_repo_path": "main.tex", "max_line_length": 760, "max_stars_count": null, "max_stars_repo_head_hexsha": "cf333d4594ae8cc0fcb8f20810b41fc961403ac5", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "MQ-FOAR705/MQ-FOAR705-Week2", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6709, "size": 26649 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % APP.TEX July 1990 % % % % This file is part of the AMS-LaTeX Version 1.0 distribution % % American Mathematical Society, Technical Support Group, % % P. O. Box 6248, Providence, RI 02940 % % 800-321-4AMS (321-4267) or 401-455-4080 % % Internet: [email protected] % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \appendix \chapter[Nonselfadjoint Equations]% {On the Eigenvalues and Eigenfunctions\\ of Certain Classes of Nonselfadjoint Equations} \section{Compact operators} In an appropriate Hilbert space, all the equations considered below can be reduced to the form \begin{equation} y=L(\lambda)y+f,\qquad L(\lambda)=K_0+\lambda K_1+\dots+\lambda^n K_n, \end{equation} where $y$ and $f$ are elements of the Hilbert space, $\lambda$ is a complex parameter, and the $K_i$ are compact operators. A compact operator $R(\lambda)$ is the resolvent of $L(\lambda)$ if $(E+R)(E-L)=E$. If the resolvent exists for some $\lambda=\lambda_0$, it is a meromorphic function of $\lambda$ on the whole plane. We say that $y$ is an eigenelement for the eigenvalue $\lambda=c$, and that $y_1,\dots,y_k$ are elements associated with it (or associated elements) if \begin{equation} y=L(c)y,\quad y_k=L(c)y_k+\frac{1}{1!}\,\frac{\partial L(c)}{\partial c} y_{k-1}+\dots+\frac{1}{k!}\,\frac{\partial^kL(c)}{\partial c^k}y. \end{equation} Note that if $y$ is an eigenelement and $y_1,\dots,y_k$ are elements associated with it, then $y(t)=e^{ct}(y_k +y_{k-1}t/1!+\dots+yt^k/k!)$ is a solution of the equation $y=K_0y+K_1\partial y/\partial t+\dots+K_n\partial^ny/ \partial t^n$. \endinput
{ "alphanum_fraction": 0.5755208333, "avg_line_length": 43.6363636364, "ext": "tex", "hexsha": "4802f91db8e98f6f8cc6efa1509dfeb380f74e05", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "henesy/plan9-1e", "max_forks_repo_path": "sys/lib/tex/macros/doc/ams/app.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "henesy/plan9-1e", "max_issues_repo_path": "sys/lib/tex/macros/doc/ams/app.tex", "max_line_length": 76, "max_stars_count": null, "max_stars_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "henesy/plan9-1e", "max_stars_repo_path": "sys/lib/tex/macros/doc/ams/app.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 538, "size": 1920 }
\documentclass[]{article} \usepackage{lmodern} % [September 3, 2018 ALR] % Changed to allow for left justified equations \usepackage{amssymb} \usepackage[fleqn]{amsmath} % \usepackage{tikz} % \usetikzlibrary{arrows,intersections} \allowdisplaybreaks % [Oct 27, 2019 ALR] % to put nice boxes around correct answers :) % https://tex.stackexchange.com/questions/36524/how-to-put-a-framed-box-around-text-math-environment/36528 \usepackage{tcolorbox} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=2cm]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={EN-625-661-81-FA19 Module 11 Assignment}, pdfauthor={Adam Rich}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} 
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} \title{EN-625-661-81-FA19 Module 11 Assignment} \author{Adam Rich} \date{November 12, 2019} % [September 15, 2018 ALR] % Inspired by % https://tex.stackexchange.com/questions/15894/evaluation-of-differentiation-and-integration % \newcommand*\evalbar{\left|.\right|} \def\at{ \left. \vphantom{\int} \right| } \begin{document} \maketitle \hypertarget{problem-1a}{% \subsection{Problem \#1a}\label{problem-1a}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{123876}\NormalTok{)} \NormalTok{d132_all <-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{'./m11/data-prob-13-2.csv'}\NormalTok{)} \NormalTok{d132_all}\OperatorTok{$}\NormalTok{ID <-}\StringTok{ }\DecValTok{1}\OperatorTok{:}\KeywordTok{nrow}\NormalTok{(d132_all)} \NormalTok{rows <-}\StringTok{ }\KeywordTok{sort}\NormalTok{(}\KeywordTok{sample}\NormalTok{(d132_all}\OperatorTok{$}\NormalTok{ID, }\DecValTok{15}\NormalTok{))} \NormalTok{d132 <-}\StringTok{ }\NormalTok{d132_all[rows, ]} \KeywordTok{print}\NormalTok{(d132)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## x y ID ## 1 38000 0 1 ## 2 51200 1 2 ## 3 39600 0 3 ## 4 43400 1 4 ## 6 53000 0 6 ## 7 41500 1 7 ## 8 40800 0 8 ## 10 52400 1 10 ## 11 38700 1 11 ## 13 49500 1 13 ## 16 54000 1 16 ## 17 51700 1 17 ## 18 39400 0 18 ## 19 40900 0 19 ## 20 52800 1 20 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mx <-}\StringTok{ }\KeywordTok{glm}\NormalTok{(y }\OperatorTok{~}\StringTok{ }\NormalTok{x, }\DataTypeTok{data =}\NormalTok{ d132, }\DataTypeTok{family =} \KeywordTok{binomial}\NormalTok{(}\DataTypeTok{link =} \StringTok{'logit'}\NormalTok{))} \KeywordTok{summary}\NormalTok{(mx)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## glm(formula = y ~ x, family = binomial(link = "logit"), data = d132) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -2.0874 -0.8949 0.4997 0.6318 1.5630 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -8.804002 4.961702 -1.774 0.0760 . ## x 0.000205 0.000112 1.829 0.0673 . ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 20.190 on 14 degrees of freedom ## Residual deviance: 15.855 on 13 degrees of freedom ## AIC: 19.855 ## ## Number of Fisher Scoring iterations: 4 \end{verbatim} \begin{tcolorbox} $$ \hat{y} = \frac{1}{1 + \exp(8.8040 - 0.0002x)} $$ \end{tcolorbox} \hypertarget{problem-1b}{% \subsection{Problem \#1b}\label{problem-1b}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{p0 <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d132}\OperatorTok{$}\NormalTok{y) }\OperatorTok{/}\StringTok{ }\KeywordTok{nrow}\NormalTok{(d132)} \NormalTok{px <-}\StringTok{ }\KeywordTok{fitted}\NormalTok{(mx)} \NormalTok{ll0 <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d132}\OperatorTok{$}\NormalTok{y }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(p0) }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{d132}\OperatorTok{$}\NormalTok{y) }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{p0))} \NormalTok{llx <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d132}\OperatorTok{$}\NormalTok{y }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(px) }\OperatorTok{+}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{d132}\OperatorTok{$}\NormalTok{y) }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{px))} \NormalTok{D_res <-}\StringTok{ }\DecValTok{2} \OperatorTok{*}\StringTok{ }\KeywordTok{sum}\NormalTok{(}\KeywordTok{ifelse}\NormalTok{(} \NormalTok{ d132}\OperatorTok{$}\NormalTok{y }\OperatorTok{==}\StringTok{ }\DecValTok{1}\NormalTok{, d132}\OperatorTok{$}\NormalTok{y }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(d132}\OperatorTok{$}\NormalTok{y }\OperatorTok{/}\StringTok{ }\NormalTok{px), } \NormalTok{ (}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{d132}\OperatorTok{$}\NormalTok{y) }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{((}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{d132}\OperatorTok{$}\NormalTok{y)}\OperatorTok{/}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{px))))} \NormalTok{D_reg <-}\StringTok{ }\DecValTok{2} \OperatorTok{*}\StringTok{ }\NormalTok{(llx }\OperatorTok{-}\StringTok{ }\NormalTok{ll0)} \KeywordTok{print}\NormalTok{(D_res)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15.85534 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{print}\NormalTok{(D_reg)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 4.335008 \end{verbatim} The residual and ``regression'' deviance are also given by the \texttt{anova} function. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{anova}\NormalTok{(mx)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Analysis of Deviance Table ## ## Model: binomial, link: logit ## ## Response: y ## ## Terms added sequentially (first to last) ## ## ## Df Deviance Resid. Df Resid. Dev ## NULL 14 20.190 ## x 1 4.335 13 15.855 \end{verbatim} There are multiple measures of deviance here that we care about. The NULL model takes a global average to get a constant \(\pi\). In this case, it has a deviance equal to 20.190. The addition of \(x\) as a regressor reduces deviance by 4.335. In the first test of goodness-of-fit, we say that the NULL hypothesis is that all \(\beta\) are zero, i.e., the NULL model is the correct one. The test statistic is 4.335 and is \(\chi^2\) with one degree of freedom. 
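Just to spell out what is being tested (this is only the standard likelihood-ratio identity, restated with the deviances from the \texttt{anova} table above): writing \(\ell_0\) and \(\ell_x\) for the maximized log-likelihoods of the intercept-only model and the model with \(x\),

\[
D_{\text{NULL}} - D_{x} = 2\left(\ell_x - \ell_0\right) = 20.190 - 15.855 = 4.335 .
\]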
\begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{4.335}\NormalTok{, }\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.03733642 \end{verbatim} With a \(p\)-value of 0.037 I would reject the NULL hypothesis. The second test is for \(H_0: \beta_1 \neq 0\) and \(H_1\) is that the saturated model is correct. Or in other words, do we have evidence to reject our model because it has too much residual deviance. The test statistic is 15.855 with 13 degrees of freedom. \begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{15.855}\NormalTok{, }\DecValTok{13}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.2570485 \end{verbatim} Because we have a \(p\)-value \emph{greater than} a standard significance level we fail to reject \(H_0\). Or in other words, we fail to reject our fitted model in favor of a saturated model. \begin{tcolorbox} Residual deviance is 15.855 with 13 degrees of freedom and a $p$-value of 0.257. As such, we fail to reject our fitted model in favor of a saturated model. Or, in other words, our model "passes" the deviance goodness-of-fit test. \end{tcolorbox} \hypertarget{problem-1c}{% \subsection{Problem \#1c}\label{problem-1c}} The odds ratio is the multiplicative increase in probability of success when the regressor variable increases additively by 1.0. \[ O_R = e^{\beta_1} \] For this 15 rows of data: \begin{Shaded} \begin{Highlighting}[] \NormalTok{beta1 <-}\StringTok{ }\NormalTok{mx}\OperatorTok{$}\NormalTok{coefficients[}\DecValTok{2}\NormalTok{]} \KeywordTok{exp}\NormalTok{(beta1)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## x ## 1.000205 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{exp}\NormalTok{(beta1 }\OperatorTok{*}\StringTok{ }\DecValTok{100}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## x ## 1.020707 \end{verbatim} \begin{tcolorbox} For every \$100 increase in income, the probability of home ownership increases by 2.07\%. \end{tcolorbox} \hypertarget{problem-1d}{% \subsection{Problem \#1d}\label{problem-1d}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mx2 <-}\StringTok{ }\KeywordTok{glm}\NormalTok{(y }\OperatorTok{~}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\KeywordTok{I}\NormalTok{(x}\OperatorTok{^}\DecValTok{2}\NormalTok{), }\DataTypeTok{data =}\NormalTok{ d132, }\DataTypeTok{family =} \KeywordTok{binomial}\NormalTok{(}\DataTypeTok{link =} \StringTok{'logit'}\NormalTok{))} \KeywordTok{anova}\NormalTok{(mx2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Analysis of Deviance Table ## ## Model: binomial, link: logit ## ## Response: y ## ## Terms added sequentially (first to last) ## ## ## Df Deviance Resid. Df Resid. Dev ## NULL 14 20.190 ## x 1 4.3350 13 15.855 ## I(x^2) 1 1.1741 12 14.681 \end{verbatim} The addition of \(x^2\) reduces residual deviance by 1.1741 at the cost of one degree of freedom. \begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{1.1741}\NormalTok{, }\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.2785604 \end{verbatim} \begin{tcolorbox} With a $p$-value of 0.279, there is not enough evidence to say that the reduction in deviance is worth it. I would not reject the simpler model. 
\end{tcolorbox} \hypertarget{problem-2-13.4-in-the-textbook}{% \subsection{Problem \#2: 13.4 in the textbook}\label{problem-2-13.4-in-the-textbook}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1264}\NormalTok{)} \NormalTok{d134_all <-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{'./m11/data-prob-13-4.csv'}\NormalTok{)} \NormalTok{d134_all}\OperatorTok{$}\NormalTok{ID <-}\StringTok{ }\DecValTok{1}\OperatorTok{:}\KeywordTok{nrow}\NormalTok{(d134_all)} \NormalTok{rows <-}\StringTok{ }\KeywordTok{sort}\NormalTok{(}\KeywordTok{sample}\NormalTok{(d134_all}\OperatorTok{$}\NormalTok{ID, }\DecValTok{9}\NormalTok{))} \NormalTok{d134 <-}\StringTok{ }\NormalTok{d134_all[rows, ]} \KeywordTok{print}\NormalTok{(d134)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## x n r ID ## 2 7 500 122 2 ## 3 9 500 147 3 ## 4 11 500 176 4 ## 5 13 500 211 5 ## 6 15 500 244 6 ## 8 19 500 310 8 ## 9 21 500 343 9 ## 10 23 500 372 10 ## 11 25 500 391 11 \end{verbatim} To make dealing with grouped data easier in R, I'm going to add two variables: \texttt{n1} is the number of successes and \texttt{n0} is the number of failures. Success here is defined as redemptions. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d134}\OperatorTok{$}\NormalTok{n1 <-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{r} \NormalTok{d134}\OperatorTok{$}\NormalTok{n0 <-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n }\OperatorTok{-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n1} \NormalTok{mx <-}\StringTok{ }\KeywordTok{glm}\NormalTok{(}\KeywordTok{cbind}\NormalTok{(n1, n0) }\OperatorTok{~}\StringTok{ }\NormalTok{x, }\DataTypeTok{data =}\NormalTok{ d134, }\DataTypeTok{family =}\NormalTok{ binomial)} \KeywordTok{summary}\NormalTok{(mx)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## glm(formula = cbind(n1, n0) ~ x, family = binomial, data = d134) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -0.32552 -0.08348 0.02368 0.09159 0.26113 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -2.094422 0.093725 -22.35 <2e-16 *** ## x 0.136286 0.005609 24.30 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 671.12199 on 8 degrees of freedom ## Residual deviance: 0.24813 on 7 degrees of freedom ## AIC: 62.785 ## ## Number of Fisher Scoring iterations: 3 \end{verbatim} \begin{tcolorbox} $$ \hat{y} = \frac{1}{1 + \exp(2.0944 - 0.1363x)} $$ \end{tcolorbox} \hypertarget{problem-2b}{% \subsection{Problem \#2b}\label{problem-2b}} I'm going to calculate the deviance ``long form'' again just to make sure I understand what the \texttt{anova} function is doing. 
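For reference, the residual deviance computed below is the usual grouped binomial deviance; writing \(n_{i1}\) and \(n_{i0}\) for the observed counts of redemptions and non-redemptions in row \(i\), and \(\hat{e}_{i1} = n_i\hat{\pi}_i\), \(\hat{e}_{i0} = n_i(1-\hat{\pi}_i)\) for the fitted counts (with the convention \(0\log 0 = 0\)), this is exactly the formula the code implements:

\[
D_{\text{res}} = 2\sum_{i}\left[ n_{i1}\log\frac{n_{i1}}{\hat{e}_{i1}} + n_{i0}\log\frac{n_{i0}}{\hat{e}_{i0}} \right] .
\]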
\begin{Shaded} \begin{Highlighting}[] \NormalTok{p0 <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n1) }\OperatorTok{/}\StringTok{ }\KeywordTok{sum}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n)} \NormalTok{px <-}\StringTok{ }\KeywordTok{fitted}\NormalTok{(mx)} \NormalTok{ll0 <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n1 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(p0) }\OperatorTok{+}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n0 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{p0))} \NormalTok{llx <-}\StringTok{ }\KeywordTok{sum}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n1 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(px) }\OperatorTok{+}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n0 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{px))} \NormalTok{d134}\OperatorTok{$}\NormalTok{ex1 <-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n }\OperatorTok{*}\StringTok{ }\NormalTok{px} \NormalTok{d134}\OperatorTok{$}\NormalTok{ex0 <-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n }\OperatorTok{*}\StringTok{ }\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\NormalTok{px)} \NormalTok{D_res <-}\StringTok{ }\DecValTok{2} \OperatorTok{*}\StringTok{ }\KeywordTok{sum}\NormalTok{(} \KeywordTok{ifelse}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n1 }\OperatorTok{==}\StringTok{ }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, d134}\OperatorTok{$}\NormalTok{n1 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n1 }\OperatorTok{/}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{ex1)) }\OperatorTok{+}\StringTok{ } \StringTok{ }\KeywordTok{ifelse}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n0 }\OperatorTok{==}\StringTok{ }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, d134}\OperatorTok{$}\NormalTok{n0 }\OperatorTok{*}\StringTok{ }\KeywordTok{log}\NormalTok{(d134}\OperatorTok{$}\NormalTok{n0 }\OperatorTok{/}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{ex0)))} \NormalTok{D_reg <-}\StringTok{ }\DecValTok{2} \OperatorTok{*}\StringTok{ }\NormalTok{(llx }\OperatorTok{-}\StringTok{ }\NormalTok{ll0)} \KeywordTok{print}\NormalTok{(D_res)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.2481324 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{print}\NormalTok{(D_reg)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 670.8739 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{anova}\NormalTok{(mx)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Analysis of Deviance Table ## ## Model: binomial, link: logit ## ## Response: cbind(n1, n0) ## ## Terms added sequentially (first to last) ## ## ## Df Deviance Resid. Df Resid. Dev ## NULL 8 671.12 ## x 1 670.87 7 0.25 \end{verbatim} Test these deviances as in \#1b above. 
\begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{670.87}\NormalTok{, }\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{0.24813}\NormalTok{, }\DecValTok{7}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.9999475 \end{verbatim} \begin{tcolorbox} The reduction of deviance by using $x$ has a $p$-value of effectively zero. The residual deviance has a $p$-value that is effectively one. Our fit passes with flying colors. \end{tcolorbox} \hypertarget{problem-2c}{% \subsection{Problem \#2c}\label{problem-2c}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(n1 }\OperatorTok{~}\StringTok{ }\NormalTok{x, }\DataTypeTok{data =}\NormalTok{ d134, }\DataTypeTok{pch =} \DecValTok{3}\NormalTok{, }\DataTypeTok{col =} \StringTok{'blue'}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \KeywordTok{points}\NormalTok{(d134}\OperatorTok{$}\NormalTok{x, d134}\OperatorTok{$}\NormalTok{ex1, }\DataTypeTok{pch =} \DecValTok{23}\NormalTok{, }\DataTypeTok{col =} \StringTok{'red'}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \KeywordTok{legend}\NormalTok{(} \StringTok{'topleft'}\NormalTok{, } \DataTypeTok{legend =} \KeywordTok{c}\NormalTok{(}\StringTok{'actual'}\NormalTok{, }\StringTok{'fitted'}\NormalTok{), } \DataTypeTok{pch =} \KeywordTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{23}\NormalTok{), }\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{'blue'}\NormalTok{, }\StringTok{'red'}\NormalTok{), }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{m11-hw_files/figure-latex/unnamed-chunk-13-1.pdf} \hypertarget{problem-2d}{% \subsection{Problem \#2d}\label{problem-2d}} I can say right away that the answer is no! There isn't enough deviance to remove to warrant the reduction in degrees of freedom. \begin{Shaded} \begin{Highlighting}[] \NormalTok{mx2 <-}\StringTok{ }\KeywordTok{glm}\NormalTok{(}\KeywordTok{cbind}\NormalTok{(n1, n0) }\OperatorTok{~}\StringTok{ }\NormalTok{x }\OperatorTok{+}\StringTok{ }\KeywordTok{I}\NormalTok{(x}\OperatorTok{^}\DecValTok{2}\NormalTok{), }\DataTypeTok{data =}\NormalTok{ d134, }\DataTypeTok{family =}\NormalTok{ binomial)} \KeywordTok{anova}\NormalTok{(mx2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Analysis of Deviance Table ## ## Model: binomial, link: logit ## ## Response: cbind(n1, n0) ## ## Terms added sequentially (first to last) ## ## ## Df Deviance Resid. Df Resid. Dev ## NULL 8 671.12 ## x 1 670.87 7 0.25 ## I(x^2) 1 0.01 6 0.24 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pchisq}\NormalTok{(}\FloatTok{0.01}\NormalTok{, }\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.9203443 \end{verbatim} \begin{tcolorbox} The $p$-value is really high. I do not reject simpler model. \end{tcolorbox} \hypertarget{problem-2e}{% \subsection{Problem \#2e}\label{problem-2e}} It's visually indistiguishable. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{d134}\OperatorTok{$}\NormalTok{ex1_x2 <-}\StringTok{ }\NormalTok{d134}\OperatorTok{$}\NormalTok{n }\OperatorTok{*}\StringTok{ }\KeywordTok{fitted}\NormalTok{(mx2)} \KeywordTok{plot}\NormalTok{(n1 }\OperatorTok{~}\StringTok{ }\NormalTok{x, }\DataTypeTok{data =}\NormalTok{ d134, }\DataTypeTok{pch =} \DecValTok{3}\NormalTok{, }\DataTypeTok{col =} \StringTok{'blue'}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \KeywordTok{points}\NormalTok{(d134}\OperatorTok{$}\NormalTok{x, d134}\OperatorTok{$}\NormalTok{ex1_x2, }\DataTypeTok{pch =} \DecValTok{23}\NormalTok{, }\DataTypeTok{col =} \StringTok{'red'}\NormalTok{, }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \KeywordTok{legend}\NormalTok{(} \StringTok{'topleft'}\NormalTok{, } \DataTypeTok{legend =} \KeywordTok{c}\NormalTok{(}\StringTok{'actual'}\NormalTok{, }\StringTok{'fitted: x + x^2'}\NormalTok{), } \DataTypeTok{pch =} \KeywordTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{23}\NormalTok{), }\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{'blue'}\NormalTok{, }\StringTok{'red'}\NormalTok{), }\DataTypeTok{cex =} \FloatTok{1.5}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{m11-hw_files/figure-latex/unnamed-chunk-15-1.pdf} \hypertarget{problem-2f}{% \subsection{Problem \#2f}\label{problem-2f}} The Wald statistic is \(\hat\beta / se(\hat\beta)\) (page 437 of the text). This is calcualted by the \texttt{summary} function. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{summary}\NormalTok{(mx2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## glm(formula = cbind(n1, n0) ~ x + I(x^2), family = binomial, ## data = d134) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -0.2725 -0.0910 -0.0197 0.1341 0.2799 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -2.1213906 0.2765495 -7.671 1.71e-14 *** ## x 0.1401566 0.0377508 3.713 0.000205 *** ## I(x^2) -0.0001209 0.0011661 -0.104 0.917419 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 671.12199 on 8 degrees of freedom ## Residual deviance: 0.23738 on 6 degrees of freedom ## AIC: 64.774 ## ## Number of Fisher Scoring iterations: 3 \end{verbatim} And more succintly by the \texttt{coefficients} object of the \texttt{summary}. It is labelled as ``z value''. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{summary}\NormalTok{(mx2)}\OperatorTok{$}\NormalTok{coefficients} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Estimate Std. 
Error z value Pr(>|z|) ## (Intercept) -2.1213905651 0.276549481 -7.6709259 1.707592e-14 ## x 0.1401565550 0.037750752 3.7126824 2.050742e-04 ## I(x^2) -0.0001209097 0.001166127 -0.1036849 9.174194e-01 \end{verbatim} \hypertarget{problem-2g}{% \subsection{Problem \#2g}\label{problem-2g}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{coef <-}\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{(}\KeywordTok{summary}\NormalTok{(mx2)}\OperatorTok{$}\NormalTok{coefficients)} \NormalTok{coef}\OperatorTok{$}\NormalTok{lower_bound <-}\StringTok{ }\NormalTok{coef[, }\DecValTok{1}\NormalTok{] }\OperatorTok{+}\StringTok{ }\KeywordTok{qnorm}\NormalTok{(}\FloatTok{0.025}\NormalTok{) }\OperatorTok{*}\StringTok{ }\NormalTok{coef[, }\DecValTok{2}\NormalTok{]} \NormalTok{coef}\OperatorTok{$}\NormalTok{upper_bound <-}\StringTok{ }\NormalTok{coef[, }\DecValTok{1}\NormalTok{] }\OperatorTok{+}\StringTok{ }\KeywordTok{qnorm}\NormalTok{(}\FloatTok{0.975}\NormalTok{) }\OperatorTok{*}\StringTok{ }\NormalTok{coef[, }\DecValTok{2}\NormalTok{]} \KeywordTok{print}\NormalTok{(coef[, }\KeywordTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{6}\NormalTok{)])} \end{Highlighting} \end{Shaded} \begin{verbatim} ## lower_bound Estimate upper_bound ## (Intercept) -2.663417587 -2.1213905651 -1.579363543 ## x 0.066166440 0.1401565550 0.214146670 ## I(x^2) -0.002406476 -0.0001209097 0.002164657 \end{verbatim} \end{document}
{ "alphanum_fraction": 0.6770184173, "avg_line_length": 40.3412921348, "ext": "tex", "hexsha": "fade05819ff759fddb9402aeefc8837213bb5451", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f2b2307f79b0d3bf771266e007e49f938b2b532e", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "adamleerich/jhu-625-661-regression-linear", "max_forks_repo_path": "m11/m11-hw.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f2b2307f79b0d3bf771266e007e49f938b2b532e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "adamleerich/jhu-625-661-regression-linear", "max_issues_repo_path": "m11/m11-hw.tex", "max_line_length": 399, "max_stars_count": null, "max_stars_repo_head_hexsha": "f2b2307f79b0d3bf771266e007e49f938b2b532e", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "adamleerich/jhu-625-661-regression-linear", "max_stars_repo_path": "m11/m11-hw.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10017, "size": 28723 }
\section{Accessing the Hardware}

\begin{figure}[H]
	\centering
	\includegraphics[width=0.6\textwidth]{figures/accessingHardware.png}
	\caption{Access to the hardware}
\end{figure}

\textbf{Hardware looks like memory, but does not behave like memory.}
The following two subsections describe the two ways to access the hardware: via a pointer, or via a variable whose address is managed externally (e.g.\ by the linker script).

\hypertarget{access-via-pointers}{%
\subsection{Access via pointers}\label{access-via-pointers}}

The following shows how hardware access works with a pointer defined directly in the C file; a more complete example follows at the end of this section.

\begin{lstlisting}
volatile HWReg* const hwReg=(volatile HWReg* const) addr;
hwReg->someReg=someValue;
\end{lstlisting}

\hypertarget{access-via-extern}{%
\subsection{Access via extern}\label{access-via-extern}}

The following shows how hardware access works when the placement of the register block is managed externally, e.g.\ by the linker script.

\begin{lstlisting}
// declaration
static volatile HWReg hwReg;

// access
hwReg.someReg=someValue;
\end{lstlisting}
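As an illustration, a more complete sketch of the pointer-based variant is shown below. The register layout, the base address and the names are invented for this example and are not taken from a real device; only the \texttt{volatile}/\texttt{const} pattern itself is the point.

\begin{lstlisting}
#include <stdint.h>

// Hypothetical register block of a memory-mapped peripheral.
typedef struct {
    volatile uint32_t CTRL;    // control register
    volatile uint32_t STATUS;  // status register (read by polling)
    volatile uint32_t DATA;    // data register
} HWReg;

#define HWREG_BASE_ADDR 0x40001000u  // example address, device specific

// 'volatile' forces every access to really reach the hardware,
// 'const' fixes the pointer itself to this one peripheral.
static volatile HWReg* const hwReg = (volatile HWReg*) HWREG_BASE_ADDR;

void hw_enable(void)
{
    hwReg->CTRL = 1u;                    // write goes straight to the device
    while ((hwReg->STATUS & 1u) == 0u) {
        // busy-wait: STATUS is re-read on every iteration because of volatile
    }
}
\end{lstlisting}

\clearpage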
{ "alphanum_fraction": 0.7916251246, "avg_line_length": 27.8611111111, "ext": "tex", "hexsha": "a12a3518e175055151f65f1e100dff6944e8b3ff", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": [ "Beerware" ], "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_path": "TSM_EmbReal/02_Hardware.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Beerware" ], "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_path": "TSM_EmbReal/02_Hardware.tex", "max_line_length": 115, "max_stars_count": null, "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": [ "Beerware" ], "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_path": "TSM_EmbReal/02_Hardware.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 253, "size": 1003 }
\documentclass{subfile}
\begin{document}
\section{USAMO}\label{sec:usamo}
\begin{problem}[USAMO $2020$, problem $6$]
	Let $n\geq2$ be an integer and $x_{1}\geq\ldots\geq x_{n},y_{1}\geq\ldots\geq y_{n}$ be $2n$ real numbers such that
	\begin{align*}
		0 = x_{1}+\ldots+x_{n} & = y_{1}+\ldots+y_{n}\\
		1 = x_{1}^{2}+\ldots+x_{n}^{2} & = y_{1}^{2}+\ldots+y_{n}^{2}
	\end{align*}
	Prove that
	\begin{align*}
		\sum_{i=1}^{n}(x_{i}y_{i}-x_{i}y_{n+1-i})^{2} & \geq\dfrac{2}{\sqrt{n-1}}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2018$, problem $1$]
	Let $a,b,c$ be positive real numbers such that $a+b+c=4\sqrt[3]{abc}$. Prove that
	\begin{align*}
		2(ab+bc+ca)+4\min\{a^{2},b^{2},c^{2}\} & \geq a^{2}+b^{2}+c^{2}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2017$, problem $6$]
	Find the minimum possible value of
	\begin{align*}
		\dfrac{a}{b^{3}+4}+\dfrac{b}{c^{3}+4}+\dfrac{c}{d^{3}+4}+\dfrac{d}{a^{3}+4}
	\end{align*}
	given that $a,b,c,d$ are non-negative real numbers such that $a+b+c+d=4$.
\end{problem}
\begin{problem}[USAMO $2013$, problem $4$]
	Find all real numbers $x,y,z\ge1$ such that
	\begin{align*}
		\min\{\sqrt{x+xyz},\sqrt{y+xyz},\sqrt{z+xyz}\} & = \sqrt{x-1}+\sqrt{y-1}+\sqrt{z-1}
	\end{align*}
\end{problem}
This is actually a special case of the following.
\begin{problem}
	Prove that for real numbers $a,b,c\geq1$, the following inequality holds:
	\begin{align*}
		\sqrt{a-1}+\sqrt{b-1}+\sqrt{c-1} & \leq\sqrt{a(bc+1)}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2012$, problem $6$]
	Let $n\geq2$ be an integer and $x_{1},\ldots,x_{n}$ be real numbers such that
	\begin{align*}
		x_{1}+\ldots+x_{n} & = 0\quad\mbox{and}\\
		x_{1}^{2}+\ldots+x_{n}^{2} & = 1
	\end{align*}
	For each subset $A$ of $\{1,2,\ldots,n\}$, define
	\begin{align*}
		S_{A} & = \sum_{i\in A}x_{i}
	\end{align*}
	$S_{A}=0$ if $A$ is empty. Prove that for any positive number $\lambda$, the number of sets $A$ satisfying $S_{A}\geq\lambda$ is at most
	\begin{align*}
		\dfrac{2^{n-3}}{\lambda^{2}}
	\end{align*}
	For which choices of $x_{1},\ldots,x_{n},\lambda$ does equality hold?
\end{problem}
\begin{problem}[USAMO $2011$, problem $1$]
	Let $a,b,c$ be positive real numbers such that
	\begin{align*}
		a^{2}+b^{2}+c^{2}+(a+b+c)^{2} & \leq 4
	\end{align*}
	Prove that
	\begin{align*}
		\dfrac{ab+1}{(a+b)^{2}}+\dfrac{bc+1}{(b+c)^{2}}+\dfrac{ca+1}{(c+a)^{2}} & \geq3
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2009$, problem $4$]
	Let $n\geq2$ be an integer and $a_{1},\ldots,a_{n}$ be positive real numbers such that
	\begin{align*}
		(a_{1}+\ldots+a_{n})\left(\dfrac{1}{a_{1}}+\ldots+\dfrac{1}{a_{n}}\right) & \leq \left(n+\dfrac{1}{2}\right)^{2}
	\end{align*}
	Prove that
	\begin{align*}
		\max\{a_{1},\ldots,a_{n}\} & \leq4\min\{a_{1},\ldots,a_{n}\}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2004$, problem $5$]
	Let $a,b,c$ be positive real numbers. Prove that
	\begin{align*}
		(a^{5}-a^{2}+3)(b^{5}-b^{2}+3)(c^{5}-c^{2}+3) & \geq (a+b+c)^{3}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2003$, problem $5$]
	Let $a,b,c$ be positive real numbers. Prove that
	\begin{align*}
		\dfrac{(2a+b+c)^{2}}{2a^{2}+(b+c)^{2}}+\dfrac{(2b+c+a)^{2}}{2b^{2}+(c+a)^{2}}+\dfrac{(2c+a+b)^{2}}{2c^{2}+(a+b)^{2}} & \leq 8
	\end{align*}
	\begin{solution}
		We have already solved it in \autoref{prob:usamo2003}.
	\end{solution}
\end{problem}
\begin{problem}[USAMO $2001$, problem $3$]
	Let $a,b,c$ be positive real numbers such that $a^{2}+b^{2}+c^{2}+abc=4$.
	Prove that
	\begin{align*}
		0 & \leq ab+bc+ca-abc\leq 2
	\end{align*}
\end{problem}
\begin{problem}[USAMO $2000$, problem $6$]
	Let $a_{1},\ldots,a_{n},b_{1},\ldots,b_{n}$ be non-negative real numbers. Prove that
	\begin{align*}
		\sum_{i,j=1}^{n}\min\{a_{i}a_{j},b_{i}b_{j}\} & \leq\sum_{i,j=1}^{n}\min\{a_{i}b_{j},a_{j}b_{i}\}
	\end{align*}
\end{problem}
\begin{problem}[USAMO $1999$, problem $4$]
	Let $n>3$ be an integer and $a_{1},\ldots,a_{n}$ be positive real numbers such that
	\begin{align*}
		a_{1}+\ldots+a_{n} & \geq n\quad\mbox{and}\\
		a_{1}^{2}+\ldots+a_{n}^{2} & \geq n^{2}
	\end{align*}
	Prove that $\max\{a_{1},\ldots,a_{n}\}\geq2$.
\end{problem}
\begin{problem}[USAMO $1996$, problem $3$]
	Let $a_{0},\ldots,a_{n}$ be real numbers in the interval $(0,\pi/2)$ such that
	\begin{align*}
		\tan\left(a_{0}-\dfrac{\pi}{4}\right)+\ldots+\tan\left(a_{n}-\dfrac{\pi}{4}\right) & \geq n-1
	\end{align*}
	Prove that
	\begin{align*}
		\tan{a_{0}}\cdots\tan{a_{n}} & \geq n^{n+1}
	\end{align*}
	\begin{solution}
		See \eqref{prob:usa1996-3}.
	\end{solution}
\end{problem}
\begin{problem}[USAMO $1994$, problem $4$]
	Let $a_{1},\ldots,a_{n}$ be a sequence of positive real numbers such that
	\begin{align*}
		\sum_{i=1}^{n}a_{i} & \geq \sqrt{n}
	\end{align*}
	for all $n$. Prove that
	\begin{align*}
		\sum_{i=1}^{n}a_{i}^{2} & > \dfrac{1}{4}\left(1+\dfrac{1}{2}+\ldots+\dfrac{1}{n}\right)
	\end{align*}
\end{problem}
\begin{problem}[USAMO $1993$, problem $5$]
	Let $a_{0},\ldots,a_{n}$ be positive real numbers such that $a_{i-1}a_{i+1}\leq a_{i}^2$ (such a sequence is called \textit{log concave}). Show that for $n>1$,
	\begin{align*}
		\dfrac{a_{0}+\ldots+a_{n}}{n+1}\dfrac{a_{1}+\ldots+a_{n-1}}{n-1} & \geq\dfrac{a_{0}+\ldots+a_{n-1}}{n}\dfrac{a_{1}+\ldots+a_{n}}{n}
	\end{align*}
\end{problem}
\end{document}
%
% CMPT 383: Comparative Programming Languages - A Course Overview
% Section: Introduction
%
% Author: Jeffrey Leung
%

\section{Introduction}
\label{sec:introduction}

\begin{easylist}
& Declarative vs. Imperative (see the short sketch following this list)
\end{easylist}

\clearpage
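As a rough illustration of the contrast above, here is a minimal sketch; JavaScript is assumed here purely as an example language, and the function names are illustrative only, not taken from the course material.

\begin{verbatim}
// Illustrative example only; JavaScript is an assumed choice of language.

// Imperative style: spell out *how* to compute the result, step by step.
function sumOfEvenSquaresImperative(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    const n = numbers[i];
    if (n % 2 === 0) {   // keep only even numbers
      total += n * n;    // square and accumulate
    }
  }
  return total;
}

// Declarative style: state *what* the result is by composing
// filter/map/reduce, leaving the control flow to the library.
const sumOfEvenSquaresDeclarative = (numbers) =>
  numbers
    .filter((n) => n % 2 === 0)        // keep only even numbers
    .map((n) => n * n)                 // square each one
    .reduce((acc, n) => acc + n, 0);   // sum them up

// Both calls print 20 for the input [1, 2, 3, 4].
console.log(sumOfEvenSquaresImperative([1, 2, 3, 4]));
console.log(sumOfEvenSquaresDeclarative([1, 2, 3, 4]));
\end{verbatim}

Both functions compute the same value; the difference is whether the control flow is written out explicitly or expressed through composition.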
% \section{Introduction}% % \begin{frame}[t]% \frametitle{Optimization Algorithms}% \begin{itemize}% \item Many questions in the real world are actually optimization problems\only<2->{, e.g., \begin{itemize}% % \item \only<-6>{Find the \emph{shortest} tour for a salesman to visit certain set of cities\only<-2>{ in China and return to Hefei!}}\only<7->{\alert<7>{Traveling Salesman Problem}\scitep{ABCC2006TTSPACS,LLKS1985TTSPAGTOCO,GP2004TTSPAIV,L2011SGEFTSPIMSS}}% % \item<3-> \only<-6>{I need to transport $n$ items from here to Feixi\only<-3>{ but they are too big to transport them all at once. How can I load them best into my car so that I have to travel back and forth the least times?}}\only<7->{\alert<7>{Bin Packing Problem}\scitep{KSH1995EHFTBPP}}% % \item<4-> \only<-6>{Which setting of $x_1$, $x_2$, $x_3$, and $x_4$ can make $(x_1\lor\lnot x_2 \lor x_3)\land(\lnot x_2\lor\lnot x_3 \lor x_4) \land (\lnot x_1\lor\lnot x_3 \lor \lnot x_4)$ become true\only<-4>{ (or, at least, as \emph{many} of its terms as possible)?}}\only<7->{\alert<7>{Maximum (3-)Satisfiability Problem}\scitep{HS2000SAORFROS,TH2004UAIAEEFSAFSAMS,S1978TCOSP,RMK2000EASP}}% % \item<5-> \only<-6>{I want to build a large factory with $n$ workshops.\only<-5>{ I know the flow of material between each two workshops and now need to choose the locations of the workshops such that the overall running cost incurred by material transportation is \emph{minimized}.}}\only<7->{\alert<7>{Quadratic Assignment Problem}\scitep{MF1999ACOMATSAACFTQAP,GTD1999ACOQAP}}% \end{itemize}% }% % \item<6-> Many optimization problems are \alert<7>{\NPHard}, meaning that finding the best possible solution will usually not be possible in feasible time.% % \item<8-> We use metaheuristic optimization algorithms to give us good approximate solutions within acceptable runtime.% % \item<9-> Examples of such algorithms are\only<-19>{ % Evolutionary Algorithms\scitep{BFM1997EA,CWM2011VOEAFRWA,BFM2000EC1BAAO,BFM2000EC2BAAO,DLJD2000EC,EM1999EC,CDGDMPP1999NIIO,GT2002AIECTAA,WGOEB}\uncover<10->{, % Ant Colony Optimization\scitep{DMC1996ACO,DS2004ACO,GM2002APBATDOP,ZBMD2004MBSFCOACS,WGOEB}\uncover<11->{, % Evolution Strategies\scitep{R1965ES,R1973ES,R1994ES,S1965KYASDEFIDS,S1968EOEZDT1,S1975EUNO,WGOEB}\uncover<12->{, % Differential Evolution\scitep{WGOEB}\uncover<13->{, % Particle Swarm Optimization\scitep{WGOEB}\uncover<14->{, % Estimation of Distribution Algorithms\scitep{PSL2005DE,F2006DE,MMVRCC2006DE,BZSM2006DE,LZ2000DE,MM2005DE,BVPK2006DE,S2010DEFCFOAABKPARS}\uncover<15->{, % CMA-ES\scitep{HOG1995ESAD,HO1996AANMDIESTCMA,HO2001ESCMA,HMK2003RTTCOTDESWCMACE,HK2004ETCESOMTF,H2006TCESACR,AH2005ARCESWIPS,AH2005PEOAALSEA}\uncover<16->{, and % Local Search methods\scitep{HS2005SLSFAA,AL1997LSICO,DBSD2001DOILSA}\uncover<17->{ such as % Simulated Annealing\scitep{SSF2002FCAIFSA,LA1987SATAA,B1987GAASA,JCS2003HC,KGV1983SA,VC1985SA,DPSW1982MCTICO,DPSW1982MCTICO2,P1970AMCMFTASOCTOCOP,WGOEB}\uncover<18->{ or % Tabu Search\scitep{G1989TSPI,G1990TSPII,GL1993TABU,DWH1989TSTATAAATNN,BT1994TABU}\uncover<19->{, % as well as hybrids of local and global search, such as Memetic Algorithms\scitep{M1989MA,M2002MA,MC2003AGITMA,ES2003HWOTMA,HKS2005RAIMA,DM2004MA,RS1994FMA}% }}}}}}}}}}}\only<20->{\dots\ many}% % \item<20-> \alert<20>{Which of them is best (for my problem)?}% \item<21-> \alert<21>{How can I make a good algorithm better (for my problem)?}% % \end{itemize}% % 
\locate{2}{\includegraphics[width=0.6\paperwidth]{\sharedPath/graphics/optimization/tsp/tsp_example/tsp_example}}{0.2}{0.29}% \locate{3}{\includegraphics[width=0.875\paperwidth]{\sharedPath/graphics/optimization/bin_packing/bin_packing_example/bin_packing_example}}{0.0625}{0.495}% \locate{4}{\includegraphics[width=0.55\paperwidth]{\sharedPath/graphics/optimization/sat/sat_example/sat_example}}{0.225}{0.5}% \locate{5}{\includegraphics[width=0.85\paperwidth]{\sharedPath/graphics/optimization/qap/qap_example/qap_example}}{0.075}{0.625}% \locate{6-7}{\includegraphics[width=0.78\paperwidth]{\sharedPath/graphics/complexity/exponential_functions/exponential_functions}}{0.09}{0.55}% % \end{frame}% % \begin{frame}[t]% \frametitle{Algorithm Analysis and Comparison}% \begin{itemize}% \item \alert{Which of the algorithms is best (for my problem)?}% \item<2-> Traditional Approach {\`{a}} la \emph{\inQuotes{QuickSort is better than Bubble Sort because it needs \bigOOf{n \log n} while Bubble Sort needs \bigOOf{n^2} steps to sort $n$ elements in the average case.}}% % \item<3-> Complexity Analysis, Theoretical Bounds of Runtime and Solution Quality% \item<4-> \alert<-9>{Usually not feasible}\uncover<5->{% \begin{itemize}% \item analysis extremely complicated\uncover<6->{ since% \item<6-> algorithms are usually randomized\uncover<7->{ and% \item<7-> have many parameters (e.g., crossover rate, population size)\uncover<8->{ and% \item<8-> \inQuotes{sub-algorithms} (e.g., crossover operator, mutation operator, selection algorithm)% \item<9-> optimization problems also differ in many aspects% \item<10-> theoretical results only available for toy problems and extremely simplified algorithms.% \item<11-> \alert<10>{Currently, not mature enough to be an easy-to-use tool for practitioners}% }}}% \end{itemize}% }% % \item<12-> \alert{Experimental analysis and comparison only practical alternative.}% % \end{itemize}% \end{frame}% % % % \begin{frame}% \frametitle{Performance and Anytime Algorithms}% % \emph{\inQuotes{We use metaheuristic optimization algorithms to give us \alert<3->{good approximate solutions} within \alert<4->{acceptable runtime}.}}% % \uncover<2->{% \begin{itemize}% \item Algorithm performance has two dimensions\scitep{NAFR2010RPBBOB2ES,WCTLTCMY2014BOAAOSFFTTSP}:\uncover<3->{ solution quality\uncover<4->{ and required runtime}}% \item<5-> Anytime Algorithms\scitep{BD1989STDPP2} are optimization methods which maintain an approximate solution at \emph{any time} during their run and iteratively improve this guess.% \item<6-> All metaheuristics are Anytime Algorithms.% \item<7-> Several exact methods like Branch-and-Bound\scitep{LMSK1963AAFTTSP,Z1993TBABACSOTATSP,Z1999TAADFBABACSOTATSP} are Anytime Algorithms.% \item<8-> Consequence: Most optimization algorithms produce approximate solutions of different qualities at different points during their process.% \item<9-> Experiments must capture solution quality and runtime data.% \end{itemize}% }% % \locate{3}{\includegraphics[width=0.55\paperwidth]{\sharedPath/graphics/optimization/performance/performance_dimensions/performance_dimensions_1}}{0.225}{0.542}% \locate{4}{\includegraphics[width=0.55\paperwidth]{\sharedPath/graphics/optimization/performance/performance_dimensions/performance_dimensions_2}}{0.225}{0.542}% \locate{5}{\includegraphics[width=0.55\paperwidth]{\sharedPath/graphics/optimization/performance/performance_dimensions/performance_dimensions}}{0.225}{0.542}% % \end{frame}% % \begin{frame}[t]% \frametitle{Experimental Procedure}% 
\begin{itemize}%
\item In optimization or Machine Learning, the following experimental procedure is often used\uncover<2->{%
\begin{enumerate}%
\item Select a \only<3->{\alert<3>{set of }}benchmark instance\only<3->{\alert<3>{s}}\only<3-7>{:%
\begin{itemize}%
\item multiple instances%
\item<4-> which cover some different problem features%
\item<5-> should be well-known to make results comparable%
\only<-6>{%
\item<6-> e.g., \tspLib\expandafter\scitep{\tspLibReferences} for the TSP has instances with different numbers of cities and geometries%
}%
\only<-7>{%
\item<7-> e.g., \bbob\expandafter\scitep{\bbobReferences} offers different benchmark functions for numerical optimization problems%
}%
\end{itemize}%
}%
\item<8-> Do experiment\only<12->{\alert<12-13>{s}}\only<9-13>{:%
\begin{itemize}%
\item conduct several independent runs of the algorithm for each benchmark instance%
\item<10-> collect algorithm progress information, e.g., as \emph{\inQuotes{runtime bestObjectiveValue}} tuples%
\item<11-> one log file per run, each log file has several such tuples%
\item<12-> repeat for different algorithm parameter settings (e.g., different population sizes of an EA)%
\item<13-> repeat with other algorithms for comparison purposes%
\end{itemize}%
}%
\item<14-> Evaluate the gathered data\only<15->{:\alert<22>{%
\begin{itemize}%
\item draw diagrams of progress of solution quality over time%
\item<16-> draw diagrams of advanced statistical parameters such as ECDF\scitep{HAFR2012RPBBOBES,HS1998ELVAPAR,TH2004UAIAEEFSAFSAMS,WCTLTCMY2014BOAAOSFFTTSP}\only<17->{ and ERT\scitep{HAFR2012RPBBOBES,WCTLTCMY2014BOAAOSFFTTSP}} (over time)%
\item<18-> use statistical tests to compare results (at different points during the runs)%
\item<19-> analyze the impact of benchmark features and algorithm parameters on the above%
\end{itemize}%
}}%
%
\item<20-> Draw conclusions about algorithm performance and parameter settings%
\item<21-> But this is all \emph{very} cumbersome, involves much work and much data\dots
\end{enumerate}%
}%
\item<22-> The \optimizationBenchmarking\ Evaluator can automatize much of this work%
\end{itemize}%
%
\locateWithCaption{6}{%
\includegraphics[width=0.925\paperwidth]{\sharedPath/graphics/optimization/tsp/tspLib_features/tspLib_features_symmetric}%
}{%
The relative amounts of the 110 symmetric instances of \tspLib\ according to their features (the 10 asymmetric instances are not plotted).%
}{0.0375}{0.51}{0.925}%
%
\locateWithCaption{7}{%
\pgfuseimage{bbob_features}%
}{%
The relative amounts of \bbob\ benchmark functions according to their features.%
}{0.0375}{0.51}{0.925}%
%
\locateWithCaption{10}{%
\includegraphics[width=0.8\paperwidth]{\sharedPath/graphics/optimization/tsp/tspSuite_logfile_example/tspSuite_logfile_example}%
}{%
Example for data collected in a log file by \tspSuite\scitep{\tspSuiteReferences}.%
}{0.0375}{0.53}{0.925}%
%
\locateWithCaption{15-17}{ \strut%
\includegraphics[width=0.28\paperwidth]{\sharedPath/graphics/optimization/performance/progress_example/progress_example}%
\uncover<16->{%
\strut\hfill\strut%
\includegraphics[width=0.28\paperwidth]{\sharedPath/graphics/optimization/performance/ecdf_example/ecdf_example}%
\uncover<17->{%
\strut\hfill\strut%
\includegraphics[width=0.28\paperwidth]{\sharedPath/graphics/optimization/performance/ert_example/ert_example}%
}}\strut%
}{%
Examples for progress\only<16>{ and ECDF}\only<17->{, ECDF, and ERT} diagrams for different algorithms (signified by different colors) over different sub-sets of the \tspLib\ data.%
}{0.0375}{0.525}{0.925}%
%
\end{frame}%
%
%
\chapter{Figures, Tables, Equations, Algorithms, etc}\label{chap:design}
%Your design chapter. I probably should include some examples on inserting figures, tables, mathematical equations, etc.
(This is supposed to be the design or methodology chapter. Instead, we include examples on inserting figures, tables, mathematical equations\ldots i.e.\ things that you might want to include in your thesis.)

\section{Inserting Figures}\label{sec:figure}
You can draw diagrams with special \LaTeX\ commands, but this may take some extra time to learn. I've had some forays into the \texttt{pgf} and \texttt{tikz} packages and must say I quite like the results; but as I said, they take time to learn. If you want a faster solution, you can draw your diagrams using other applications, and save them as graphic files (EPS, PNG, JPG, PDF). \LaTeX{} requires EPS (encapsulated postscript) graphic files when generating DVI output, and PNG, JPG or PDF when generating PDF output. For exporting to EPS, try \url{http://www.cloudconvert.com}. It's like a Swiss knife for converting from almost any format, to almost any format.

Do note that IPS \textbf{discourages} the use of colours in your thesis, including diagrams and figures. Photographs and colour plates are exceptions to this rule: see Section~\ref{sec:plate}.

Here's how to insert a picture with the filename \verb|pythag.eps| or \verb|pythag.png|. I'm going to display it here with 5cm width, and the caption ``Pythagoras' Theorem''.

\begin{figure}[hbt!]
\begin{lstlisting}
\begin{figure}[hbt!]\centering
\includegraphics[width=50mm]{pythag}
\caption{Pythagoras' Theorem}\label{fig:pythagoras}
\end{figure}
\end{lstlisting}
\caption{Including a Graphics File}\label{fig:lst:graphics}
\end{figure}

The result would be:

\begin{figure}[hbt!]\centering
\includegraphics[width=50mm]{pythag}
\caption{Pythagoras' Theorem}
\label{fig:pythagoras}
\end{figure}

Don't specify the extension of the graphic file. The template will automatically look for the EPS or the PNG (or otherwise) versions, depending on whether \verb|latex| or \verb|pdflatex| was used. The \texttt{figure} environment will also ensure that an entry is inserted into the \emph{List of Figures} automatically -- including the figure numbering, caption and page number. In addition, the width of the included graphics can also be specified as a percentage of the text width, e.g.~ \verb|width=.2\textwidth| would cause the graphics to occupy 20\% of the text width.

Notice that I inserted a \verb|\label| just after the \verb|\caption|. This can be used for referencing the figure number, like this: \\
\verb|Figure \ref{fig:pythagoras}| $\to$ Figure \ref{fig:pythagoras}

This works the same for chapters, sections, tables, equations too. In \verb|chap-intro.tex|, I labelled the Introduction chapter with \verb|\label{chap:intro}|. I also labelled the section on inserting figures, \verb|\label{sec:figure}|. So now I can do \\
\verb|Chapter \ref{chap:intro}| $\to$ Chapter \ref{chap:intro} \\
\verb|section \ref{sec:figure}| $\to$ section \ref{sec:figure}

Every time the numbering of the heading changes, the reference will change automatically as well. \textbf{This is another advantage of using \LaTeX{}}: you do not need to manually update the reference counters (nor the Table of Contents, List of Figures and Tables) whenever you add or remove figures, tables, sections or chapters.
You might also want to try out \texttt{JpgfDraw}: it is a vector graphics and drawing application (requiring Java), and can export to \LaTeX{} code which you can paste into your \LaTeX{} source. \texttt{JpgfDraw} is available from \url{http://theoval.cmp.uea.ac.uk/~nlct/jpgfdraw/index.html}. \section{How Do I Do Subfigures?} Here's an example on how to do subfigures (and similarly subtables): \begin{figure}[hbt!] \begin{lstlisting} \begin{figure}[hbt!] \begin{minipage}{.49\textwidth} \centering \subfloat[First caption]{\includegraphics[width=3cm]{pythag}} \label{fig:sub1} \end{minipage} \hfill \begin{minipage}{.49\textwidth} \subfloat[Second caption]{\includegraphics[width=0.8\textwidth]{USMScience}}\label{fig:sub2} \end{minipage} \caption{This is the main caption of the figure.} \label{fig:main} \end{figure} \end{lstlisting} \caption{Creating subfigures within figures} \end{figure} \begin{figure}[hbt!] \begin{minipage}{.49\textwidth} \centering \subfloat[First caption]{\includegraphics[width=3cm]{pythag}} \label{fig:sub1} \end{minipage} \hfill \begin{minipage}{.49\textwidth} \subfloat[Second caption]{\includegraphics[width=0.8\textwidth]{USMScience}}\label{fig:sub2} \end{minipage} \caption{This is the main caption of the figure.} \label{fig:main} \end{figure} \section{Inserting Plates}\label{sec:plate} Colour photographs are now regarded as \emph{plates}. They must be listed in the \emph{List of Plates} instead of the List of Figures, and should be printed in colour on glossy photo paper \citep{ips:thesis:guideline:2007}. The \texttt{usmthesis} document class defines a new \texttt{plate} environment, as well as a corresponding \verb|\listofplates| command. (The \verb|\listofplates| command is already placed in the sample template file \verb|usmthesis.tex|.) In short, all you need to do to insert a photograph or plate (as a graphics file \verb|USMScience.{eps,png,jpg}|) is shown in Figure~\ref{fig:lst:plate}, and you will then get Plate~\ref{plate:ppsk:usm} as the result. \begin{figure}[hbt!] \begin{lstlisting} \begin{plate}[hbt!]\centering \includegraphics[width=.9\textwidth]{USMScience} \caption{School of Computer Sciences, USM}\label{plate:ppsk:usm} \end{plate} \end{lstlisting} \caption{Inserting a Plate}\label{fig:lst:plate} \end{figure} \begin{plate}[hbt!]\centering \includegraphics[width=.9\textwidth]{USMScience} \caption{School of Computer Sciences, USM}\label{plate:ppsk:usm} \end{plate} \section{Inserting Tables} Typesetting tables can be a little troublesome especially with complex layouts. Look up \citep{roberts} to learn about some tips, or you can use the \textrm{LaTable} program (\url{http://www.g32.org/latable/}) to help you. If using \textrm{LaTable}, when you're done designing the table, copy the whole table as \LaTeX\ code, and paste it in your source file. (You may add additional formatting commands, like bold, italics, etc.) If this is going to be a numbered table, remember to surround it with \verb|\begin{table}| and \verb|\end{table}|, and give it a caption, like this: \begin{figure}[hbt!] 
\begin{lstlisting} \begin{table}[hbt!]\centering \begin{tabular}{| l | c || r |} \hline \textbf{Name} & \textbf{Category} & \textbf{Quantity} \\ \hline\hline Apple & Fruit & 10 \\ \hline Cucumber & Vegetable & 25 \\ \hline Daisy & Flower & 5 \\ \hline \end{tabular} \caption{Sample Table Only} \label{table:sample} \end{table} \end{lstlisting} \caption{Typesetting Tables}\label{fig:lst:table} \end{figure} \begin{table}[hbt!]\centering \begin{tabular}{| l | c || r |} \hline \textbf{Name} & \textbf{Category} & \textbf{Quantity} \\ \hline\hline Apple & Fruit & 10 \\ \hline Cucumber & Vegetable & 25 \\ \hline Daisy & Flower & 5 \\ \hline \end{tabular} \caption{Sample Table Only} \label{table:sample} \end{table} Note also that \verb|usmthesis| is configured such that captions for figures are placed \emph{below} the figures, and captions for tables are placed \emph{above} them, in accordance with the formatting guidelines. Many of us would have had massive headaches about lining up decimal places in table columns (as mentioned in the IPS guidelines) if not for this tip from \citep[pp.~274--276]{latex:companion}. This method uses the \verb|dcolumn| package (already loaded by \verb|usmthesis.cls|). Instead of using \verb|l,c| or \verb|r| as the column type in the \verb|tabular| declaration, use\\ \texttt{D\{\textit{input sep}\}\{\textit{output sep}\}\{\textit{decimal places}\}}. \begin{figure}[htb!] \begin{lstlisting} \begin{table}[htb!]\centering \begin{tabular}{| c | D{.}{.}{2} |} \hline Item & \multicolumn{1}{c|}{Reading}\\\hline A & 1.11\\\hline B & 3.99\\\hline C & 2.27\\\hline \end{tabular} \caption{A table with decimal data} \end{table} \end{lstlisting} \caption{Aligning decimal data in tables}\label{fig:align:decimal} \end{figure} The \LaTeX\ code in Figure~\ref{fig:align:decimal} will give you Table~\ref{tab:align:decimal}. \begin{table}[htb!]\centering \begin{tabular}{| c | D{.}{.}{3} |} \hline Item & \multicolumn{1}{c|}{Reading}\\\hline A & 1.11\\\hline B & 3.999\\\hline C & 22.2\\\hline \end{tabular} \caption{A table with decimal data}\label{tab:align:decimal} \end{table} Without using \verb|dcolumn|, you'd get something like this: \begin{table}[htb!]\centering \begin{tabular}{| c | r |} \hline Item & \multicolumn{1}{c|}{Reading}\\\hline A & 1.11\\\hline B & 3.999\\\hline C & 22.2\\\hline \end{tabular} \caption{A table with decimal data (mis-aligned)} \end{table} \section{Full-paged, Sideways Figures and Tables} To make a figure appear on a landscape, full-page layout, put your \verb|\includegraphics| command in a \verb|sidewaysfigure| environment (Figure~\ref{fig:lst:sidewayfigure}). \begin{figure}[htb!] \begin{lstlisting} \begin{sidewaysfigure}\centering \includegraphics[width=\textheight]{latex-win-comp} \caption{A full-page, sideways figure}\label{fig:sidewaysfig} \end{sidewaysfigure} \end{lstlisting} \caption{Including a sideway, full-page graphic}\label{fig:lst:sidewayfigure} \end{figure} \begin{sidewaysfigure} \centering\includegraphics[width=\textheight]{latex-win-comp} \caption{A full-page, sideways figure}\label{fig:sidewaysfig} \end{sidewaysfigure} The resultant figure (Figure~\ref{fig:sidewaysfig}) should appear on the next page. For a sideways table, use the \verb|sidewaystable| environment instead around your usual \verb|tabular| material. \section{Mathematical Equations} %Oooh I love this one. After all, maths is the reason why Donald Knuth created \TeX{}! 
It would be quite impossible for me to list all the commands, so I'll just give some examples here, and you're better off looking at the various online tutorials like \cite{roberts}. And TeXnicCenter certainly makes things much easier.

Typesetting mathematical material is one of, if not \emph{the}, strongest capabilities of \LaTeX. After all, that was Knuth's main motivation for creating \TeX{}. As it is impossible to enumerate all possible mathematically-related commands and macros here, we will just give some examples. The reader is directed to the many well-written online tutorials, such as \citep{roberts}, for more elaborate examples. TeXnicCenter also provides many shortcut buttons for inserting mathematical symbols.

\begin{figure}[htb!]
\begin{lstlisting}
\begin{equation}\label{eq:pythagoras}
z^2 = x^2 + y^2
\end{equation}
\begin{equation}\label{eq:golden:ratio}
\phi = \frac{1}{2} (1 + \sqrt{5})
\end{equation}
\begin{equation}\label{eq:golden:ratio:fibonacci}
\phi = 1 + \sum ^ {\infty} _ {n=1} \frac{ (-1) ^ {n+1} }{ F_n F_{n+1} }
\end{equation}
Equation~\ref{eq:pythagoras} is the Pythagoras Theorem. \eqref{eq:golden:ratio} gives the golden ratio $\phi$, and \eqref{eq:golden:ratio:fibonacci} relates it to the Fibonacci series.
\end{lstlisting}
\caption{Typesetting Mathematical Equations}\label{fig:lst:equation}
\end{figure}

\begin{equation}\label{eq:pythagoras}
z^2 = x^2 + y^2
\end{equation}
\begin{equation}\label{eq:golden:ratio}
\phi = \frac{1}{2} (1 + \sqrt{5})
\end{equation}
\begin{equation}\label{eq:golden:ratio:fibonacci}
\phi = 1 + \sum ^ {\infty} _ {n=1} \frac{ (-1) ^ {n+1} }{ F_n F_{n+1} }
\end{equation}

Equation~\ref{eq:pythagoras} is the Pythagoras Theorem. \eqref{eq:golden:ratio} gives the golden ratio $\phi$, and \eqref{eq:golden:ratio:fibonacci} relates it to the Fibonacci series. The \LaTeX\ code to generate the above mathematical material is shown in Figure~\ref{fig:lst:equation}. As you can see, references to equations can be achieved with either \verb|\ref| or \verb|\eqref|.

A disclaimer: if you think the mathematical equations don't look as great as all those \LaTeX\ advocates make them out to be, that's because IPS requires Times to be used and the current offerings of free \LaTeX\ math fonts for Times don't look great. It would've been a different picture if we used Computer Modern.

\section{Acronyms}
\acresetall

If you have a list of acronyms or symbols, edit the file \verb|loa.tex| as in Figure~\ref{fig:acronym}.

\begin{figure}[hbt!]
\begin{lstlisting}
\begin{acronym}[UTMK]
%% replace 'UTMK' with the longest acronym in your list
\acro{IPS}{Institut Pengajian Siswazah}
\acro{PPSK}{Pusat Pengajian Sains Komputer}
\acro{USM}{Universiti Sains Malaysia}
\acro{UTMK}{Unit Terjemahan Melalui Komputer}
\end{acronym}
\end{lstlisting}
\caption{The template \texttt{loa.tex} for acronyms}\label{fig:acronym}
\end{figure}

You can also use this acronym list to expand an acronym the first time you mention it in your text. For example, the first time you use \verb|\ac{USM}|, `\ac{USM}' will be the output (without the quotes). After that, all calls to \verb|\ac{USM}| will give `\ac{USM}' (without the quotes). For more information, see the documentation for the \texttt{acronym} package.

% Please feel free to use whatever package you like for typesetting algorithms!
\section{Typesetting Algorithms}

As computer scientists, it is quite common to include algorithms and/or pseudocode.
There are a number of different packages available, but unfortunately they tend not to work well together! I'm using \texttt{algorithmicx} here. \begin{figure}[hbt!] \begin{lstlisting} \begin{algorithm}[hbt!] \begin{algorithmic} \Require $n \geq 0$ \Ensure $y = x^n$ \State $y \Leftarrow 1$ \State $X \Leftarrow x$ \State $N \Leftarrow n$ \While{$N \neq 0$} \If{$N$ is even} \State $X \Leftarrow X \times X$ \State $N \Leftarrow \frac{N}{2} $ \Comment{This is a comment} \ElsIf{$N$ is odd} \State $y \Leftarrow y \times X$ \State $N \Leftarrow N - 1$ \EndIf \EndWhile \end{algorithmic} \caption{Computing $x^n, n > 0$} \end{algorithm} \end{lstlisting} \caption{Typesetting Algorithms}\label{fig:lst:algo} \end{figure} \begin{algorithm}[hbt!] \begin{algorithmic} \Require $n \geq 0$ \Ensure $y = x^n$ \State $y \Leftarrow 1$ \State $X \Leftarrow x$ \State $N \Leftarrow n$ \While{$N \neq 0$} \If{$N$ is even} \State $X \Leftarrow X \times X$ \State $N \Leftarrow \frac{N}{2} $ \Comment{This is a comment} \ElsIf{$N$ is odd} \State $y \Leftarrow y \times X$ \State $N \Leftarrow N - 1$ \EndIf \EndWhile \end{algorithmic} \caption{Computing $x^n, n > 0$} \end{algorithm} \section{Program Listings} You may have noticed that I used the \verb|lstlisting| environment to typeset some of the \LaTeX{} examples -- with pretty-printing\footnote{Whether you agree that it \emph{is} pretty is another story altogether.}, too, including automatic line-breaking. For more information, see the documentation for the \verb|listings| package: it's available online at \url{http://www.texdoc.net/pkg/listings}. Just to give some simple example here. For example, to typeset a ``Hello World'' Java program with syntax highlighting, you can use the following code: \begin{figure}[hbt!] \begin{lstlisting}[escapechar=:,language={}] \lstset{basicstyle=\small\ttfamily, language=Java, breaklines=true, columns=fullflexible, tabsize=2} \begin{lstlisting} public class HelloWorld { public static void main( String arg[] ) { for (int i = 0; i < 10; i++) { System.out.println( "Hello World!" + i); } } } \end:\{:lstlisting:\}: \end{lstlisting} \caption{Typesetting a Java program listing}\label{fig:lst:syntax} \end{figure} \lstset{keywordstyle={\bfseries}} \begin{figure}[hbt!] \lstset{basicstyle=\small\ttfamily, language=Java, breaklines=true, columns=fullflexible, framesep=10pt, xleftmargin=16pt, tabsize=2} \begin{lstlisting} public class HelloWorld { public static void main( String arg[] ) { for (int i = 0; i < 10; i++) { System.out.println( "Hello World!" + i); } } } \end{lstlisting} \caption{A pretty-printed Java program listing with syntax highlighting} \end{figure} If you want to turn off the syntax highlighting, set \verb|language={}|. (See the \verb|listings| documentation for a list of programming languages for which syntax highlighting is supported.) You can also change the \verb|basicstyle| value to get different effects: e.g. a different font family, size or text formatting. Here's another example for a C program: \begin{figure}[hbt!] \begin{lstlisting}[escapechar={:}, texcl=false,language={}] \lstset{basicstyle=\sffamily, language=C, breaklines=true, columns=fullflexible, tabsize=2} \begin{lstlisting} int main() { int c = 0; c = c + 1; printf( "%d", c ); return 0; } \end:\{:lstlisting:\}: \end{lstlisting} \caption{Typesetting a C program listing}\label{fig:lst:c} \end{figure} \begin{figure}[hbt!] 
\lstset{basicstyle=\sffamily, language=C, breaklines=true, columns=fullflexible, framesep=10pt, xleftmargin=.4\textwidth, tabsize=4} \begin{lstlisting} int main() { int c = 0; c = c + 1; printf( "%d", c ); return 0; } \end{lstlisting} \caption{A pretty-printed C program listing with syntax highlighting} \end{figure} And here is the same C program listing \emph{without} syntax highlighting (by setting \verb|language={}|): \begin{figure}[hbt!] \lstset{basicstyle=\sffamily, language={}, breaklines=true, columns=fullflexible, framesep=10pt, xleftmargin=.4\textwidth, tabsize=4} \begin{lstlisting} int main() { int c = 0; c = c + 1; printf( "%d", c ); return 0; } \end{lstlisting} \caption{A C program listing without syntax highlighting} \end{figure}
\documentclass[12pt, titlepage]{article} \usepackage{booktabs} \usepackage{tabularx} \usepackage{hyperref} \hypersetup{ colorlinks, citecolor=black, filecolor=black, linkcolor=red, urlcolor=blue } \usepackage[usenames, dvipsnames]{color} \title{SE 3XA3: Test Plan\\GrateBox} \author{Team 8, Grate \\ Kelvin Lin (linkk4) \\ Eric Chaput (chaputem) \\ Jin Liu (liu456) } \usepackage[round]{natbib} \date{\today} \input{../Comments} \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \listoftables \listoffigures \begin{table}[bph] \caption{\bf Revision History} \begin{tabularx}{\textwidth}{p{3cm}p{2cm}X} \toprule {\bf Date} & {\bf Version} & {\bf Notes}\\ \midrule Oct 24 & 1.0 & Imported Template and completed survey's and non-functional requirements testing\\ Oct 25 & 1.1 & Implemented non-functional requirements testing\\ Oct 30 & 1.2 & First test case additions\\ Oct 31 & 1.3 & Second test case additions\\ Oct 31 & 1.4 & Final edits\\ Dec 07 & 1.5 & Final edits for Rev 1\\ \bottomrule \end{tabularx} \end{table} \newpage \pagenumbering{arabic} \section{General Information} \subsection{Purpose} The purpose of this project's testing is to affirm that all requirements outlined in the Requirements Specifications document have been met and that the GrateBox software was implemented properly. \subsection{Scope} This test plan presents a basis for the testing of software functionality. It has the objectives of proving that the GrateBox project has met all the requirements outlined in the Requirements Specification document and of attaching metrics to those requirements for the sake of quantifying them. It also serves as a means to arrange testing activities. It will present what is to be tested and will act as an outline for testing methods and tools to be utilized. \subsection{Acronyms, Abbreviations, and Symbols} \begin{table}[hbp] \caption{\textbf{Table of Abbreviations}} \label{Table} \begin{tabularx}{\textwidth}{p{3cm}X} \toprule \textbf{Abbreviation} & \textbf{Definition} \\ \midrule PoC & Proof of Concept\\ SRS & Software Requirements Specification\\ GC & Genetic Cars\\ GUI & Graphical User Interface\\ GA & Genetic Algorithms\\ \bottomrule \end{tabularx} \end{table} \begin{table}[!htbp] \caption{\textbf{Table of Definitions}} \label{Table} \begin{tabularx}{\textwidth}{p{3cm}X} \toprule \textbf{Term} & \textbf{Definition}\\ \midrule Structural Testing & Testing derived from the internal structure of the software\\ Functional Testing & Testing derived from a description of how the program functions (most often drawn from requirements)\\ Dynamic Testing & Testing which includes having test cases run during execution\\ Static Testing & Testing that does not involve program execution\\ Manual Testing & Testing conducted by people\\ Automated Testing & Testing that is run automatically by software\\ A majority of tested users & Defined as 70 percent of the users tested\\ \bottomrule \end{tabularx} \end{table} \subsection{Overview of Document} The GrateBox project will re-implement the code of the open source project BoxCar2D. The software's requirements are outlined in the Requirements Specifications document. \section{Plan} \subsection{Software Description} The software will allow users to model and learn about genetic algorithms in a fun and educational way. The implementation will be completed in JavaScript. \subsection{Test Team} The individuals responsible for the testing of this project are Kelvin Lin, Eric Chaput, and Jin Liu. 
Jin Liu is in charge of testing for functional requirements. Eric Chaput is in charge of testing for non-functional requirements and for surveying representative users for feedback and testing purposes. Kelvin Lin, as team leader, shall oversee time management for testing but will otherwise take no direct role in the testing process.

\subsection{Automated Testing Approach}

QUnit will be the primary tool used for testing this project. It will be used to automate unit testing. JavaScript also supports the principle of self-testing, so many automated tests written in JavaScript itself will also be viable.

\subsection{Testing Tools}

QUnit will be the primary tool used for testing this project. It will be used to automate unit testing. All group members are familiar with QUnit's Java equivalent, JUnit, and so minimal instruction in QUnit will be necessary. Testing will also be conducted in JavaScript itself, as there are many testing methods native to the language. Expected code coverage is 70 percent for this project. Team members are expected to know how to write and execute simple QUnit commands such as assert.equal. Firefox, Chrome, Opera, and Internet Explorer will also be tested for browser compatibility.

\subsection{Testing Schedule}

See Gantt Chart at \href{https://gitlab.cas.mcmaster.ca/linkk4/GrateBox/tree/master/ProjectSchedule}{the GitLab Repository}.

\section{System Test Description}

\subsection{Tests for Functional Requirements}

\subsubsection{Genetic Algorithm}

\begin{enumerate}
%1.0 = Mutations
\item{GA-1.1\\}
Type: Structural, Dynamic, Automated

Initial State: Cars with a known standardized chromosome.

Input: The chromosome of the cars with a mutation rate of 0\%.

Output: The original chromosome of the car.

How test will be performed: Automated testing with QUnit will be used in order to create cars with a standardized known chromosome. The unit test will send the car model through the GA module's mutate function, passing a mutation rate of 0\%, and it will assert that the chromosome going into the module is the same chromosome coming out of the module. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-1.2\\}
Type: Structural, Dynamic, Automated

Initial State: Cars with a known standardized chromosome.

Input: The chromosome of the cars with a mutation rate of 100\%.

Output: A chromosome that is completely modified.

How test will be performed: Automated testing with QUnit will be used in order to create cars with a standardized known chromosome. The unit test will send the car model through the GA module's mutate function, passing a mutation rate of 100\%, and it will assert that every element in the output chromosome is different from the corresponding element of the input chromosome. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-1.3\\}
Type: Structural, Dynamic, Automated

Initial State: Cars with a known standardized chromosome.

Input: The chromosome of the cars with a mutation rate of -1\%.

Output: The module should produce an error.

How test will be performed: Automated testing with QUnit will be used in order to create cars with a standardized known chromosome. The unit test will send the car model through the GA module's mutate function, passing a mutation rate of -1\%, and it will assert that an error is produced.
This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

%Crossover
\item{GA-2.1\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top 3 cars should be used.

Output: An array of cars whose first 3 elements are the first three cars of the input array, and whose length is the same as that of the input array.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's crossover function, passing a parameter of 3, and it will assert that the length of the output array is the same as the length of the input array and that the first 3 cars of the output array are the same as the first 3 cars of the input array. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-2.2\\}
Type: Structural, Dynamic, Manual

Initial State: An array of 3 cars with known chromosomes is created.

Input: An array of 3 cars with known chromosomes, and an integer parameter that specifies that the top 2 cars should be used.

Output: An array of cars with the first two cars of the input array, and a third car that is a derivative of the first two cars.

How test will be performed: This test will be manually tested by running a test driver that calls the crossover function. The output will be sent to a text file, where a tester will manually confirm that the chromosomes of the third car are indeed swapped. The tester will also verify that the output array has 3 elements. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-2.3\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top 1 car should be used.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's crossover function, passing a parameter of 1, and it will assert that an error is produced. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-2.4\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that 0 cars should be used.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's crossover function, passing a parameter of 0, and it will assert that an error is produced. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

\item{GA-2.5\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length, and known chromosomes is created.
Input: An array of cars of known length with known chromosomes, and an integer that specifies that -1 cars should be used.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's crossover function, passing a parameter of -1, and it will assert that an error is produced. This test will be repeated a predetermined number of times in order to increase the confidence in the correctness of the module.

%Selection
\item{GA-3.1\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top 3 cars should be selected.

Output: An array of 3 cars with the highest fitness values.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's selection function, passing a parameter of 3. The unit test will also search for the top three cars using an external search module. The unit test will compare the two values and assert that they are the same.

\item{GA-3.2\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length \textit{n}, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top \textit{n}+1 cars should be selected.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's selection function, passing a parameter of \textit{n}+1. The unit test will assert that an error is thrown.

\item{GA-3.3\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length \textit{n}, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top 0 cars should be selected.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's selection function, passing a parameter of 0. The unit test will assert that an error is thrown.

\item{GA-3.4\\}
Type: Structural, Dynamic, Automated

Initial State: An array of cars with known length \textit{n}, and known chromosomes is created.

Input: An array of cars of known length with known chromosomes, and an integer that specifies that the top -1 cars should be selected.

Output: An error.

How test will be performed: Automated testing with QUnit will be used in order to create an array of cars with a standardized known chromosome. The unit test will send the cars array through the GA module's selection function, passing a parameter of -1. The unit test will assert that an error is thrown.
\end{enumerate}

\subsubsection{Car Model}

\begin{enumerate}
\item{CM-1.1\\}
Type: Structural, Dynamic, Automated

Initial State: No car generated.

Input: Pre determined random seed and valid car generation module run.

\textcolor{RoyalPurple}{Output: Set of 2 wheels for a car.}

\textcolor{RoyalPurple}{How test will be performed: Car will be randomly created.
Number of wheels will be found to be 2 for a successful test. Note that the output should be the same number of wheels as there are wheel radii as in 1.2.} \item{CM-1.2\\} Type: Structural, Dynamic, Automated Initial State: No car generated. Input: Pre determined random seed and valid car generation module run. Output: Set of wheel radius for a car. \textcolor{RoyalPurple}{How test will be performed: Car will be randomly created. All wheel radii will be found to be positive numbers within a certain range to be determined at a later date. Note that the output should be the same number of radii as there are wheels as in 1.1.} \item{CM-1.3\\} Type: Structural, Dynamic, Automated Initial State: No car generated. Input: Pre determined random seed and valid car generation module run. \textcolor{RoyalPurple}{Output: Set of eight vertex positions for a car.} \textcolor{RoyalPurple}{How test will be performed: Car will be randomly created. All vertex positions will be found to be positions in 2-D space within a certain area.} \item{CM-1.4\\} Type: Structural, Dynamic, Automated Initial State: No car generated Input: Pre determined random seed and valid car generation module run. \textcolor{RoyalPurple}{Output: Set of eight vector angles for a car.} \textcolor{RoyalPurple}{How test will be performed: Car will be randomly created. All vertex angles will be found to be positive values within a certain range (within 2*pi)} \item{CM-1.5\\} Type: Structural, Dynamic, Automated Initial State: No car generated Input: Pre determined random seed and faulty car generation module run for wheel number. Output: Error message displayed How test will be performed: Car will be randomly created. Error message should occur if test successful. \item{CM-1.6\\} Type: Structural, Dynamic, Automated Initial State: No car generated Input: Pre determined random seed and faulty car generation module run for wheel radiuses. Output: Error message displayed How test will be performed: Car will be randomly created. Error message should occur if test successful. \item{CM-1.7\\} Type: Structural, Dynamic, Automated Initial State: No car generated Input: Pre determined random seed and faulty car generation module run for vector angles. Output: Error message displayed How test will be performed: Car will be randomly created. Error message should occur if test successful. \item{CM-1.8\\} Type: Structural, Dynamic, Automated Initial State: No car generated \textcolor{RoyalPurple}{Input: Pre determined random seed and faulty car generation module run for vertex number.} Output: Error message displayed \textcolor{RoyalPurple}{How test will be performed: Car will be randomly created. Error message should occur if test successful.} \item{CM-2\\} Type: Structural, Dynamic, Manual Initial State: Car generated with values created as in CM-1. Input: Command to create car chromosome. Output: Text file containing all car values in format for car chromosome. How test will be performed: Given car will be run through chromosome script. Output text file will be manually examined by testing team to determine validity. Chromosome will be successfully created if resulting text file contains the five corresponding arrays for a car and a mass value. \end{enumerate} \subsubsection{Graphics} \begin{enumerate} \item{GR-1.1\\} Type: Structural, Dynamic, Manual Initial State: Nothing displayed on screen. Input: Correct car created from car model. (i.e. a car object) Output: Car created in graphics pane. Car's dimensions, values, etc. 
relative to numerical values given.

How test will be performed: Car values will be entered into graphics modules. Graphics modules will create a car given these values. Test is successful if generated car appears as expected given numerical values (this is manually predetermined). Cars are also compared to those in the original BoxCar-2D application this project is based on.

\item{GR-1.2\\}
Type: Structural, Dynamic, Manual

Initial State: Nothing displayed on screen (possibly road as defined in GR-2).

Input: Incorrect car generated from car model (i.e. a car object where values are unacceptable, for example 9 vector magnitudes).

Output: Error message.

How test will be performed: Car values will be entered into graphics modules. Graphics modules will create a car given these values. Test is successful if error message displayed.

\item{GR-1.3\\}
Type: Structural, Dynamic, Manual

Initial State: Nothing displayed on screen (possibly road as defined in GR-2).

Input: Car generated from car model with invalid values (i.e. a negative vector magnitude).

Output: Error message.

How test will be performed: Car values will be entered into graphics modules. Graphics modules will create a car given these values. Test is successful if error message displayed.

\item{GR-2.1\\}
Type: Structural, Dynamic, Manual

Initial State: Nothing displayed on screen.

Input: Valid road algorithm.

Output: A graphical representation of a road that is generated by the algorithm.

How test will be performed: Road algorithm fed into graphical creation module. Generated road created by this algorithm. Test successful if given road corresponds to algorithm.

\item{GR-2.2\\}
Type: Structural, Dynamic, Manual

Initial State: Nothing displayed on screen.

Input: Given road algorithm invalid.

Output: Error message.

How test will be performed: Road algorithm fed into graphical creation module. Test successful if no road generated and error message displayed.
\end{enumerate}

\subsubsection{Fitness and Score}

\begin{enumerate}
\item{FI-1\\}
Type: Structural, Static, Automatic

Initial State: Program installed onto system and launched or open in a web browser. Generation of cars entered into program and simulation run to determine fitness.

Input: Series of car locations from simulation (x,y coordinates as described in PoC demonstration).

Output: Final fitness values that correspond to those coordinates and car seeds.

How test will be performed: Fitness values from corresponding coordinates and car speeds from pre determined cars will be calculated outside of the program. The program will then be run with these pre determined cars to determine the accuracy of these values.

\item{FI-2\\}
Type: Structural, Static, Automatic

Initial State: Program installed onto system and launched or open in a web browser. Generation of cars entered into program and simulation run to determine fitness.

Input: Series of fitness values from pre determined cars (See FI-1).

Output: Determination of highest fitness value from fitness values and display of said value.

How test will be performed: Predetermined list of fitness values will be entered into the program. From these values the largest will be determined (this value will have been calculated separately and entered into the unit test prior to this). If the value is the same the test is considered a success. A manual portion to this test will confirm that the value displays properly in the GUI.
\end{enumerate}

\subsubsection{Other GUI elements}

\begin{enumerate}
\item{GU-1\\}
Type: Structural, Dynamic, Manual

Initial State: Program installed onto system and launched or open in a web browser. Generation of cars entered into program and simulation running to determine fitness.

Input: Running of simulation.

Output: Movement of health bars in health bar GUI element relative to health of respective cars.

How test will be performed: Fitness determining simulation will be run. Health bars and how they move relative to respective cars will be observed. For this test specifically, a method will be written to display the numerical health values next to the health bars to determine the accuracy of these. The test will be considered a success if health bars graphically correspond to the numerical values of the respective cars.

\item{GU-2\\}
Type: Structural, Dynamic, Automatic

Initial State: Program installed onto system and launched or open in a web browser. Generation of cars entered into program and simulation run to determine fitness.

Input: Running of simulation.

Output: Text file that contains numerical values with labels that will be displayed to the user (generation, cars alive, distance, height).

How test will be performed: A text file will be created based on the expected values of the generation, cars alive, distance, and height values after one minute with a particular random seed. This same seed will then be used in the program and the text of the above will be written to a text file after one minute of simulation. These text files will then be compared for testing purposes (i.e. are the values we expected the values we received). This will test for both the accuracy of these values and the accurate display of the values. The test will be considered a success if actual values match up 95 percent with what is expected.
\end{enumerate}

\subsection{Tests for Nonfunctional Requirements}

\subsubsection{Look and Feel}

\begin{enumerate}
\item{LF-1\\}
Type: Structural, Static, Manual

Initial State: Program installed onto system and launched or open in a web browser.

Input/Condition: Users asked to rate the visual aesthetic of the program.

Output/Result: A majority of tested users shall agree that the visual aesthetic of the program is favourable.

How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to explore the program, its functions, and its outputs. This sample of users will then be asked to fill out a survey (see section 7.2) asking for their input. Test results shall be determined from those responses (i.e. if a majority of representative users rated the visual aesthetic of the program favourably then this test would be a success).

\item{LF-2\\}
Type: Structural, Static, Manual

Initial State: Program installed onto system and launched or open in a web browser.

Input: Users asked to rate the style of the program.

Output: A majority of tested users shall agree that the style of the program is favourable.

How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to explore the program, its functions, and its outputs. This sample of users will then be asked to fill out a survey (see section 7.2) asking for their input. Test results shall be determined from those responses (i.e. if a majority of representative users rated the style of the program favourably then this test would be a success).
\end{enumerate}
\subsubsection{Usability}
\begin{enumerate}
\item{US-1\\}
Type: Structural, Static, Manual
Initial State: Program installed onto the system and launched or open in a web browser.
Input/Condition: Users given a list of tasks to accomplish.
Output/Result: A majority of tested users shall complete the tasks within two minutes.
How test will be performed: Users will be asked to accomplish the following: - Run and install the program - Identify the relevance of each of the elements of the GUI - Demonstrate an understanding of genetic algorithms to a reasonable extent
\item{US-2\\}
Type: Structural, Static, Manual
Initial State: Program files required to install the program are provided but uninstalled, or web browser not yet directed to the GC web page.
Input: Users asked to install the program or navigate to the web page given the URL but without further assistance.
Output: A majority of tested users shall successfully install the program or navigate to the web page without assistance.
How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to install the program or navigate to the GrateBox web page given the URL.
\item{US-3\\}
Type: Structural, Static, Manual
Initial State: Previous two tests (US-1 and US-2) conducted.
Input: Users asked to rate ease of installation and ease of use.
Output: A majority of tested users shall agree that the program's usability is high.
How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to explore the program, its functions, and its outputs. This sample of users will then be asked to fill out a survey (see section 7.2) asking for their input. Test results shall be determined from those responses (i.e. if a majority of representative users agree that the program's usability is high then this test will be considered a success).
\end{enumerate}
\subsubsection{Performance}
\begin{enumerate}
\item{PF-1\\}
Type: Structural, Dynamic, Automatic
Initial State: Program installed onto the system and launched or open in a web browser.
Input/Condition: Program initiates one generation of genetic cars.
Output/Result: Generation created, displayed, and mutated within 20 seconds.
How test will be performed: A built-in JavaScript timer method will be used with Q-Unit (see the section on unit testing) to record the time each generation takes. If all generations fall below 20 seconds from beginning to end, the test will be considered a success (a minimal sketch of this timing check is given after PF-2).
\item{PF-2\\}
Type: Functional, Static, Manual
Initial State: Program installed onto the system and launched or open in a web browser.
Input: Users asked to rate the speed of the program to the best of their ability.
Output: A majority of tested users shall agree that the speed of the program is favourable.
How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to explore the program, its functions, and its outputs. This sample of users will then be asked to fill out a survey (see section 7.2) asking for their input. Test results shall be determined from those responses (i.e. if a majority of representative users agree that the program's speed is favourable then this test will be considered a success, speed defined in the survey).
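A minimal sketch of the automated timing check described in PF-1 is shown below; \texttt{Simulation.runOneGeneration} is a placeholder for whichever project method drives a full generation, not the actual API.
\begin{verbatim}
// Hypothetical sketch for PF-1: the Simulation module name is a placeholder.
QUnit.test("PF-1: one generation completes within 20 seconds", function (assert) {
  var start = Date.now();           // built-in JavaScript timer
  Simulation.runOneGeneration();    // create, display, and mutate one generation
  var elapsed = Date.now() - start;
  assert.ok(elapsed < 20000, "generation finished in under 20 seconds");
});
\end{verbatim}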
\item{PF-3\\}
Type: Structural, Dynamic, Automatic
Initial State: Program installed onto the system and launched or open in a web browser.
Input/Condition: Program initiates one generation of genetic cars.
Output/Result: All numerical values are accurate to their expected values.
How test will be performed: Unit testing through Q-Unit and native JavaScript accuracy-testing methods will be used to determine the validity of all numerical values and equations given. If all are valid, the test will be considered a success.
\item{PF-4\\}
Type: Functional, Static, Manual
Initial State: Program installed onto the system and launched or open in a web browser.
Input: Users asked to rate the accuracy of the program to the best of their ability.
Output: A majority of tested users shall agree that the accuracy of the program is favourable.
How test will be performed: A test group of representative users (as defined in the development document) will be given two minutes of time to explore the program, its functions, and its outputs. This sample of users will then be asked to fill out a survey (see section 7.2) asking for their input. Test results shall be determined from those responses (i.e. if a majority of representative users agree that the program's accuracy is favourable then this test will be considered a success, accuracy defined in the survey).
\end{enumerate}
\section{Tests for Proof of Concept}
Proof of Concept testing will be focused on verifying and validating the means by which automated testing will be performed. This will include automated testing of the genetic algorithm and the car model, as well as automated testing of graphical components to the extent possible.
\subsection{Genetic algorithm and car model}
\begin{enumerate}
\item{UC-1\\}
Type: Functional, Dynamic, Automated
Initial State: Program has been run but no additional inputs have been made.
Input: Creation of a car. Set random seed.
Output: Complete car object with random values that correspond with the given random seed. A 100 percent match with the estimated car and car values.
How test will be performed: A unit test will be set with a random seed and instructed to create 100 different cars given this seed. The final cars will then be compared to the estimated cars. The test will be considered a success if the generated cars are a 100 percent match to the 1st, 2nd, 3rd, 5th, 10th, and 50th cars estimated given the random seed. Test used to determine Q-Unit limitations on preset mathematical formulas.
\item{UC-2\\}
Type: Functional, Dynamic, Automated
Initial State: Program has created several cars as in test case UC-1 but no additional instructions have been given.
Input: Several pre-made cars with non-random values.
Output: A new generation of cars created by mutating and crossing the generation of cars given.
How test will be performed: The mutation factor will be set to 10 percent. The final generation of cars will be estimated given the mutation factor and the pre-generated cars. The final generation of cars will then be generated using Q-Unit. The test will be considered a success if the generated cars are a 100 percent match to the 1st, 2nd, 3rd, 5th, 10th, and 50th cars estimated. Test used to determine Q-Unit limitations on random generation.
\end{enumerate}
\subsection{Graphics}
\begin{enumerate}
\item{UG-1\\}
Type: Functional, Dynamic, Automated
Initial State: Program open and running with a generation of cars generated as in UC-1.
Input: Run the simulation for the generation of cars. Predetermined graphical representation of the simulation for comparison.
Output: Graphic representation of the generation of cars for comparison to the predetermined graphical representation.
How test will be performed: The given generation of cars will have a predetermined, estimated graphical output for 30 frames. The same generation of cars will then be run through the program's graphics engine, and the 1st, 2nd, 12th, and 30th frames will be compared using JavaScript's built-in image comparison. The test will be considered a success if 95 percent similarity is reached. Test used to determine Q-Unit limitations on a graphical level.
\end{enumerate}
\section{Comparison to Existing Implementation}
There is one test that compares the program to the existing implementation; please refer to test GR-1.1 in Tests for Functional Requirements - Graphics. Non-functional comparison is to be conducted through the user survey.
\section{Unit Testing Plan}
Unit testing will be conducted using the QUnit software outlined in the development document.
\subsection{Unit testing of internal functions}
In order to create unit tests for the internal functions of the program, methods that return values can be tested. This will involve taking the methods and giving them input values. Given what they are supposed to output and what they actually output, a series of unit tests can be created. Unit tests will include tests that contain proper inputs and inputs that generate exceptions. Anything that needs to be imported will already be imported by the individual classes. We will be using coverage metrics to determine how much of our code we have covered. Our goal is to cover as much as possible in order to make sure that we test all functions adequately. Our target coverage percentage is 85 percent.
\subsection{Unit testing of output files}
Our program generates two primary outputs: the technical output generated by the genetic algorithms, and the graphical output displayed to the user. The genetic algorithms can be unit tested as explained above in the "Unit testing of internal functions" section. Graphical output will be harder to unit test; however, Grate is looking into the prospect of pre-generating expected graphical outcomes with given inputs and comparing those to the graphical outputs generated by the GC project. These images can then be compared in a unit test by percentage similarity. Currently the threshold for percentage similarity stands at 95 percent.
\newpage
\subsection{Usability Survey Questions}
Note that many of the given survey questions may not pertain to a particular test case and are present in the survey so that users may give potentially valuable input that the testing team has not thought to request of them. This survey shall be reformatted and placed on Google Docs at a later date.
1. How would you rate the "visual aesthetic" of this program? (i.e. Were the various elements of the user interface understandable, was it visually appealing) - Favourable - No opinion - Unfavourable
2. How would you rate the "style" of this program? (i.e. Was the program professional enough, was it inviting enough, do you feel you can trust the product) - Favourable - No opinion - Unfavourable
3. How would you rate the "usability" of this program? (i.e. Were the various functions obvious at first sight, did the program give the feedback you needed, was it a hassle to install/access online) - Favourable - No opinion - Unfavourable
4. How would you rate the "speed" of the program?
(i.e. Did you have to wait while the program loaded or performed functions, did the program run smoothly throughout, did you experience choppiness) - Favourable - No opinion - Unfavourable
5. How would you rate the "accuracy" of the program? (i.e. Did you feel the program displayed a valid interpretation of genetic algorithms based on your understanding of them, was the program mathematically sound from what you could see) - Favourable - No opinion - Unfavourable
6. How would you rate your overall user experience with the GrateBox application? - Favourable - No opinion - Unfavourable
7. Did you find your experience with the GrateBox application educational? (i.e. do you feel you understand more about genetic algorithms as a result?) -Yes -Maybe -No
8. Would you recommend the GrateBox application to a friend or relative? -Yes -Maybe -No
9. Are there any suggestions or recommendations you would like to make to the Grate development team to help us improve the application? (USER INPUT)
\textcolor{RoyalPurple}{10. Did you feel like you were able to learn how GrateBox works without too much difficulty?} \textcolor{RoyalPurple}{-Yes} \textcolor{RoyalPurple}{-Maybe} \textcolor{RoyalPurple}{-No}
\end{document}
{ "alphanum_fraction": 0.7627460343, "avg_line_length": 33.5391459075, "ext": "tex", "hexsha": "c2f878e7c74ddb97af6bc3df27e02cf1f4a7b6ec", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bfc50810b56804002a895675552529f34c159376", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "KelvinKKLin/GrateBox", "max_forks_repo_path": "Documentation/TestPlan/TestPlan.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "bfc50810b56804002a895675552529f34c159376", "max_issues_repo_issues_event_max_datetime": "2017-01-03T16:46:29.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-03T05:45:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "KelvinKKLin/GrateBox", "max_issues_repo_path": "Documentation/TestPlan/TestPlan.tex", "max_line_length": 86, "max_stars_count": 2, "max_stars_repo_head_hexsha": "bfc50810b56804002a895675552529f34c159376", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "KelvinKKLin/GrateBox", "max_stars_repo_path": "Documentation/TestPlan/TestPlan.tex", "max_stars_repo_stars_event_max_datetime": "2017-02-15T21:41:35.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-07T19:18:51.000Z", "num_tokens": 8496, "size": 37698 }
\section{Acknowledgments}
This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. Sun Myung Park is supported by the SNRSI Postgraduate Scholarship program, a graduate fellowship program from the Singapore Nuclear Research \& Safety Initiative. Prof. Huff is supported by the Nuclear Regulatory Commission Faculty Development Program (award NRC-HQ-84-14-G-0054 Program B), the Blue Waters sustained-petascale computing project supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois, the DOE ARPA-E MEITNER Program (award DE-AR0000983), and the DOE H2@Scale Program (award DE-EE0008832).
{ "alphanum_fraction": 0.8200431034, "avg_line_length": 51.5555555556, "ext": "tex", "hexsha": "11bba012879f644e223abb198326d2f19faaad11", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-02-23T09:45:16.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-23T09:40:36.000Z", "max_forks_repo_head_hexsha": "824f791c70dba9b4e63ded1994c89ef193e76b8d", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "smpark7/2021-park-ans-summer", "max_forks_repo_path": "acknowledgements.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "824f791c70dba9b4e63ded1994c89ef193e76b8d", "max_issues_repo_issues_event_max_datetime": "2021-02-23T10:11:23.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-23T09:47:34.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "smpark7/2021-park-ans-summer", "max_issues_repo_path": "acknowledgements.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "824f791c70dba9b4e63ded1994c89ef193e76b8d", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "smpark7/2021-park-ans-summer", "max_stars_repo_path": "acknowledgements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 225, "size": 928 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{auswertung} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}487}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{ticker} \PY{k}{as} \PY{n+nn}{ticker} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{optimize} \PY{k}{import} \PY{n}{curve\PYZus{}fit} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{stats} \PY{k}{import} \PY{n}{chi2} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{odr} \PY{k}{import} \PY{n}{Data}\PY{p}{,} \PY{n}{RealData}\PY{p}{,} \PY{n}{Model}\PY{p}{,} \PY{n}{ODR} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}488}]:} \PY{n}{rho\PYZus{}f} \PY{o}{=} \PY{l+m+mf}{1.146}\PY{o}{*}\PY{l+m+mf}{1e\PYZhy{}3} \PY{n}{Drho\PYZus{}f} \PY{o}{=} \PY{l+m+mf}{0.0006e\PYZhy{}3} \PY{n}{Drho\PYZus{}k} \PY{o}{=} \PY{l+m+mf}{0.0025}\PY{o}{*}\PY{l+m+mf}{1e\PYZhy{}3} \PY{n}{g} \PY{o}{=} \PY{l+m+mf}{9.81} \PY{n}{R} \PY{o}{=} \PY{l+m+mf}{75e\PYZhy{}3}\PY{o}{/}\PY{l+m+mi}{2} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}489}]:} \PY{k}{def} \PY{n+nf}{get\PYZus{}rho\PYZus{}k}\PY{p}{(}\PY{n}{radius}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{radius} \PY{o}{\PYZlt{}} \PY{l+m+mf}{2e\PYZhy{}3}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{1.3925e\PYZhy{}3} \PY{k}{elif} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{radius} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mf}{7.144e\PYZhy{}3}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{1.3775e\PYZhy{}3} \PY{k}{elif} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{radius} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mf}{8e\PYZhy{}3}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{1.3575e\PYZhy{}3} \PY{k}{else}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{1.3625e\PYZhy{}3} \PY{n}{v\PYZus{}rho\PYZus{}k} \PY{o}{=} 
\PY{n}{np}\PY{o}{.}\PY{n}{vectorize}\PY{p}{(}\PY{n}{get\PYZus{}rho\PYZus{}k}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}490}]:} \PY{n}{diameter} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{1.5}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mf}{7.144}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{l+m+mf}{1e\PYZhy{}3} \PY{n}{r} \PY{o}{=} \PY{n}{diameter}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{Dr} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}2}\PY{o}{*}\PY{n}{r} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}491}]:} \PY{n}{distance} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{l+m+mf}{1e\PYZhy{}2} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}492}]:} \PY{n}{rho\PYZus{}k} \PY{o}{=} \PY{n}{v\PYZus{}rho\PYZus{}k}\PY{p}{(}\PY{n}{r}\PY{p}{)}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{rho\PYZus{}k}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [0.0013925 0.0013775 0.0013775 0.0013775 0.0013775 0.0013775 0.0013775 0.0013575 0.0013625] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}493}]:} \PY{n}{times} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{loadtxt}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{times.txt}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}494}]:} \PY{n}{avg\PYZus{}time} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{average}\PY{p}{(}\PY{n}{times}\PY{p}{,} \PY{n}{axis} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{avg\PYZus{}time}\PY{p}{)} \PY{n}{Davg\PYZus{}time\PYZus{}std} \PY{o}{=} \PY{l+m+mi}{1}\PY{o}{/}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{5}\PY{p}{)}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{times}\PY{p}{,} \PY{n}{axis} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{Davg\PYZus{}time} \PY{o}{=} \PY{n}{Davg\PYZus{}time\PYZus{}std}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Davg\PYZus{}time}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [19.144 19.458 19.504 22.58 22.74 16.374 12.108 10.94 8.774] [0.35912783 0.11549545 0.2527386 0.24646298 0.17022338 0.08042885 0.075387 0.0908185 0.03194996] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}495}]:} \PY{n}{v} \PY{o}{=} \PY{n}{distance}\PY{o}{/}\PY{n}{avg\PYZus{}time}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{v}\PY{p}{)} \PY{n}{Dv} \PY{o}{=} \PY{n}{v}\PY{o}{*}\PY{n}{Davg\PYZus{}time}\PY{o}{/}\PY{n}{avg\PYZus{}time}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Dv}\PY{o}{/}\PY{n}{v}\PY{o}{*}\PY{l+m+mi}{100}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [0.00156707 0.00256964 0.00512715 0.0088574 0.01319261 0.01832173 0.02477701 0.0274223 0.03419193] [1.87592892 0.59356282 1.29582957 1.09151009 0.74856369 0.49119855 0.62262142 0.83015083 0.36414362] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}496}]:} \PY{k}{def} \PY{n+nf}{get\PYZus{}ladenburg}\PY{p}{(}\PY{n}{radius}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{1} \PY{o}{+} 
\PY{l+m+mf}{2.1}\PY{o}{*}\PY{n}{radius}\PY{o}{/}\PY{n}{R} \PY{n}{ladenburg} \PY{o}{=} \PY{n}{get\PYZus{}ladenburg}\PY{p}{(}\PY{n}{r}\PY{p}{)}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{ladenburg}\PY{p}{)} \PY{n}{Dladenburg} \PY{o}{=} \PY{l+m+mf}{2.1}\PY{o}{*}\PY{n}{Dr}\PY{o}{/}\PY{n}{R}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Dladenburg}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [1.042 1.056 1.084 1.112 1.14 1.168 1.200032 1.224 1.252 ] [0.00042 0.00056 0.00084 0.00112 0.0014 0.00168 0.00200032 0.00224 0.00252 ] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}497}]:} \PY{n}{Dr\PYZus{}sq} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{r}\PY{o}{*}\PY{n}{Dr} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}498}]:} \PY{n}{Dy} \PY{o}{=} \PY{n}{v}\PY{o}{/}\PY{p}{(}\PY{n}{rho\PYZus{}k} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{Dv}\PY{o}{/}\PY{n}{v}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Drho\PYZus{}k}\PY{o}{/}\PY{p}{(}\PY{n}{rho\PYZus{}k} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}499}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{r}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{v}\PY{o}{/}\PY{p}{(}\PY{n}{rho\PYZus{}k} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dr\PYZus{}sq}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{Dy}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{markersize} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{elinewidth} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{capsize} \PY{o}{=} \PY{l+m+mi}{2}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{r}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{ladenburg}\PY{o}{*}\PY{n}{v}\PY{o}{/}\PY{p}{(}\PY{n}{rho\PYZus{}k} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dr\PYZus{}sq}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{ladenburg}\PY{o}{*}\PY{n}{Dy}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{markersize} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{elinewidth} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{capsize} \PY{o}{=} \PY{l+m+mi}{2}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}r\PYZca{}2\PYZdl{} [\PYZdl{}}\PY{l+s+si}{\PYZob{}mm\PYZcb{}}\PY{l+s+s2}{\PYZca{}2\PYZdl{}]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+si}{\PYZob{}v\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{rho\PYZus{}k \PYZhy{} }\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{rho\PYZus{}f\PYZcb{}\PYZdl{} [\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{m\PYZca{}4\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{kg }\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{cdot s\PYZcb{}\PYZdl{}]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} 
\PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{n}{left} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylim}\PY{p}{(}\PY{n}{bottom} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/v\PYZus{}for\PYZus{}radii.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n+nb}{format} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_12_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}500}]:} \PY{k}{def} \PY{n+nf}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{radius}\PY{p}{,} \PY{n}{visc}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{1}\PY{o}{/}\PY{n}{get\PYZus{}ladenburg}\PY{p}{(}\PY{n}{radius}\PY{p}{)}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{l+m+mi}{9}\PY{o}{*}\PY{n}{g}\PY{o}{*}\PY{p}{(}\PY{n}{v\PYZus{}rho\PYZus{}k}\PY{p}{(}\PY{n}{radius}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{o}{/}\PY{n}{visc}\PY{o}{*}\PY{n}{radius}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}501}]:} \PY{c+c1}{\PYZsh{} number of radii for which Stoke\PYZsq{}s law seems to hold:} \PY{n}{num\PYZus{}linear} \PY{o}{=} \PY{l+m+mi}{5} \PY{n}{popt}\PY{p}{,} \PY{n}{pcov} \PY{o}{=} \PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{,} \PY{n}{r}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{v}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{Dv}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{p0} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{eta} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Viskosität: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{eta}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Viskosität: 1.9746391918639603e-07 +- 5.484805774227591e-09 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}502}]:} \PY{n}{chi2\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{o}{*}\PY{n}{popt}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{v}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{n}{Dv}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{dof} \PY{o}{=} \PY{n}{num\PYZus{}linear} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1} \PY{n}{chi2\PYZus{}red} \PY{o}{=} \PY{n}{chi2\PYZus{}}\PY{o}{/}\PY{n}{dof} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2 = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2\PYZus{}red = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}red}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] chi2 = 195.12026802382564 chi2\_red = 48.78006700595641 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}503}]:} \PY{k}{def} 
\PY{n+nf}{fit\PYZus{}func\PYZus{}odr}\PY{p}{(}\PY{n}{visc}\PY{p}{,} \PY{n}{radius}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{1}\PY{o}{/}\PY{n}{get\PYZus{}ladenburg}\PY{p}{(}\PY{n}{radius}\PY{p}{)}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{l+m+mi}{9}\PY{o}{*}\PY{n}{g}\PY{o}{*}\PY{p}{(}\PY{n}{v\PYZus{}rho\PYZus{}k}\PY{p}{(}\PY{n}{radius}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{o}{/}\PY{n}{visc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{*}\PY{n}{radius}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}504}]:} \PY{n}{stokes} \PY{o}{=} \PY{n}{Model}\PY{p}{(}\PY{n}{fit\PYZus{}func\PYZus{}odr}\PY{p}{)} \PY{n}{data} \PY{o}{=} \PY{n}{RealData}\PY{p}{(}\PY{n}{r}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{y} \PY{o}{=} \PY{n}{v}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{sx} \PY{o}{=} \PY{n}{Dr}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{,} \PY{n}{sy} \PY{o}{=} \PY{n}{Dv}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{num\PYZus{}linear}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}505}]:} \PY{n}{odr} \PY{o}{=} \PY{n}{ODR}\PY{p}{(}\PY{n}{data}\PY{p}{,} \PY{n}{stokes}\PY{p}{,} \PY{n}{beta0} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{output} \PY{o}{=} \PY{n}{odr}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{p}{)} \PY{n}{output}\PY{o}{.}\PY{n}{pprint}\PY{p}{(}\PY{p}{)} \PY{n}{eta\PYZus{}odr} \PY{o}{=} \PY{n}{output}\PY{o}{.}\PY{n}{beta}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{Deta\PYZus{}odr} \PY{o}{=} \PY{n}{output}\PY{o}{.}\PY{n}{sd\PYZus{}beta}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Beta: [1.98335577e-07] Beta Std Error: [5.21668657e-09] Beta Covariance: [[3.88695176e-18]] Residual Variance: 7.001326601059896 Inverse Condition \#: 1.0000000000000002 Reason(s) for Halting: Sum of squares convergence \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}506}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{r}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{ladenburg}\PY{o}{*}\PY{n}{v}\PY{o}{/}\PY{p}{(}\PY{n}{rho\PYZus{}k} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dr\PYZus{}sq}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{ladenburg}\PY{o}{*}\PY{n}{Dy}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{markersize} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{elinewidth} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{capsize} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{r\PYZus{}cont} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mf}{4.5e\PYZhy{}3}\PY{p}{,} \PY{l+m+mi}{100}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{r\PYZus{}cont}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{l+m+mf}{1e6}\PY{p}{,} \PY{n}{get\PYZus{}ladenburg}\PY{p}{(}\PY{n}{r\PYZus{}cont}\PY{p}{)}\PY{o}{*}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r\PYZus{}cont}\PY{p}{,} \PY{n}{eta\PYZus{}odr}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{v\PYZus{}rho\PYZus{}k}\PY{p}{(}\PY{n}{r\PYZus{}cont}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{p}{,} \PY{n}{linewidth} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{)} 
\PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}r\PYZca{}2\PYZdl{} [\PYZdl{}}\PY{l+s+si}{\PYZob{}mm\PYZcb{}}\PY{l+s+s2}{\PYZca{}2\PYZdl{}]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+si}{\PYZob{}v\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{rho\PYZus{}k \PYZhy{} }\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{rho\PYZus{}f\PYZcb{}\PYZdl{} [\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{m\PYZca{}4\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{kg }\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{cdot s\PYZcb{}\PYZdl{}]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{n}{left} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylim}\PY{p}{(}\PY{n}{bottom} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/v\PYZus{}for\PYZus{}radii\PYZus{}fit.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n+nb}{format} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_19_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}507}]:} \PY{n}{Re} \PY{o}{=} \PY{n}{rho\PYZus{}f}\PY{o}{*}\PY{n}{v}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{r}\PY{o}{/}\PY{n}{eta\PYZus{}odr}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Re}\PY{p}{)} \PY{n}{DRe} \PY{o}{=} \PY{n}{Re}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{Dr}\PY{o}{/}\PY{n}{r}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Dv}\PY{o}{/}\PY{n}{v}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Deta\PYZus{}odr}\PY{o}{/}\PY{n}{eta\PYZus{}odr}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{;} \PY{n+nb}{print}\PY{p}{(}\PY{n}{DRe}\PY{o}{/}\PY{n}{Re}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [0.013582 0.02969517 0.0888754 0.20471518 0.38114023 0.63518716 1.02276129 1.2675874 1.77807521] [0.03381898 0.02875837 0.0309795 0.03018198 0.02911781 0.02856466 0.02881975 0.02933815 0.0283738 ] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}508}]:} \PY{n}{v\PYZus{}lam} \PY{o}{=} \PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r}\PY{p}{,} \PY{n}{eta\PYZus{}odr}\PY{p}{)} \PY{n}{Dv\PYZus{}lam} \PY{o}{=} \PY{n}{v\PYZus{}lam}\PY{o}{*}\PY{n}{Deta\PYZus{}odr}\PY{o}{/}\PY{n}{eta\PYZus{}odr} \PY{n}{ratio} \PY{o}{=} \PY{n}{v}\PY{o}{/}\PY{n}{v\PYZus{}lam} \PY{n}{Dratio} \PY{o}{=} \PY{n}{ratio}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{Dv}\PY{o}{/}\PY{n}{v}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Dv\PYZus{}lam}\PY{o}{/}\PY{n}{v\PYZus{}lam}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{fig}\PY{p}{,} \PY{n}{ax} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{ratio}\PY{p}{,} \PY{n}{Re}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dratio}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{DRe}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{markersize} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{elinewidth} \PY{o}{=} 
\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{capsize} \PY{o}{=} \PY{l+m+mi}{2}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+si}{\PYZob{}v\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{v\PYZus{}}\PY{l+s+si}{\PYZob{}lam\PYZcb{}}\PY{l+s+s2}{\PYZcb{}\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Re}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{fig}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/reynolds.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n+nb}{format} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_21_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}509}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Kritische Reynoldszahl liegt zwischen }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ und }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Re\PYZus{}krit = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{p}{(}\PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{+} \PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{p}{(}\PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{Re}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Kritische Reynoldszahl liegt zwischen 0.029695168527127985 und 0.08887539825689961 Re\_krit = 0.0592852833920138 +- 0.02959011486488581 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}510}]:} \PY{n}{table\PYZus{}data} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{(}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r}\PY{p}{,} \PY{n}{eta\PYZus{}odr}\PY{p}{)}\PY{o}{*}\PY{l+m+mf}{1e2}\PY{p}{,} \PY{n}{v}\PY{o}{*}\PY{l+m+mf}{1e2}\PY{p}{,} \PY{n}{v}\PY{o}{/}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r}\PY{p}{,} \PY{n}{eta\PYZus{}odr}\PY{p}{)}\PY{p}{,} \PY{n}{Re}\PY{p}{)}\PY{p}{)} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{table\PYZus{}data}\PY{p}{,} \PY{n}{index} \PY{o}{=} \PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}v\PYZus{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+se}{\PYZbs{}t}\PY{l+s+s2}{ext}\PY{l+s+si}{\PYZob{}lam\PYZcb{}}\PY{l+s+s2}{\PYZcb{}\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}v\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{frac}\PY{l+s+si}{\PYZob{}v\PYZcb{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{v\PYZus{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{text}\PY{l+s+si}{\PYZob{}lam\PYZcb{}}\PY{l+s+s2}{\PYZcb{}\PYZcb{}\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdl{}Re\PYZdl{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}510}]:} 0 1 2 3 4 \textbackslash{} \$v\_\{\textbackslash{}text\{lam\}\}\$ 0.146261 0.240959 0.528153 
0.915297 1.395025 \$v\$ 0.156707 0.256964 0.512715 0.885740 1.319261 \$\textbackslash{}frac\{v\}\{v\_\{\textbackslash{}text\{lam\}\}\}\$ 1.071423 1.066421 0.970770 0.967707 0.945690 \$Re\$ 0.013582 0.029695 0.088875 0.204715 0.381140 5 6 7 8 \$v\_\{\textbackslash{}text\{lam\}\}\$ 1.960679 2.705434 3.038819 3.848881 \$v\$ 1.832173 2.477701 2.742230 3.419193 \$\textbackslash{}frac\{v\}\{v\_\{\textbackslash{}text\{lam\}\}\}\$ 0.934458 0.915824 0.902400 0.888360 \$Re\$ 0.635187 1.022761 1.267587 1.778075 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}511}]:} \PY{n}{table\PYZus{}err} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{(}\PY{n}{fit\PYZus{}func\PYZus{}v}\PY{p}{(}\PY{n}{r}\PY{p}{,} \PY{n}{eta\PYZus{}odr}\PY{p}{)}\PY{o}{*}\PY{n}{Deta\PYZus{}odr}\PY{o}{/}\PY{n}{eta\PYZus{}odr}\PY{o}{*}\PY{l+m+mf}{1e2}\PY{p}{,} \PY{n}{Dv}\PY{o}{*}\PY{l+m+mf}{1e2}\PY{p}{,} \PY{n}{DRe}\PY{p}{)}\PY{p}{)} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{table\PYZus{}err}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}511}]:} 0 1 2 3 4 5 6 \textbackslash{} 0 0.003847 0.006338 0.013892 0.024074 0.036692 0.051570 0.071159 1 0.002940 0.001525 0.006644 0.009668 0.009876 0.009000 0.015427 2 0.000459 0.000854 0.002753 0.006179 0.011098 0.018144 0.029476 7 8 0 0.079928 0.101235 1 0.022765 0.012451 2 0.037189 0.050451 \end{Verbatim} \hypertarget{hagen-poiseuille}{% \subsection{Hagen-Poiseuille}\label{hagen-poiseuille}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}512}]:} \PY{n}{h\PYZus{}A} \PY{o}{=} \PY{l+m+mf}{546e\PYZhy{}3} \PY{n}{h\PYZus{}E} \PY{o}{=} \PY{l+m+mf}{540e\PYZhy{}3} \PY{n}{h} \PY{o}{=} \PY{p}{(}\PY{n}{h\PYZus{}A} \PY{o}{+} \PY{n}{h\PYZus{}E}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{Dh} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}3}\PY{o}{/}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{;} \PY{n}{Dh}\PY{o}{/}\PY{n}{h} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}512}]:} 0.0013022224331243968 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}513}]:} \PY{n}{rho\PYZus{}f} \PY{o}{=} \PY{l+m+mf}{1.1454e\PYZhy{}3} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}514}]:} \PY{c+c1}{\PYZsh{} pressure difference:} \PY{n}{p} \PY{o}{=} \PY{n}{h}\PY{o}{*}\PY{n}{rho\PYZus{}f}\PY{o}{*}\PY{n}{g} \PY{n}{Dp} \PY{o}{=} \PY{n}{p}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{Dh}\PY{o}{/}\PY{n}{h}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Drho\PYZus{}f}\PY{o}{/}\PY{n}{rho\PYZus{}f}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{;} \PY{n}{Dp}\PY{o}{/}\PY{n}{p} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}514}]:} 0.0014036330772391923 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}515}]:} \PY{c+c1}{\PYZsh{} capillary:} \PY{n}{L} \PY{o}{=} \PY{l+m+mf}{100e\PYZhy{}3} \PY{n}{DL} \PY{o}{=} \PY{l+m+mf}{0.5e\PYZhy{}3} \PY{n}{R} \PY{o}{=} \PY{l+m+mf}{1.5e\PYZhy{}3}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{DR} \PY{o}{=} \PY{l+m+mf}{0.01e\PYZhy{}3}\PY{o}{/}\PY{l+m+mi}{2} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}516}]:} \PY{n}{vol} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{10}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{,} 
\PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{l+m+mf}{1e\PYZhy{}6} \PY{n}{time} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mf}{72.58}\PY{p}{,} \PY{l+m+mf}{196.44}\PY{p}{,} \PY{l+m+mf}{336.88}\PY{p}{,} \PY{l+m+mf}{441.34}\PY{p}{,} \PY{l+m+mf}{566.31}\PY{p}{,} \PY{l+m+mf}{692.99}\PY{p}{]}\PY{p}{)} \PY{n}{Dvol} \PY{o}{=} \PY{l+m+mf}{0.5e\PYZhy{}6} \PY{n}{Dtime} \PY{o}{=} \PY{l+m+mf}{0.3} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}517}]:} \PY{c+c1}{\PYZsh{} check for errors:} \PY{n}{flow} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{concatenate}\PY{p}{(}\PY{p}{(}\PY{n}{vol}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{concatenate}\PY{p}{(}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{vol}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{concatenate}\PY{p}{(}\PY{p}{(}\PY{n}{time}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{concatenate}\PY{p}{(}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{time}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{flow}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [ nan 6.88895012e-08 4.03681576e-08 4.27228710e-08 3.82921693e-08 4.00096023e-08 3.94695295e-08 4.32906680e-08] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] /home/erik/.miniconda3/envs/science/lib/python3.6/site-packages/ipykernel/\_\_main\_\_.py:2: RuntimeWarning: invalid value encountered in true\_divide from ipykernel import kernelapp as app \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}518}]:} \PY{n}{total\PYZus{}time} \PY{o}{=} \PY{n}{time}\PY{p}{[}\PY{l+m+mi}{6}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{time}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{total\PYZus{}vol} \PY{o}{=} \PY{n}{vol}\PY{p}{[}\PY{l+m+mi}{6}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{vol}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{avg\PYZus{}flow} \PY{o}{=} \PY{n}{total\PYZus{}vol}\PY{o}{/}\PY{n}{total\PYZus{}time} \PY{n+nb}{print}\PY{p}{(}\PY{n}{avg\PYZus{}flow}\PY{p}{)} \PY{n}{Dtotal\PYZus{}time} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{Dtime} \PY{n}{Dtotal\PYZus{}vol} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{Dvol} \PY{n}{Davg\PYZus{}flow} \PY{o}{=} \PY{n}{avg\PYZus{}flow}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{Dtotal\PYZus{}time}\PY{o}{/}\PY{n}{total\PYZus{}time}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Dtotal\PYZus{}vol}\PY{o}{/}\PY{n}{total\PYZus{}vol}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Davg\PYZus{}flow}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 4.029593333440789e-08 1.1400741802767856e-09 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}519}]:} \PY{n}{eta\PYZus{}hp} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{p}\PY{o}{*}\PY{n}{R}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{4} \PY{o}{/} \PY{p}{(}\PY{l+m+mi}{8}\PY{o}{*}\PY{n}{avg\PYZus{}flow}\PY{o}{*}\PY{n}{L}\PY{p}{)} \PY{n}{Deta\PYZus{}hp} \PY{o}{=} \PY{n}{eta\PYZus{}hp} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{4}\PY{o}{*}\PY{n}{DR}\PY{o}{/}\PY{n}{R}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} 
\PY{p}{(}\PY{n}{Davg\PYZus{}flow}\PY{o}{/}\PY{n}{avg\PYZus{}flow}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{DL}\PY{o}{/}\PY{n}{L}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Viscosity: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{eta\PYZus{}hp}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{Deta\PYZus{}hp}\PY{p}{)}
\end{Verbatim}

\begin{Verbatim}[commandchars=\\\{\}]
Viscosity: 1.8813505974491522e-07 +- 7.3747473711485494e-09
\end{Verbatim}

\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}520}]:} \PY{c+c1}{\PYZsh{} Reynolds number:}
\PY{n}{Re\PYZus{}hp} \PY{o}{=} \PY{n}{rho\PYZus{}f}\PY{o}{*}\PY{n}{avg\PYZus{}flow}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{R} \PY{o}{/} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{R}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{eta\PYZus{}hp}\PY{p}{)}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{Re\PYZus{}hp}\PY{p}{)}
\end{Verbatim}

\begin{Verbatim}[commandchars=\\\{\}]
0.20824161442431943
\end{Verbatim}

\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}521}]:} \PY{c+c1}{\PYZsh{} deviation:}
\PY{n}{combined\PYZus{}error} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{Deta\PYZus{}odr}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{n}{Deta\PYZus{}hp}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{sigma\PYZus{}val} \PY{o}{=} \PY{n+nb}{abs}\PY{p}{(}\PY{n}{eta\PYZus{}hp} \PY{o}{\PYZhy{}} \PY{n}{eta\PYZus{}odr}\PY{p}{)} \PY{o}{/} \PY{n}{combined\PYZus{}error}
\PY{n+nb}{print}\PY{p}{(}\PY{n}{sigma\PYZus{}val}\PY{p}{)}
\end{Verbatim}

\begin{Verbatim}[commandchars=\\\{\}]
1.1292112880406595
\end{Verbatim}

% Add a bibliography block to the postdoc
\end{document}
{ "alphanum_fraction": 0.5556994557, "avg_line_length": 69.1146853147, "ext": "tex", "hexsha": "90f1a6efd589180879ffb620862f9b730faeb9d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JanJakob1/PAP2", "max_forks_repo_path": "versuch_212/notebook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JanJakob1/PAP2", "max_issues_repo_path": "versuch_212/notebook.tex", "max_line_length": 824, "max_stars_count": null, "max_stars_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JanJakob1/PAP2", "max_stars_repo_path": "versuch_212/notebook.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 23052, "size": 49417 }
\section{OLS}
Given two random variables $X$ and $Y$, how can we predict the values of $Y$ given $X$? Let us consider $(X_1, Y_1), \ldots, (X_n, Y_n) \sim_{iid} \mathbb{P}$ where $\mathbb{P}$ is an unknown joint distribution. $\mathbb{P}$ can be described entirely by:
\begin{align*}
g(x) = \int f(x, y)dy\\
h(y|x) = \frac{f(x, y)}{g(x)}
\end{align*}
where $f$ is the joint PDF, $g$ the marginal density of $X$ and $h$ the conditional density of $Y$ given $X$. What we are interested in is $h(y|x)$.
\textbf{Regression function:} For a partial description, we can consider instead the conditional expectation of $Y$ given $X=x$:
\begin{align*}
x \mapsto \nu(x) = \mathbb{E}[Y | X=x] = \int yh(y|x)dy
\end{align*}
We can also consider other descriptions of the conditional distribution, like the median, quantiles or the variance.\\
\textbf{Linear regression:} trying to fit an arbitrary function to $\mathbb{E}[Y | X=x]$ is a nonparametric problem; therefore, we restrict the problem to the tractable class of linear functions:
\begin{align*}
x \mapsto a + bx
\end{align*}
\textbf{Theoretical linear regression:} let $X, Y$ be two random variables with finite second moments such that $\mathbb{V}[X] > 0$. The theoretical linear regression of $Y$ on $X$ is the line $a^{*} + b^{*}x$ where
\begin{align*}
(a^{*}, b^{*}) = argmin_{(a, b) \in \mathbb{R}^2}\mathbb{E}\left[(Y - a - bX)^2\right]
\end{align*}
which gives:
\begin{align*}
b^{*} = \frac{Cov(X, Y)}{\mathbb{V}[X]}, \quad a^{*} = \mathbb{E}[Y] - b^{*} \mathbb{E}[X]
\end{align*}
\textbf{Noise:} we model the noise of $Y$ around the regression line by a random variable $\varepsilon = Y - a^{*} - b^{*} X$, such that:
\begin{align*}
\mathbb{E}[\varepsilon] = 0, \quad Cov(X, \varepsilon) = 0
\end{align*}
We have to estimate $a^{*}$ and $b^{*}$ from the data. We have $n$ random pairs $(X_1, Y_1), \ldots, (X_n, Y_n) \sim_{iid} (X, Y)$ such that:
\begin{align*}
Y_i = a^{*} + b^{*} X_i + \varepsilon_i
\end{align*}
The \textbf{Least Squares Estimator (LSE)} of $(a^{*}, b^{*})$ is the minimizer of the sum of squared errors:
\begin{align*}
(\hat{a}_n, \hat{b}_n) = argmin_{(a, b) \in \mathbb{R}^2}\sum_{i=1}^n(Y_i - a - bX_i)^2
\end{align*}
The estimators are given by:
\begin{align*}
\hat{b}_n = \frac{\overline{XY} - \bar{X}\bar{Y}}{\overline{X^2} - \bar{X}^2}, \quad \hat{a}_n = \bar{Y} - \hat{b}_n \bar{X}
\end{align*}
The \textbf{Multivariate Regression} model is given by:
\begin{align*}
Y_i = \sum_{j=1}^pX_i^{(j)}\beta_j^{*} + \varepsilon_i= \underbrace{X_i^\top}_{1 \times p}\underbrace{\beta^{*}}_{p \times 1} + \varepsilon_i
\end{align*}
We can assume that $X_i^{(1)} = 1$ for all $i$, so that the model contains an intercept.
\begin{itemize}
\item If $\beta^{*} = (a^{*}, {b^{*}}^\top)^\top$, then $\beta_1^{*} = a^{*}$ is the intercept.
\item The $\varepsilon_i$ are the noise terms, satisfying $Cov(X_i, \varepsilon_i) = 0$.
\end{itemize}
The \textbf{Multivariate Least Squares Estimator (LSE)} of $\beta^{*}$ is the minimizer of the sum of squared errors:
\begin{align*}
\hat{\beta} = argmin_{\beta \in \mathbb{R}^p}\sum_{i=1}^n(Y_i - X_i^\top\beta)^2
\end{align*}
\textbf{Matrix form:} we can rewrite these expressions. Let $Y = (Y_1, \ldots, Y_n)^\top \in \mathbb{R}^n$, and $\epsilon = (\varepsilon_1, \ldots, \varepsilon_n)^\top$. Let
\begin{align*}
X = \begin{pmatrix} X_1^\top \\ \vdots \\ X_n^\top \end{pmatrix} \in \mathbb{R}^{n \times p}
\end{align*}
$X$ is called the \textbf{design matrix}.
The regression is then given by:
\begin{align*}
Y = X\beta^{*} + \epsilon
\end{align*}
and the LSE is given by:
\begin{align*}
\hat{\beta} = argmin_{\beta \in \mathbb{R}^p} \|Y - X\beta\|^2_2
\end{align*}
Let us suppose $n \geq p$ and $rank(X) = p$. If we write:
\begin{align*}
F(\beta) = \|Y - X\beta\|^2_2 = (Y - X\beta)^\top(Y - X\beta)
\end{align*}
then:
\begin{align*}
\nabla F(\beta) = -2 X^\top(Y - X\beta)
\end{align*}
\textbf{Least squares estimator:} setting $\nabla F(\beta) = 0$ gives us the expression of $\hat{\beta}$:
\begin{align*}
\hat{\beta} = (X^\top X)^{-1}X^\top Y
\end{align*}
\textbf{Geometric interpretation:} $X\hat{\beta}$ is the orthogonal projection of $Y$ onto the subspace spanned by the columns of $X$:
\begin{align*}
X\hat{\beta} = PY
\end{align*}
where $P = X(X^\top X)^{-1}X^\top$ is the projection matrix.
\textbf{Statistical inference:} let us suppose that:
\begin{itemize}
\item The design matrix $X$ is deterministic and $rank(X) = p$.
\item The model is \textbf{homoscedastic}: $\varepsilon_1, \ldots, \varepsilon_n$ are i.i.d.\ with common variance $\sigma^2$.
\item The noise is Gaussian: $\epsilon \sim N_n(0, \sigma^2I_n)$.
\end{itemize}
We therefore have:
\begin{align*}
Y \sim N_n(X\beta^{*}, \sigma^2 I_n)
\end{align*}
Properties of the LSE:
\begin{align*}
\hat{\beta} \sim N_p(\beta^{*}, \sigma^2(X^\top X)^{-1})
\end{align*}
The quadratic risk of $\hat{\beta}$ is given by:
\begin{align*}
\mathbb{E}\left[\|\hat{\beta} - \beta^{*}\|^2_2\right] = \sigma^2 Tr \left((X^\top X)^{-1}\right)
\end{align*}
The prediction error is given by:
\begin{align*}
\mathbb{E}\left[\|Y - X\hat{\beta}\|^2_2\right] = \sigma^2(n - p)
\end{align*}
The unbiased estimator of $\sigma^2$ is:
\begin{align*}
\hat{\sigma^2} = \frac{1}{n-p}\|Y - X\hat{\beta}\|^2_2 = \frac{1}{n-p}\sum_{i=1}^n\hat{\varepsilon}_i^2
\end{align*}
By \textbf{Cochran's Theorem}:
\begin{align*}
(n-p)\frac{\hat{\sigma^2}}{\sigma^2} \sim \chi^2_{n-p}, \quad \hat\beta \perp \hat{\sigma^2}
\end{align*}
\textbf{Significance test:} let us test $H_0: \beta_j = 0$ against $H_1: \beta_j \neq 0$. Let us define
\begin{align*}
\gamma_j = \left((X^\top X)^{-1}\right)_{jj} > 0
\end{align*}
then:
\begin{align*}
\frac{\hat{\beta}_j- \beta_j}{\sqrt{\hat{\sigma^2}\gamma_j}} \sim t_{n-p}
\end{align*}
We can define the test statistic for our test:
\begin{align*}
T_n^{(j)} = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma^2}\gamma_j}}
\end{align*}
The test with non-asymptotic level $\alpha$ is given by:
\begin{align*}
\psi_\alpha^{(j)} = \textbf{1}\{|T_n^{(j)}| > q_{\alpha/2}(t_{n-p})\}
\end{align*}
\textbf{Bonferroni's test:} if we want to perform several tests simultaneously while controlling the overall level $\alpha$, we cannot use the level $\alpha$ for each individual test; each individual test must be carried out at a stricter level. Let $S \subseteq \{1, \ldots, p\}$ and consider
\begin{align*}
H_0: \forall j \in S, \; \beta_j = 0, \quad H_1: \exists j \in S, \; \beta_j \neq 0
\end{align*}
\textit{Bonferroni's test} with significance level $\alpha$ is given by:
\begin{align*}
\psi_\alpha^{(S)} = \max_{j \in S}\psi_{\alpha/K}^{(j)}
\end{align*}
where $K = |S|$. The rejection region is therefore the union of the individual rejection regions:
\begin{align*}
R_\alpha^{(S)} = \bigcup_{j \in S}R_{\alpha/K}^{(j)}
\end{align*}
This test has non-asymptotic level at most $\alpha$:
\begin{align*}
\mathbb{P}_{H_0}\left[R_\alpha^{(S)}\right] \leq \sum_{j\in S}\mathbb{P}_{H_0}\left[R_{\alpha/K}^{(j)}\right] = \alpha
\end{align*}
This test can also be used for implicit hypotheses (for example, $H_0: \beta_1 \geq \beta_2$).
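\medskip
\noindent \textbf{Numerical illustration (editor's sketch):} to make the formulas above concrete, the following minimal Python sketch computes the LSE, the unbiased variance estimate and the per-coordinate $t$-statistics on simulated data. It is not part of the original notes; NumPy and SciPy are assumed to be available, and all variable names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated design with an intercept column (n = 200, p = 3).
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, 0.0])
Y = X @ beta_true + rng.normal(scale=1.5, size=n)

# LSE: beta_hat = (X'X)^{-1} X'Y.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y

# Unbiased estimate of sigma^2: ||Y - X beta_hat||^2 / (n - p).
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)

# t-statistics T^(j) and two-sided p-values from the t_{n-p} distribution.
gamma = np.diag(XtX_inv)
t_stats = beta_hat / np.sqrt(sigma2_hat * gamma)
p_values = 2 * stats.t.sf(np.abs(t_stats), df=n - p)

# Bonferroni: reject H_0 for coordinate j if its p-value is below alpha / K.
reject = p_values < 0.05 / p
print(beta_hat, sigma2_hat, t_stats, p_values, reject)
\end{verbatim}
The third coefficient is zero in the simulated truth, so its test should typically fail to reject, while the intercept and the second coefficient should be clearly significant.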
{ "alphanum_fraction": 0.6324838809, "avg_line_length": 34.4656084656, "ext": "tex", "hexsha": "7324546f34157800deeb260664d29271b2fc89ba", "lang": "TeX", "max_forks_count": 16, "max_forks_repo_forks_event_max_datetime": "2022-03-20T16:31:34.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-11T14:20:15.000Z", "max_forks_repo_head_hexsha": "9ffbd54a0489edc2214e52bd65a65d4c92793971", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kpsunkara/MITx_capstone_2", "max_forks_repo_path": "content/OLS.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "9ffbd54a0489edc2214e52bd65a65d4c92793971", "max_issues_repo_issues_event_max_datetime": "2021-07-06T08:24:47.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-07T20:24:38.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kpsunkara/MITx_capstone_2", "max_issues_repo_path": "content/OLS.tex", "max_line_length": 266, "max_stars_count": 32, "max_stars_repo_head_hexsha": "9ffbd54a0489edc2214e52bd65a65d4c92793971", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kpsunkara/MITx_capstone_2", "max_stars_repo_path": "content/OLS.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-12T10:26:41.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-24T02:24:22.000Z", "num_tokens": 2498, "size": 6514 }
\documentclass[11pt,english,letterpaper,oneside]{article}
\usepackage{amsmath,amssymb,amsfonts,amsthm,enumerate,enumitem}
\usepackage[top=3cm,bottom=3cm,left=2.5cm,right=2.5cm]{geometry}
\usepackage{graphicx,microtype}
\usepackage{textcomp}
\usepackage[T1]{fontenc}
\DisableLigatures[f]{encoding=T1}
\usepackage[skip=2pt]{caption}
\usepackage{lineno}
\usepackage{hyperref}
\linenumbers
\usepackage{setspace,array,float}
\usepackage{bm,upgreek}
\usepackage[compact]{titlesec}
\usepackage[none]{hyphenat}
\setlength{\parskip}{0cm}
\usepackage{natbib}
\setcitestyle{citesep={;},aysep={}}
\usepackage{babel}
\usepackage{datetime}
\pagestyle{plain}
\frenchspacing
%\doublespacing
\usepackage[table]{xcolor}
\setcounter{secnumdepth}{0}
\makeatletter
\g@addto@macro\normalsize{%
\setlength\belowdisplayskip{15pt}
\setlength\belowdisplayshortskip{10pt}
}
\makeatother
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\newcommand{\tmat}{$\bm{T}$}
\newcommand{\rmat}{$\bm{R}$}
\newcommand{\etal}{\textit{et al}.}
\newdateformat{mydate}{\THEDAY{} \monthname[\THEMONTH], \THEYEAR}
\hfill Version dated: \mydate\today
\vspace{0.25in}
\begin{center}
{\LARGE \bfseries Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids}
\vspace{0.45in}
Paul D. Blischak$^{1,*}$, Laura S. Kubatko$^{1,2}$ and Andrea D. Wolfe$^1$
\vspace{0.45in}
\textit{$^1$Department of Evolution, Ecology and Organismal Biology, Ohio State University,}
\textit{318 W. 12th Avenue, Columbus, OH 43210, USA.}
\bigskip
\bigskip
\textit{$^2$Department of Statistics, Ohio State University,}
\textit{1958 Neil Avenue, Columbus, OH 43210, USA.}
\end{center}
\vspace{0.45in}
\noindent $^*$\textbf{Corresponding author}: Paul Blischak, Ohio State University, Dept. of Evolution, Ecology and Organismal Biology, 318 W. 12th Avenue, Columbus, OH 43210. E-mail: [email protected].
\vspace{0.45in}
\noindent \textbf{Running title}: Genotype uncertainty in autopolyploids
\vspace{.45in}
%%%%%%%%%%%%%%%%%%%%
\section{Abstract} %%
%%%%%%%%%%%%%%%%%%%%
Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, high throughput sequencing has seen little application in the study of populations of polyploids.
This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty--ADU), which complicates the calculation of important quantities such as allele frequencies.
Here we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high throughput sequencing data in the form of read counts.
We bridge the gap from data collection (using restriction enzyme based techniques [e.g., GBS, RADseq]) to allele frequency estimation in a unified inferential framework using a hierarchical Bayesian model to sum over genotype uncertainty.
Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data.
We also provide an implementation of our model in the R package \textsc{polyfreqs} and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy.
Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage.
The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity.
\vspace{0.25in}
\noindent (\textbf{Keywords}: allelic dosage uncertainty, genotyping by sequencing, hierarchical Bayesian modeling, polyploidy, population genomics, RADseq)
\vspace{0.25in}
%%%%%%%%%%%%%%%%%%%
\section{Introduction} %%
%%%%%%%%%%%%%%%%%%%
Biologists have long been fascinated by the occurrence of whole genome duplication (WGD) in natural populations and have recognized its role in the generation of biodiversity \citep{ClausKeckHies1940,StebbinsVariationEvolution,GrantPlantSpeciation,otto2000polyploidy}.
Though WGD is thought to have occurred at some point in nearly every major group of eukaryotes, it is a particularly common phenomenon in plants and is regarded by many to be an important factor in plant diversification \citep{wood2009polyploid,soltisd2009diversification,scarpino2014polyploid}.
The role of polyploidy in plant evolution was originally considered by some to be a ``dead-end'' \citep{StebbinsVariationEvolution,wagner1970noise,soltisd2014stebbins} but, since its first discovery in the early twentieth century, polyploidy has been continually studied in nearly all areas of botany \citep{winge1917polyploidy,Winkler1916polyploidy,ClausKeckHies1945polyploidy,GrantPlantSpeciation,StebbinsVariationEvolution,soltisD2003polyploid,soltisd2010polyploidUnknowns,soltai2009roleOfHybridization,ramsey2014polEcoProcRoySoc}.
Though fewer examples of WGD are currently known for animal systems, groups such as amphibians, fish, and reptiles all exhibit polyploidy \citep{allendorf1984tetraploidFish,gregory2005polyploidyAnimals}.
Ancient genome duplications are also thought to have played an important role in the evolution of both plants and animals, occurring in the lineages preceding the seed plants, angiosperms and vertebrates \citep{ohno1970geneDuplication,otto2000polyploidy,furlong2001animalOctoploid,jiao2011ancientWGD}.
These ancient WGD events during the early history of seed plants and angiosperms have been followed by several more WGDs in all major plant groups \citep{cui2006genomeDuplication,scarpino2014polyploid,canon2014polyploidyLegumes}.
Recent experimental evidence has also demonstrated increased survivorship and adaptability to foreign environments of polyploid taxa when compared with their lower ploidy relatives \citep{ramsey2011polyploidEcology,Selmecki2015yeastAdaptation}.
\medskip
Polyploids are generally divided into two types based on how they are formed: auto- and allopolyploids.
Autopolyploids form when a WGD event occurs within a single evolutionary lineage and typically have polysomic inheritance.
Allopolyploids are formed by hybridization between two separately evolving lineages followed by WGD and are thought to have mostly disomic inheritance.
Multivalent chromosome pairing during meiosis can occur in allopolyploids, however, resulting in mixed inheritance patterns across loci in the genome [segmental allopolyploids] \citep{StebbinsVariationEvolution}.
Autopolyploids can also undergo double reduction, a product of multivalent chromosome pairing wherein segments from sister chromatids move together during meiosis---resulting in allelic inheritance that breaks away from a strict pattern of polysomy \citep{haldane1930autopolyploids}.
Autopolyploidy was also thought to be far less common than allopolyploidy, but recent studies have concluded that autopolyploidy occurs much more frequently than originally proposed \citep{soltis2007autopolyploidy,parisod2010autopolyploidy}.
\medskip
The theoretical treatment of population genetic models in polyploids has its origins in the Modern Synthesis with Fisher, Haldane and Wright each contributing to the development of some of the earliest mathematical models for understanding the genetic patterns of inheritance in polyploids \citep{haldane1930autopolyploids,wright1938polyploid,fisher1943doublereduction}.
Early empirical work on polyploids that influenced Fisher, Haldane and Wright includes studies on \textit{Lythrum salicaria} by N. Barlow (\citeyear{barlow1913heterostylism}, \citeyear{barlow1923trimorphic}), \textit{Dahlia} by W. J. C. Lawrence (\citeyear{lawrence1929dahlia}) and \textit{Primula} by H. J. Muller (\citeyear{muller1914primula}).
The foundation laid down by these early papers has led to the continuing development of population genetic models for polyploids, including models for understanding the rate of loss of genetic diversity and extensions of the coalescent in autotetraploids, as well as modifications of the multispecies coalescent for the inference of species networks containing allotetraploids \citep{moody1993autopolyploids,arnold2012autotetraploidCoal,jones2013allopolyploid}.
Much of this progress was described in a review by \cite{dufresne2014polyPopGen}, who outlined the current state of population genetics in polyploids regarding both molecular techniques and statistical models.
Not surprisingly, one of the most promising developments for the future of population genetics in polyploids is the advancement of sequencing technologies.
A particularly common way of gathering large data sets for genome scale inferences is the use of restriction enzyme based techniques (e.g., RADseq, ddRAD, GBS, etc.), which we will refer to generally as RADseq \citep{miller2007gbs,baird2008radTags,peterson2012ddrad,puritz2014demystifyingRAD}.
However, despite its popularity for population genetic inferences at the diploid level, there are many fewer examples of RADseq experiments conducted on polyploid taxa \citep[but see][]{ogden2013sturgeonRADseq,wang2013birchRADseq,logan-young2015polyploidSNP}.
\medskip
Among the primary reasons for the dearth in applying RADseq to polyploids is the issue of allelic dosage uncertainty (ADU), or the inability to fully determine the genotype of a polyploid organism when it is partially heterozygous at a given locus.
This is the same problem that has been encountered by other codominant markers such as microsatellites, which have been commonly used for population genetic analyses in polyploids.
One way of dealing with allelic dosage that has been used for multi-allelic microsatellite markers has been to code alleles as either present or absent based on electropherogram readings (allelic phenotypes) and to analyze the resulting dominant data using a program such as \textsc{polysat} \citep{clark2007polysat,dufresne2014polyPopGen}.
\cite{deSilva2005alleleFreqs} developed a method for inferring allele frequencies using observed allelic phenotype data and used an expectation-maximization algorithm to deal with the incomplete genotype data resulting from ADU.
Attempts to directly infer the genotype of polyploid microsatellite loci have also been successfully completed in some cases by using the relative electropherogram peak heights of the alleles in the genotypes \citep{esselink2004polyploidSSR}. The estimation problem would be similar for biallelic SNP data collected using RADseq, where a partially heterozygous polyploid will have high throughput sequencing reads containing both alleles. For a tetraploid, the possible genotypes for a partial heterozygote (alleles A and B) would be AAAB, AABB and ABBB. For a hexaploid they are AAAAAB, AAAABB, AAABBB, AABBBB and ABBBBB. In general, the number of possible genotypes for a biallelic locus of a partially heterozygous $K$-ploid ($K=3,4,5,\ldots$) is $K-1$. A possible solution to this problem for SNPs would be to try to use existing genotype callers and to rely on the relative number of sequencing reads containing the two alleles (similar to what was done for microsatellites). However, this could lead to erroneous inferences when genotypes are simply fixed at point estimates based on read proportions without considering estimation error. Furthermore, when sequencing coverage is low, the number of genotypes that will appear to be equally probable increases with ploidy, making it difficult to distinguish among the possible partially heterozygous genotypes. \medskip In this paper we describe a model that aims to address the problems associated with ADU by treating genotypes as a latent variable in a hierarchical Bayesian model and using high throughput sequencing read counts as data. In this way we preserve the uncertainty that is inherent in polyploid genotypes by inferring a probability distribution across all possible values of the genotype, rather than treating them as being directly observed. This approach has been used by \cite{buerkle2013popModels} to deal with uncertainty in calling genotypes in diploids and the work we present here builds off of their earlier models. Our model assumes that the ploidy level of the population is known and that the genotypes of individuals in the population are drawn from a single underlying allele frequency for each locus. These assumptions imply that alleles in the population are undergoing polysomic inheritance without double reduction, which most closely adheres to the inheritance patterns of an autopolyploid. We acknowledge that the model in its current form is an oversimplification of biological reality and realize that it does not apply to a large portion of polyploid taxa. Nevertheless, we believe that accounting for ADU by modeling genotype uncertainty has the potential to be applied more broadly via modifications of the probability model used for the inheritance of alleles, which could lead to more generalized population genetic models for polyploids (see the \textbf{Extensibility} section of the \textbf{Discussion}). \medskip %%%%%%%%%%%%%%%%%%%%%%%% \section{Materials and Methods} %% %%%%%%%%%%%%%%%%%%%%%%%% \noindent Our goal is to estimate the frequency of a reference allele for each locus sampled from a population of known ploidy ($\psi$), where the reference allele can be chosen arbitrarily between the two alleles at a given biallelic SNP. To do this we extend the population genomic models of \cite{buerkle2013popModels}, which employ a Bayesian framework to model high throughput sequencing reads ($\bm{T},\bm{R}$), genotypes ($\bm{G}$) and allele frequencies ($\bm{p}$), to the case of arbitrary ploidy. 
The idea behind the model is to view the sequencing reads gathered for an individual as a random sample from the unobserved genotype at each locus.
Genotypes can then be treated as a parameter in a probability model that governs how likely it is that we see a particular number of sequencing reads carrying the reference allele.
Similarly, we can treat genotypes as a random sample from the underlying allele frequency in the population (assuming Hardy-Weinberg equilibrium).
For our model, a genotype is simply a count of the number of reference alleles at a locus, which can range from 0 (a homozygote with no reference alleles in the genotype) to $\psi$ (a homozygote with only reference alleles in the genotype).
All whole numbers in between 0 and $\psi$ represent partially heterozygous genotypes.
This hierarchical setup addresses the problems associated with ADU by treating genotypes as a latent variable that can be integrated out using Markov chain Monte Carlo (MCMC).
\medskip
\subsection{Model setup}
%%%%%%%%%%%%%%%%%%%%%%%%%%
\medskip
Here we consider a sample of $N$ individuals from a single population of ploidy level $\psi$ sequenced at $L$ unlinked SNPs.
The data for the model consist of two matrices containing counts of high throughput sequencing reads mapping to each locus for each individual: \rmat{} and \tmat.
The $N \times L$ matrix \tmat{} contains the total number of reads sampled at each locus for each individual.
Similarly, \rmat{} is an $N \times L$ matrix containing the number of sampled reads with the reference allele at each locus for each individual.
Then for individual $i$ at locus $\ell$, we model the number of sequencing reads containing the reference allele ($r_{i\ell}$) as a Binomial random variable conditional on the total number of sequencing reads ($t_{i\ell}$), the underlying genotype ($g_{i \ell}$) and a constant level of sequencing error ($\epsilon$)
\begin{equation}\label{likelihood}
P(r_{i \ell}|t_{i\ell}, g_{i \ell},\epsilon) = \binom{t_{i \ell}}{r_{i \ell}} g_\epsilon^{r_{i \ell}}(1-g_\epsilon)^{t_{i \ell}-r_{i \ell}}\,.
\end{equation}
\noindent Here $g_\epsilon$ is the probability of observing a read containing the reference allele corrected for sequencing error
\begin{equation}\label{g_error}
g_\epsilon = \left(\frac{g_{i \ell}}{\psi}\right)(1-\epsilon) + \left(1-\frac{g_{i \ell}}{\psi}\right)\epsilon \,.
\end{equation}
\noindent The intuition behind including error is that we want to calculate the probability that we observe a read containing the reference allele.
There are two ways that this can happen.
(1) Reads are drawn from the reference allele(s) in the genotype with probability $\frac{g_{i\ell}}{\psi}$ but are only observed as reference reads if they are not errors (probability $1-\epsilon$).
(2) Similarly, reads from the non-reference allele(s) in the genotype are drawn with probability $1-\frac{g_{i\ell}}{\psi}$ but can be mistakenly read as coming from a reference allele if an error occurs (probability $\epsilon$).
The sum across these two possibilities gives the overall probability of observing a read containing the reference allele.
If we also assume conditional independence of the sequencing reads given the genotypes, the joint probability distribution for sequencing reads is given by
\begin{equation}\label{factored_lik}
P(\bm{R}|\bm{T},\bm{G}, \epsilon) = \displaystyle\prod_{\ell=1}^L\displaystyle\prod_{i=1}^N P(r_{i \ell}|t_{i \ell},g_{i \ell}, \epsilon)\,.
\end{equation} \noindent Since the $r_{i \ell}$'s are the data that we observe, the product of $P(r_{i \ell}|t_{i\ell}, g_{i \ell},\epsilon)$ across loci and individuals will form the likelihood in the model. \medskip The next level in the hierarchy is the conditional prior for genotypes. We model each $g_{i \ell}$ as a Binomial random variable conditional on the ploidy level of the population and the frequency of the reference allele for locus $\ell$ ($p_{\ell}$): \begin{equation*} P(g_{i \ell}|\psi,p_{\ell}) = \binom{\psi}{g_{i \ell}}\,p_{\ell}^{\,g_{i \ell}}(1-p_{\ell})^{\psi-g_{i \ell}}\,. \end{equation*} \noindent We also assume that the genotypes of the sampled individuals are conditionally independent given the allele frequencies, which is equivalent to taking a random sample from a population in Hardy-Weinberg equilibrium. Factoring the distribution for genotypes and taking the product across loci and individuals gives us the joint probability distribution of genotypes given the ploidy level of the population and the vector of allele frequencies at each locus ($\bm{p}=\{p_1,\ldots,p_L\}$): \begin{equation}\label{condl_prior} P(\bm{G}|\psi, \bm{p}) = \displaystyle\prod_{\ell=1}^L\displaystyle\prod_{i=1}^N P(g_{i \ell}|\psi, p_{\ell})\,. \end{equation} \noindent We choose here to ignore other factors that may be influencing the distribution of genotypes such as double reduction. In general, double reduction will act to increase homozygosity \citep{hardy2015autopolyploids}. However, it is more prevalent for loci that are farther away from the centromere, which makes the estimation of a global double reduction parameter (typically denoted $\alpha$) inappropriate for the thousands of loci gathered from across the genome using techniques such as RADseq. It might be possible to estimate a per locus rate of double reduction ($\alpha_{\ell}$) but this would add an additional parameter that would need to be estimated for each locus, perhaps unnecessarily if the majority end up being equal, or close, to 0. \medskip The final level of the model is the prior distribution on allele frequencies. Assuming \textit{a priori} independence across loci, we use a Beta distribution with parameters $\alpha$ and $\beta$ both equal to $1$ as our prior distribution for each locus. A Beta(1,1) is equivalent to a Uniform distribution over the interval $[0,1]$, making our choice of prior uninformative. The joint posterior distribution of allele frequencies and genotypes is then equal to the product across all loci and all individuals of the likelihood, the conditional prior on genotypes and the prior distribution on allele frequencies up to a constant of proportionality \begin{align}\label{posterior} P(\,\bm{p},\bm{G}|\bm{T}, \bm{R},\epsilon) &\propto P(\bm{R}|\bm{T},\bm{G}, \epsilon)P(\bm{G}|\psi,\bm{p})P(\bm{p}) \nonumber \\[0.05in] &= \displaystyle\prod_{\ell=1}^L\displaystyle\prod_{i=1}^N P(r_{i \ell}|t_{i\ell}, g_{i \ell},\epsilon)P(g_{i \ell}|\psi, p_{\ell})P(p_{\ell})\,. \end{align} \noindent The marginal posterior distribution for allele frequencies can be obtained by summing over genotypes \begin{equation}\label{marg_post_p} P(\,\bm{p}|\bm{T}, \bm{R},\epsilon) \propto \displaystyle\sum_{\bm{G}} P(\,\bm{p},\bm{G}|\bm{T}, \bm{R},\epsilon)\,. \end{equation} \noindent It would also be possible to examine the marginal posterior distribution of genotypes but here we will focus primarily on allele frequencies. 
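\medskip
\noindent As a concrete illustration of Eqs. \ref{likelihood} and \ref{g_error}, the short sketch below evaluates the error-corrected reference read probability and the corresponding Binomial log-likelihood for each candidate genotype at one locus. It is provided for illustration only; it is not the \textsc{polyfreqs} implementation (which is written in R and C++), Python with NumPy and SciPy is assumed, and all names are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def g_eps(g, ploidy, eps):
    # Error-corrected probability of observing a reference read.
    return (g / ploidy) * (1 - eps) + (1 - g / ploidy) * eps

def read_loglik(r, t, g, ploidy, eps=0.01):
    # log P(r | t, g, eps): Binomial log-probability of r reference
    # reads out of t total reads given genotype g.
    return binom.logpmf(r, t, g_eps(g, ploidy, eps))

# Example: a tetraploid with 12 reads at a locus, 7 carrying the
# reference allele; compare the five possible genotypes 0, ..., 4.
for g in range(5):
    print(g, read_loglik(7, 12, g, ploidy=4))
\end{verbatim}
\noindent Summing such log-likelihood terms over individuals and loci gives the logarithm of Eq. \ref{factored_lik}.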
\medskip
\subsection{Full conditionals and MCMC using Gibbs sampling}
%%%%%%%%%%%%%%%%%%%%%%%%
\medskip
\noindent We estimate the joint posterior distribution for allele frequencies and genotypes in Eq. \ref{posterior} using MCMC.
This is done using Gibbs sampling of the states $(\,\bm{p},\bm{G})$ in a Markov chain by alternating samples from the full conditional distributions of $\bm{p}$ and $\bm{G}$.
Given the setup for our model using Binomial and Beta distributions (which form a conjugate family), closed-form expressions for these full conditional distributions can be readily obtained \citep{gelman2014bayesian}.
The full conditional distribution for allele frequencies is Beta distributed and is given by Eq. \ref{p-full} below:
\begin{equation}\label{p-full}
p_{\ell}\,|\,g_{i \ell},r_{i \ell},\epsilon \: \sim \: \text{Beta}\left(\alpha= \sum_{i=1}^N g_{i \ell} +1,\; \beta = \sum_{i=1}^N (\psi-g_{i \ell})+1\right),\quad \text{for } \ell = 1,\ldots,L.
\end{equation}
\noindent This full conditional distribution for $p_{\ell}$ has a natural interpretation as it is roughly centered at the number of sampled allele copies carrying the reference allele divided by the total number of allele copies sampled.
The ``$+1$'' comes from the prior distribution and will not have a strong influence on the posterior when the sample size is large.
\medskip
The full conditional distribution for genotypes is a discrete categorical distribution over the possible values for the genotypes $(0,\ldots,\psi)$.
The distribution for individual $i$ at locus $\ell$ is
\begin{align}\label{G-full}
P(g_{i \ell}|g_{(\text{-}i) \ell},p_{\ell},r_{i \ell},\epsilon) \propto &\binom{t_{i\ell}}{r_{i\ell}} g_\epsilon^{r_{i \ell}}(1-g_\epsilon)^{t_{i \ell}-r_{i \ell}} \nonumber \\[0.05in]
&\quad \times \binom{\psi}{g_{i \ell}}p_{\ell}^{\,g_{i \ell}}\,(1-p_{\ell})^{\psi-g_{i \ell}}\,,
\end{align}
\noindent where $g_{(\text{-}i) \ell}$ is the value of the genotypes for all sampled individuals excluding individual $i$ and $g_\epsilon$ is the same as Eq. \ref{g_error}.
The full conditional distribution for genotypes can be seen as the product of two quantities, normalized over the possible genotypes: (1) the probability of each of the possible genotypes based on the observed reference reads and (2) the probability of drawing each genotype given the allele frequency for that locus in the population.
\medskip
We begin our Gibbs sampling algorithm in a random position in parameter space through the use of uniform probability distributions.
The genotype matrix is initialized with random draws from a Discrete Uniform distribution ranging from $0$ to $\psi$ and the initial allele frequencies are drawn from a Uniform distribution on the interval [0,1].
\medskip
\subsection{Simulation study}
%%%%%%%%%%%%%%%%%%%%%%%
\medskip
Simulations were performed to assess error rates in allele frequency estimation for tetraploid, hexaploid and octoploid populations ($\psi$ = 4, 6 and 8, respectively).
Data were generated under the model by sampling genotypes from a Binomial distribution conditional on a fixed, known allele frequency $(\,p_{\ell} = 0.01, 0.05, 0.1, 0.2, 0.4)$.
Total read counts were simulated for a single locus using a Poisson distribution with mean coverage equal to 5, 10, 20, 50 or 100 reads per individual.
We then sampled the number of sequencing reads containing the reference allele from a Binomial distribution conditional on the number of total reads, the genotype and sequencing error (Eq. \ref{likelihood}; $\epsilon$ fixed to 0.01).
Finally, we varied the number of individuals sampled per population $(N = 5, 10, 20, 30)$ and ran all possible combinations of the simulation settings. Our choice for the number of individuals to simulate was intended to reflect sampling within a \textit{single} population/locality and not that of an entire population genetics study. Furthermore, RAD sequencing is used at various taxonomic levels from population genetics to phylogenetics \citep[e.g.,][]{rheindt2013zimmerius,eaton2015oaks}, and we wanted our simulations to be informative across these applications. Each combination of sequencing coverage, individuals sampled and allele frequency was analyzed using 100 replicates for tetraploid, hexaploid and octoploid populations for a total of 30,000 simulation runs. MCMC analyses using Gibbs sampling were run for 100,000 generations with parameter values stored every 100th generation. The first 25\% of the sample was discarded as burn-in, resulting in 750 posterior samples for each replicate. Convergence on the stationary distribution, $P(\,\bm{p},\bm{G}|\bm{T},\bm{R},\epsilon)$, was assessed by examining trace plots for a subset of runs for each combination of settings and ensuring that the effective sample sizes (ESS) were greater than 200. Deviations from the known underlying allele frequency used to simulate each data set were assessed by taking the posterior mean of each replicate and calculating the root mean squared error (RMSE) based on the true underlying value. We also compared the posterior mean as an estimate of the allele frequency at a locus to a more simple estimate calculated directly from the read counts (mean read ratio): $\frac{1}{N}\sum_i\frac{r_{i\ell}}{t_{i\ell}}$. Comparisons between estimates were again made using the RMSE. \medskip All simulations were performed using the R statistical programming language \citep{r2014} on the Oakley cluster at the Ohio Supercomputer Center (\url{https://osc.edu}). Figures were generated using the R packages \textsc{ggplot2} \citep{wickham2009ggplot2} and \textsc{reshape} \citep{wickham2007reshape}, with additional figure manipulation completed using Inkscape (\url{https://inkscape.org}). MCMC diagnostics were done using the \textsc{coda} package \citep{plummer2006coda}. All scripts are available on GitHub (\url{https://github.com/pblischak/polyfreqs-ms-data}) in the \texttt{`code/'} folder and all simulated data sets are in the \texttt{`raw\_data/'} folder. \medskip \subsection{Example analyses of autotetraploid potato (\textit{Solanum tuberosum})} %%%%%%%%%%%%%%%%%%%%%%%%%%% \medskip To further evaluate the model and to demonstrate its use we present an example analysis using an empirical data set collected for autotetraploid potato (\textit{Solanum tuberosum}) using the Illumina GoldenGate platform \citep{anithakumari2010goldenGate,voorrips2011fitTetra}. Though these data aren't the typical reads returned by RADseq experiments, they still represent the same type of binary response data that our model uses to get a probability distribution for biallelic SNP genotypes. A detailed walkthrough with the code used for each step is provided as Supplemental Material. The data set and output are also available on GitHub (\url{https://github.com/pblischak/polyfreqs-ms-data}) in the \texttt{`example/'} folder. 
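\medskip
\noindent For readers who want a compact picture of how the full conditional distributions in Eqs. \ref{p-full} and \ref{G-full} are used, the sketch below implements one Gibbs sweep for a single locus and simulates data in the spirit of the simulation study described above. It is an illustrative Python/NumPy sketch only, not the \textsc{polyfreqs} code (which is written in R and C++), and every name and setting is arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

def gibbs_sweep(p, g, r, t, ploidy, eps=0.01):
    # One Gibbs update of (p, g) at a single locus.
    # p: allele frequency; g: genotypes (length N);
    # r, t: reference and total read counts (length N).
    # Update p from its Beta full conditional.
    p = rng.beta(g.sum() + 1, (ploidy - g).sum() + 1)
    # Update each genotype from its discrete full conditional.
    geno = np.arange(ploidy + 1)
    g_eps = (geno / ploidy) * (1 - eps) + (1 - geno / ploidy) * eps
    for i in range(len(g)):
        w = binom.pmf(r[i], t[i], g_eps) * binom.pmf(geno, ploidy, p)
        g[i] = rng.choice(geno, p=w / w.sum())
    return p, g

# Simulate one tetraploid locus (N = 20, p = 0.2, ~10x coverage).
N, ploidy, p_true = 20, 4, 0.2
g_true = rng.binomial(ploidy, p_true, size=N)
t = rng.poisson(10, size=N)
r = rng.binomial(t, (g_true / ploidy) * 0.99 + (1 - g_true / ploidy) * 0.01)

# Random starting values, then iterate and average post burn-in draws of p.
p_cur, g_cur = rng.uniform(), rng.integers(0, ploidy + 1, size=N)
draws = []
for it in range(2000):
    p_cur, g_cur = gibbs_sweep(p_cur, g_cur, r, t, ploidy)
    if it >= 500:
        draws.append(p_cur)
print(np.mean(draws))
\end{verbatim}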
\medskip \subsubsection{\itshape Calculating expected and observed heterozygosity} \medskip One advantage of using a Bayesian framework for our model is that we can approximate a posterior distribution for any quantity that is a functional transformation of the parameters that we are estimating without doing any additional MCMC simulation \citep{gelman2014bayesian}. Two such quantities that are often used in population genetics are the observed and expected heterozygosity, which are in turn used for calculating the various fixation indices ($F_{IS}$, $F_{IT}$, $F_{ST}$) introduced by \cite{wright1951Fstats}. To analyze levels of heterozygosity in this way, we used the estimators of \cite{hardy2015autopolyploids} to calculate the per locus observed ($\mathcal{H}_o$) and expected ($\mathcal{H}_e$) heterozygosity for each stored sample of the joint posterior distribution in Eq. \ref{posterior}. This procedure is especially useful because it estimates heterozygosity while taking into account ADU by utilizing the marginal posterior distribution of genotypes. Given a total of $M$ posterior samples of genotypes and allele frequencies, we calculate the $m^\text{th}$ ($m=1,\dots,M$) estimate of the observed heterozygosity using Eq. \ref{het-obs} [numerator of Eq. 7 in \cite{hardy2015autopolyploids}]: \begin{equation}\label{het-obs} \mathcal{H}^{[m]}_o = \frac{1}{N} \sum_i h_{i}^{[m]} = \frac{1}{N} \sum_i \frac{g_{i\ell}^{[m]}(\psi-g_{i\ell}^{[m]})}{\binom{\psi}{2}}\, . \end{equation} \noindent Similarly, the $m^\text{th}$ estimate of the expected heterozygosity is calculated using Eq. \ref{het-exp} [denominator of Eq. 8 in \cite{hardy2015autopolyploids}]: \begin{equation}\label{het-exp} \mathcal{H}^{[m]}_e = \frac{N}{N-1} \left[1 - (p_{\ell}^{[m]})^2 - (1-p_{\ell}^{[m]})^2 - \frac{\psi-1}{\psi N^2}\sum_i h_{i}^{[m]}\right]\,. \end{equation} \noindent The posterior distribution of a multi-locus estimate of heterozygosity can then be approximated by taking the average across loci for each of the per locus posterior samples. \medskip To evaluate levels of heterozygosity in autotetraploid potato, we obtained biallelic count data for 224 accessions collected at 384 loci using the Illumina GoldenGate platform from the R package \textsc{fitTetra} \citep{voorrips2011fitTetra}, which provides the data set as part of the package. We chose the `X' reading to be the count data for the reference allele and added the `X' and `Y' readings together to get the total read counts (`X' and `Y' represent the counts of the two alternative alleles). Initial attempts to analyze the data set using our Gibbs sampling algorithm were unsuccessful due to arithmetic underflow. This was due to the fact that the counts/intensities returned by the Illumina GoldenGate platform are on a different scale ($\sim$10,000-20,000+) than the read counts that would be expected from a RADseq experiment. To alleviate this problem, we rescaled the data set while preserving the relative dosage information by dividing the GoldenGate count readings by 100 and rounding to the nearest whole number. We then analyzed the rescaled count data using 100,000 MCMC generations, sampling every 100 generations and using the stored samples of the allele frequencies and genotypes to calculate the observed and expected heterozygosity for a total of 1,000 posterior samples of the per locus observed and expected heterozygosity. 
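\medskip
\noindent As a concrete illustration of Eqs. \ref{het-obs} and \ref{het-exp}, the following short sketch (illustrative Python/NumPy only; not the \texttt{het\_obs} and \texttt{het\_exp} functions in \textsc{polyfreqs}) computes the per locus observed and expected heterozygosity for a single stored posterior draw of the genotypes and allele frequency.
\begin{verbatim}
import numpy as np

def het_obs(g_draw, ploidy):
    # Observed heterozygosity for one posterior draw of the genotypes.
    h = g_draw * (ploidy - g_draw) / (ploidy * (ploidy - 1) / 2)
    return h.mean()

def het_exp(g_draw, p_draw, ploidy):
    # Expected heterozygosity for one posterior draw of (genotypes, p).
    N = len(g_draw)
    h = g_draw * (ploidy - g_draw) / (ploidy * (ploidy - 1) / 2)
    return (N / (N - 1)) * (1 - p_draw**2 - (1 - p_draw)**2
                            - (ploidy - 1) / (ploidy * N**2) * h.sum())

# Example draw for a tetraploid locus with 10 individuals.
g_draw = np.array([0, 1, 1, 2, 2, 2, 3, 1, 0, 2])
print(het_obs(g_draw, 4), het_exp(g_draw, 0.4, 4))
\end{verbatim}
\noindent Applying these functions to every stored sample of the genotypes and allele frequencies and averaging across loci yields posterior draws of the multi-locus estimates.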
We also compared post burn-in (25\%) allele frequency estimates based on the posterior mean to the simple allele frequency estimate based directly on read counts used previously (mean read ratio).
Posterior distributions for multi-locus estimates of observed and expected heterozygosity were obtained by taking the average across loci for each posterior sample of the per locus estimates using a burn-in of 25\%.
\medskip
\subsubsection{{\itshape Evaluating model adequacy}}
\medskip
As noted earlier, the probability model that we use for the inheritance of alleles is one of polysomy without double reduction.
In some cases, this model may be inappropriate.
Therefore, it can be informative to check for loci that do not follow the model that we assume.
Below we describe a procedure for rejecting our model of inheritance on a per locus basis using comparisons with the posterior predictive distribution of sequencing reads.
Model checking is an important part of making statistical inferences and can play a role in understanding when a model adequately describes the data being analyzed.
In the case of our model, it can serve as a basis for understanding the inheritance patterns of the organism being studied by determining which loci adhere to a simple pattern of polysomic inheritance.
Other sources of disequilibrium that could indicate poor model fit include inbreeding, null alleles and allele drop out \citep[\textit{sensu}][]{arnold2013RADseq}, making this posterior predictive model check more broadly applicable for RADseq data.
\medskip
Given $M$ posterior samples for the allele frequencies at locus $\ell$, $\left\{p_{\ell}^{[1]},p_{\ell}^{[2]},\ldots,p_{\ell}^{[M]} \right\}$, we simulate new values for the genotypes ($\tilde{g}_{i \ell}$) and reference read counts ($\tilde{r}_{i \ell}$) for all individuals and use the ratio of simulated reference read counts to observed total read counts $\left( \frac{\tilde{r}_{i \ell}}{t_{i \ell}} \right)$ as a summary statistic for comparing the observed read count ratios to the distribution of the predicted read count ratios.
The use of the likelihood (or similar quantities) as a summary statistic has been a common practice in posterior predictive comparisons of nucleotide substitution models, and more recently for comparative phylogenetics \citep{ripplinger2010DNAmodels,reid2014poorfit,pennell2015adequacy}.
We use the ratio of reference to total read counts here because it is the maximum likelihood estimate of the probability of success for a Binomial random variable and because it is a simple quantity to calculate.
The use of other summary statistics, or a combination of multiple summary statistics, would also be possible.
The procedure for our posterior predictive model check is as follows:
\medskip
\begin{enumerate}
\item For locus $\ell = 1,\ldots,L$:
\begin{enumerate}[label={\arabic{enumi}.\arabic*.}]
\item For posterior sample $m = 1,\ldots,M$:
\begin{enumerate}[label={\arabic{enumi}.\arabic{enumii}.\arabic*.}]
\item Simulate new genotype values $\left( \tilde{g}_{i \ell}^{[m]}\right)$ for all individuals ($i = 1,\ldots,N$) by drawing from a $\text{Binomial}\left( \psi,p_{\ell}^{[m]} \right)$.
\item Simulate new reference read counts $\left( \tilde{r}_{i \ell}^{[m]} \right)$ from each new genotype for all individuals by drawing from Eq. \ref{likelihood}.
\item Calculate the reference read ratio for the simulated data for sample $m$ and sum across individuals: $\mathcal{\tilde{S}}_{\ell}^{[m]} = \sum_{i=1}^{N} \left(\frac{\tilde{r}_{i \ell}^{[m]}}{t_{i \ell}} \right)$. \item Calculate the reference read ratio for the observed data and sum across individuals: $\mathcal{S}_{\ell} = \sum_{i=1}^{N} \left(\frac{r_{i \ell}}{t_{i \ell}} \right)$. \end{enumerate} \item Calculate the difference between the observed reference read ratio and the $M$ simulated reference read ratios: $\left\{ \mathcal{S}_{\ell}-\mathcal{\tilde{S}}_{\ell}^{[1]},\ldots,\mathcal{S}_{\ell}-\mathcal{\tilde{S}}_{\ell}^{[M]}\right\}$. \end{enumerate} \item Determine if the 95\% highest posterior density (HPD) interval of the distribution of re-centered reference read ratios contains 0. \end{enumerate} \medskip When the distribution of the differences in ratios between the observed and simulated data sets does not contain 0 in the 95\% HPD interval, it provides evidence that the locus being examined does not follow a pattern of strict polysomic inheritance. A similar approach could be used on an individual basis by comparing the observed ratio of reference reads to the predicted ratios for each individual at each locus. We used this posterior predictive model checking procedure to assess model adequacy in the potato data set using the posterior distribution of allele frequencies estimated in the previous section with 25\% of the samples discarded as burn-in. \medskip %%%%%%%%%%%%%%% \section{Results} %% %%%%%%%%%%%%%%% Our Gibbs sampling algorithm was able to accurately estimate allele frequencies for a number of simulation settings while simultaneously allowing for genotype uncertainty. There were no indications of a lack of convergence (ESS values > 200) for any of the simulation replicates and all trace plots examined also indicated that the Markov chain had reached stationarity. Running the MCMC for 100,000 generations and sampling every 100th generation appeared to be suitable for our analyses and we recommend it as a starting point for running most data sets. Reducing the number of generations and sampling more frequently (e.g., 50,000 generations sampled every 50 generations) could be a potential work around for larger data sets. When doing test runs we went as low as 20,000 generations sampled every 20th generation, which still passed our diagnostic tests for convergence. This is likely because the parameter space of our model is not overly difficult to navigate so stationarity is reached rather quickly. Ultimately, the deciding factor on how long to run the analysis and how frequently to sample the chain will come down to assessing convergence. \medskip \subsection{Simulation study} \medskip Increasing the number of individuals sampled had the largest effect on the accuracy of allele frequency estimation (Figure \ref{fig1:rmse}). Since allele frequencies are population parameters, it is not surprising that sampling more individuals from the population leads to better estimates. This appears to be the case even when sequencing coverage is quite low (5x, 10x), which corroborates the observations made by \cite{buerkle2013popModels}. This is not to say, however, that sequencing coverage has no effect on the posterior distribution of allele frequencies. Lower sequencing coverage affects the posterior distribution by increasing the posterior standard deviation (Figure \ref{fig2:coverage-sd}). 
An interesting pattern that emerged during the simulation study is the observation that the allele frequencies closer to 0.5 tend to have higher error rates, which is to be expected given that the variance of a Binomial random variable is highest when the probability of success is 0.5. We also observed small differences in the RMSE between ploidy levels, with estimates increasing in accuracy with increasing ploidy. Comparisons between the posterior mean and mean read ratio estimates of allele frequencies (Figure S1) show that the estimate based on read ratios has a lower RMSE than the posterior mean when the true allele frequency is low ($p_\ell=0.01, 0.05$) but has higher error rates than the posterior mean for allele frequencies closer to 0.5. When sequencing coverage is greater than 10x and the number of individuals sampled is greater than 20, the two estimates are almost indistinguishable. \medskip \subsection{Example analyses} \medskip Our analyses of \textit{Solanum tuberosum} tetraploids showed levels of heterozygosity consistent with a pattern of excess outbreeding ($\mathcal{H}_o > \mathcal{H}_e$). In fact, the posterior distributions of the multi-locus estimates of observed and expected heterozygosity do not overlap at all (Figure \ref{fig3:het}). The assessment of model adequacy also showed that 49 out of the 384 loci ($\sim$13\%) were a poor fit to the model of polysomic inheritance that we assume. The allele frequency estimates using the posterior mean and the mean read ratio provided similar estimates and were comparable for most loci. For loci in which the frequency of the reference allele is very low, the read ratio estimate tends to be higher than the posterior mean. However, the overall pattern does not indicate over or under estimation for most allele frequencies (Figure S2). When we took the difference between the estimates at each locus, the distribution was centered near 0 (Figure S3). \medskip %%%%%%%%%%%%%%%%% \section{Discussion} %% %%%%%%%%%%%%%%%%% The inference of population genetic parameters and the demographic history of non-model polyploid organisms has consistently lagged behind that of diploids. The difficulties associated with these inferences present themselves at two levels. The first of these is the widely known inability to determine the genotypes of polyploids due to ADU. Even though there have been theoretical developments in the description of models for polyploid taxa as early as the 1930s, a large portion of this population genetic theory relies on knowledge about individuals' genotypes \citep[e.g.,][]{haldane1930autopolyploids,wright1938polyploid}. The second complicating factor is the complexity of inheritance patterns and changes in mating systems that often accompany WGD events. Polyploid organisms can sometimes mate by both outcrossing or selfing, and can display mixed inheritance patterns at different loci in the genome \citep{dufresne2014polyPopGen}. If genotypes were known, then it might be easier to develop and test models for dealing with and inferring rates of selfing versus outcrossing, as well as understanding inheritance patterns across the genome. However, ADU only compounds the problems associated with these inferences, making the development and application of appropriate models far more difficult \citep[but see list of software in][]{dufresne2014polyPopGen}. The model we have presented here deals with the first of these two issues by not treating genotypes as observed quantities. 
Almost all other methods of genotype estimation for polyploids treat the genotype as the primary parameter of interest. Our model is different in that we still use the read counts generated by high throughput sequencing platforms as our observed data but instead integrate across genotype uncertainty when inferring other parameters, thus bypassing the problems caused by ADU. \medskip Despite our focus on bypassing ADU, an important consideration for the model we present here is that, because it approximates the joint posterior distribution of allele frequencies and genotypes, it would also be possible to use the marginal posterior distribution of genotypes to make inferences using existing methods. This could be done using the posterior mode as a maximum \textit{a posteriori} (MAP) estimate of the genotype for downstream analyses, followed by analyzing the samples taken from the marginal posterior distribution of genotypes. The resulting set of estimates would not constitute a ``true'' posterior distribution of downstream parameters but would allow researchers to interpret their results based on the MAP estimate of the genotypes while still getting a sense for the amount of variation in their estimates. Using the marginal posterior distribution of genotypes in this way could technically be applied to any type of polyploid, but is only really appropriate for autopolyploids due to the model of inheritance that is used. Other methods for estimating SNP genotypes from high throughput sequencing data include the program \textsc{SuperMASSA}, which models the relative intensity of the two alternative alleles using Normal densities \citep{serang2012supermassa}. \medskip A second important factor for using our model is that, although estimates of allele frequencies can be accurate when sequencing coverage is low and sample sizes are large (see Figure S4 for a direct comparison between sample size and coverage), the resulting distribution for genotypes is likely going to be quite diffuse. For analyses that treat genotypes as a nuisance parameter, this is not an issue since we can integrate across genotype uncertainty. However, if the genotype \textit{is} of primary interest, then the experimental design of the study will need to change to acquire higher coverage at each locus for more accurate genotype estimation. Therefore, the decision between sequencing more individuals with lower average coverage versus sequencing fewer individuals with higher average coverage depends primarily on whether the genotypes will be used or not. \medskip \subsection{Extensibility} \medskip The modular nature of our hierarchical model can allow for the addition and modification of levels in the hierarchy. One of the simplest extensions to the model that can build directly on the current setup would be to consider loci with more than two alleles. This can be done using Multinomial distributions for sequencing reads and genotypes and a Dirichlet prior on allele frequencies \citep[the Multinomial and Dirichlet distributions form a conjugate family;][]{gelman2014bayesian}. We could also model populations of mixed ploidy by using a vector of individually assigned ploidy levels instead of assuming a single value for the whole population $(\bm{\psi} = \{\psi_1,\ldots,\psi_N\})$. However, this would assume random mating among ploidy levels. 
\medskip
\subsubsection{{\itshape Double reduction}}
\medskip
The inclusion of double reduction into the model is a difficult consideration for genome-wide data collected using high throughput sequencing platforms.
The number of parameters estimated by our model is $L\times(N+1)$ and including double reduction would add an additional $L$ parameters, bringing the total to $L\times(N+2)$.
Though the addition of these parameters would not prohibit an analysis using Gibbs sampling, we chose to implement the simpler equilibrium model.
We hope to include double reduction in future models but feel that our posterior predictive model checking procedure will prove sufficient for identifying loci in disequilibrium with our current implementation.
Another concern that we had regarding double reduction is that it can be confounded with the overall signal of inbreeding, making it especially difficult to tease apart the specific effects of double reduction alone \citep{hardy2015autopolyploids}.
However, because the probability of double reduction at a locus ($\alpha_\ell$) depends on its distance from the centromere (call it $x_\ell$), a potential way to estimate $\alpha_\ell$ would be to use the $x_\ell$'s as predictor variables in a linear model: $\alpha_{\ell} = \beta_0 + \beta_1 x_{\ell}$.
This would only add two additional parameters ($\beta_0$ and $\beta_1$) that would need to be estimated and would be completely independent of the number of loci analyzed.
The downside to this approach is that it would only be applicable for polyploid organisms with sequenced genomes (or the genome of a diploid progenitor), making the use of such a model impractical for the time being.
\medskip
\subsubsection{{\itshape Additional levels in the hierarchical model}}
\medskip
The place where we believe our model could have the greatest impact is through modifications and extensions of the probability model used for the inheritance of alleles.
These models have been difficult to apply in the past as a result of genotype uncertainty.
However, using our model as a starting point, it could be possible to infer patterns of inheritance (polysomy, disomy, heterosomy) and other demographic parameters (e.g., effective population size, population differentiation) without requiring direct knowledge about the genotypes of the individuals in the population.
For example, Haldane's (\citeyear{haldane1930autopolyploids}) model of genotype frequencies for autopolyploids that are partially selfing could be used to infer the prevalence of self-fertilization within a population.
Another possible approach would be to use general disequilibrium coefficients ($D_A$) to model departures from Hardy-Weinberg equilibrium \citep{hernandez1989disequilibrium,weir1996GeneticDataAnalysisII}.
A more recent model described by \cite{stift2008polyploidInheritance} used microsatellites to infer the different inheritance patterns (disomic, tetrasomic, intermediate) for tetraploids in the genus \textit{Rorippa} (Brassicaceae) following crossing experiments.
The reformulation of such a model for biallelic SNPs gathered using high throughput sequencing could provide a suitable framework for understanding inheritance patterns across the genome.
An ideal model would be one that could help to understand genome-wide inheritance patterns for a polyploid of arbitrary formation pathway (autopolyploid $\leftrightarrow$ allopolyploid) without the need to conduct additional experiments.
However, to our knowledge, such a model does not currently exist.
\medskip
%%%%%%%%%%%%%%%%%
\section{Conclusions} %%
%%%%%%%%%%%%%%%%%
The recent emergence of models for genotype uncertainty in diploids has introduced a theoretical framework for dealing with the fact that genotypes are unobserved quantities \citep{gompert2012bgc,buerkle2013popModels}. Our extension of this theory to cases of higher ploidy (specifically to autopolyploids) progresses naturally from the original work but also serves to alleviate the deeper issue of ADU. The power and flexibility of these models as applied at the diploid level have the potential to be replicated for polyploid organisms with the addition of suitable models for allelic inheritance. The construction of hierarchical models containing probability models for ADU, allelic inheritance and perhaps even additional levels for important parameters such as F-statistics or the allele frequency spectrum also has the potential to provide key insights into the population genetics of polyploids \citep{gompert2011bamova,buerkle2013popModels}. Future work on such models will help to progress the study of polyploid taxa and could eventually lead to more generalized models for understanding the processes that have shaped their evolutionary histories.
\medskip
%%%%%%%%%%%%%%%%
\section{Software note} %%
%%%%%%%%%%%%%%%%
We have combined the scripts for our Gibbs sampler into an R package---\textsc{polyfreqs}---which is available on GitHub (\url{https://github.com/pblischak/polyfreqs}). Though \textsc{polyfreqs} is written in R, it deals with the large data sets that are generated by high throughput sequencing platforms in two ways. First, it takes advantage of R's ability to incorporate C++ code via the \textsc{Rcpp} and \textsc{RcppArmadillo} packages, allowing for a faster implementation of our MCMC algorithm \citep{eddelbuettel2011rcpp,eddelbuettel2013rcppBook,eddelbuettel2014rcpparmadillo}. Second, since the model assumes independence between loci, \textsc{polyfreqs} can facilitate the process of parallelizing analyses by splitting the total read count and reference read count matrices into subsets of loci which can be analyzed at the same time on separate nodes of a computing cluster. Additional features of the program include:
\medskip
\begin{itemize}
\item Estimation of posterior distributions of per-locus observed and expected heterozygosity (\texttt{het\_obs} and \texttt{het\_exp}, respectively).
\item Maximum \textit{a posteriori} (posterior mode) estimation of genotypes using the \texttt{get\_map\_genotypes()} function.
\item Posterior predictive model checking using the \texttt{polyfreqs\_pps()} function.
\item Simulation of high throughput sequencing read counts and genotypes from user-specified allele frequencies using the \texttt{sim\_reads()} function.
\item Options for controlling program output such as writing genotype samples to file, printing MCMC updates to the R console, etc.
\item Simple input format using tab-delimited text files that can be directly imported into R using the \texttt{read.table()} function. The format is as follows:
\begin{enumerate}
\item An optional row of locus names (use \texttt{header=TRUE} to specify this in \texttt{read.table()}).
\item One row for each individual.
\item First column contains individual names (use \texttt{row.names=1} to specify this in \texttt{read.table()}).
\item One column for each locus.
\end{enumerate} \end{itemize} \medskip %%%%%%%%%%%%%%%%%%%%%%% \section{Acknowledgements} %% %%%%%%%%%%%%%%%%%%%%%%% The authors would like to thank the Ohio Supercomputer Center for access to computing resources and Nick Skomrock for assistance with deriving the full conditional distributions of the model in the diploid case. We would also like to thank Frederic Austerlitz, Aaron Wenzel, members of the Wolfe and Kubatko labs and 3 anonymous reviewers for their helpful comments on the manuscript. This work was partially funded through a grant from the National Science Foundation (DEB-1455399) to ADW and LSK. \medskip %%%%%%%%%%%%%%%%% % References %% %%%%%%%%%%%%%%%%% \singlespacing \bibliographystyle{MolEcol} \bibliography{refs} %\doublespacing %%%%%%%%%%%%%%%%%%%%%%% \section{Author Contributions} %% %%%%%%%%%%%%%%%%%%%%%%% Conceived of the study: PDB, LSK and ADW. PDB derived the polyploid model, ran the simulations and other analyses, coded the R package and wrote the initial draft of the manuscript. PDB, LSK and ADW reviewed all parts of the manuscript and all authors approved of the final version. \medskip %%%%%%%%%%%%%%%%%%%%%% \section{Data Accessibility} %% %%%%%%%%%%%%%%%%%%%%%% Scripts for simulating the data sets, analyzing them using Gibbs sampling and producing the figures from the resulting output can all be found on GitHub, along with the original simulated data sets and autotetraploid potato data (\url{https://github.com/pblischak/polyfreqs-ms-data}). We also provide an implementation of the Gibbs sampler for estimating allele frequencies in the R package \textsc{polyfreqs} (\url{https://github.com/pblischak/polyfreqs}). See the package vignette or GitHub wiki for more details (\url{https://github.com/pblischak/polyfreqs/wiki}). %\newpage %\vspace{2in} % Only way we could get Data Accessibility section to fit before Table 1 w/o using a linebreak. %%%%%%%%%% % Tables %% %%%%%%%%%% \begin{table} \centering \rowcolors{1}{white}{gray!25} \caption{Notation and symbols used in the description of the model for estimating allele frequencies in polyploids. Vector and matrix forms of the variables are also provided when appropriate.} \vspace{0.25in} \bgroup \def\arraystretch{1.45} \begin{tabular}[l]{l | l} \hline \textbf{Symbol} & \textbf{Description}\\ \hline $L$ & The number of loci. \\ $\ell$ & Index for loci ($\ell\; \in \{1,\ldots,L\}$). \\ $N$ & Total number of individuals sequenced. \\ $i$ & Index for individuals ($i\; \in \{1,\ldots,N\}$). \\ $\psi$ & The ploidy level of individuals in the population (e.g., tetraploid: $\psi$=4). \\ $p_{\ell}$ & Frequency of the reference allele at locus $\ell$. [$\bm{p}$] \\ $g_{i \ell}$ & The number of copies of the reference allele for individual $i$ at locus $\ell$. [$\bm{G}$] \\ $\tilde{g}_{i \ell}$ & Simulated genotype for posterior predictive model checking. \\ $g_\epsilon$ & The probability of observing a reference read corrected for sequencing error. \\ $t_{i \ell}$ & The total number of reads for individual $i$ at locus $\ell$. [$\bm{T}$] \\ $r_{i \ell}$ & The number of reads with the reference allele for individual $i$ at locus $\ell$. [$\bm{R}$] \\ $\tilde{r}_{i \ell}$ & Simulated reference read count for posterior predictive model checking. \\ $\epsilon$ & Sequencing error. 
\\ $\mathcal{H}_e, \mathcal{H}_o$ & Expected and observed heterozygosity.\\ \hline
\end{tabular}
\egroup
\label{table1}
\vspace{0.25in}
\end{table}
%%%%%%%%%%%
% Figures %%
%%%%%%%%%%%
\begin{figure}
\centering
\caption{Error in allele frequency estimation as measured by the RMSE of posterior means. Columns represent the different allele frequencies used to simulate read data (0.01, 0.05, 0.1, 0.2, 0.4), rows represent the number of individuals sampled from the population (5, 10, 20, 30). Each individual plot shows the RMSE of the estimates for each ploidy level (tetra, hex, octo) across the different levels of coverage (5x, 10x, 20x, 50x, 100x). The best scenario is in the bottom left with 30 individuals sampled and an allele frequency of 0.01. The worst scenario is in the upper right corner with 5 individuals sampled and an allele frequency of 0.4. Looking across rows shows that error increases as allele frequencies get closer to 0.5. Looking up and down columns shows that error increases as the number of individuals decreases. Within each plot, increasing sequence coverage does not have as large an effect on error, and differences in ploidy show that error decreases as ploidy increases.}
\vspace{0.25in}
\includegraphics{pdf/fig1}
\label{fig1:rmse}
\end{figure}
\begin{figure}
\centering
\caption{The posterior standard deviation for allele frequencies decreases with increasing sequencing coverage. This plot provides a comparison of the distribution of the posterior standard deviations of the 100 replicates performed for each level of sequencing coverage (5x, 10x, 20x, 50x, 100x) for the hexaploid simulation with 30 individuals sampled from the population and an allele frequency of 0.2.}
\vspace{0.25in}
\includegraphics{pdf/fig2}
\label{fig2:coverage-sd}
\end{figure}
\begin{figure}
\centering
\caption{Posterior distributions of the multi-locus estimates of expected and observed heterozygosity in \textit{Solanum tuberosum}. The observed heterozygosity is higher than the expected, consistent with a pattern of excess outbreeding.}
\vspace{0.25in}
\includegraphics{pdf/fig3}
\label{fig3:het}
\end{figure}
\end{document}
{ "alphanum_fraction": 0.7859759927, "avg_line_length": 131.1578947368, "ext": "tex", "hexsha": "a9133fa1c356fb0d157de9f0f146c1eb15a8ad16", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "63beb1e3eb0b0af35060040cb8e9590b7ddb51a0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pblischak/polyfreqs-ms-data", "max_forks_repo_path": "doc/polyfreqs-ms.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "63beb1e3eb0b0af35060040cb8e9590b7ddb51a0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pblischak/polyfreqs-ms-data", "max_issues_repo_path": "doc/polyfreqs-ms.tex", "max_line_length": 2511, "max_stars_count": null, "max_stars_repo_head_hexsha": "63beb1e3eb0b0af35060040cb8e9590b7ddb51a0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pblischak/polyfreqs-ms-data", "max_stars_repo_path": "doc/polyfreqs-ms.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13807, "size": 57316 }
% !TeX root = beamer.tex %!TeX spellcheck = en-US \documentclass[10pt]{beamer} \usepackage{fontspec} \usepackage{iftex} % \ifPDFTeX{\PackageError{Latexmk}{Not in XeLatex}{Should be in XeLaTeX for proper font rendering }}\else\fi \newcommand\newlist{} \newcommand\renewlist{} \usepackage{amsmath,amssymb} \usepackage{supertabular} % \usepackage[ % includeheadfoot, head=13pt, foot=2pc, % paperwidth=6.75in, paperheight=10in, % top=58pt, bottom=44pt, inner=46pt, outer=46pt, % marginparwidth=2pc,heightrounded % ]{geometry} \usepackage{geometry} \usepackage{ifthen} % \usepackage{pdflscape} % \usepackage{alltt}%hack % \geometry{a4paper,dvips,twoside,left=22.5mm,right=22.5mm,top=20mm,bottom=30mm} \usepackage{color} \usepackage{mathpartir} \usepackage{stmaryrd} % \usepackage{libertinust1math} \usepackage{keyval} \usepackage{ifthen} % \usepackage{enumitem} \usepackage{amsthm} \usepackage{hyperref} % \newcommand*{\lemmaautorefname}{Lemma} \newcommand{\abe}{\ensuremath{\alpha\beta\eta}} \usepackage[implicitPremiseBreaks]{ottalt} \inputott{GTFL_defns} \newcommand{\rrule}[1]{\rref*{#1}} \usetheme[progressbar=frametitle]{metropolis} \definecolor{ubcBlue}{RGB}{12,35,68} \definecolor{ubcBlue1}{RGB}{0,85,183} \definecolor{ubcBlue2}{RGB}{0,167,225} \definecolor{ubcBlue3}{RGB}{64,180,229} \definecolor{ubcBlue4}{RGB}{110,196,232} \definecolor{ubcBlue5}{RGB}{151,212,223} % \setbeamercolor{normal text}{bg=ubcBlue1} \setbeamercolor{alerted text}{bg=ubcBlue1, fg = ubcBlue} \setbeamercolor{example text}{fg=ubcBlue1, bg=ubcBlue1} \setbeamercolor{title separator}{fg = ubcBlue, bg=ubcBlue} \setbeamercolor{progress bar}{bg=ubcBlue4, fg=ubcBlue1} \setbeamercolor{progress bar in head/foot}{bg=ubcBlue4, fg=ubcBlue1} \setbeamercolor{progress bar in section page}{bg=ubcBlue4, fg=ubcBlue1} \setbeamercolor{frametitle}{bg=ubcBlue} \usepackage{appendixnumberbeamer} \usepackage{booktabs} \usepackage[scale=2]{ccicons} \usepackage{pgfplots} \usepgfplotslibrary{dateplot} \usepackage{xspace} \newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace} \makeatletter \newsavebox{\mybox} \setbeamertemplate{frametitle}{% \nointerlineskip% \savebox{\mybox}{% \begin{beamercolorbox}[% wd=\paperwidth,% sep=0pt,% leftskip=\metropolis@frametitle@padding,% rightskip=\metropolis@frametitle@padding,% ]{frametitle}% \metropolis@frametitlestrut@start\insertframetitle\metropolis@frametitlestrut@end% \end{beamercolorbox}% } \begin{beamercolorbox}[% wd=\paperwidth,% sep=0pt,% leftskip=\metropolis@frametitle@padding,% rightskip=\metropolis@frametitle@padding,% ]{frametitle}% \metropolis@frametitlestrut@start\insertframetitle\metropolis@frametitlestrut@end% \hfill% \raisebox{-\metropolis@frametitle@padding}{\includegraphics[height=\dimexpr\ht\mybox+\metropolis@frametitle@padding\relax]{2_2016_UBCNarrow_Signature_ReverseCMYK}}% \hspace*{-\metropolis@frametitle@padding} \end{beamercolorbox}% } \makeatother \title{A CPS Transformation for Gradual Programs with Evidence} \subtitle{CPSC 539B Final Project} % \date{\today} \date{} \author{Joey Eremondi} % \institute{Center for modern beamer themes} % \titlegraphic{\hfill\includegraphics[height=1.5cm]{logo.pdf}} \setbeamertemplate{itemize items}[circle] \begin{document} \maketitle % \begin{frame}{Table of contents} % \setbeamertemplate{section in toc}[sections numbered] % \tableofcontents[hideallsubsections] % \end{frame} \section{Source Language} \begin{frame}{Term Syntax} \nonterms{e,} \end{frame} \begin{frame}{Type Syntax} \nonterms{T} \nonterms{ep} \end{frame} \begin{frame}{Type Rules} \begin{mathpar} 
\ottdruleHastypeAscr{} \\ \ottdruleConsistentEv{} \end{mathpar} \end{frame} \begin{frame}{Combining Evidence} \ottdefnMeet{} \end{frame} \begin{frame}{Semantics} \begin{mathpar} \ottdruleRedAscr{} \qquad \ottdruleRedAscrFail{}\\ \ottdruleRedApp{}\\ \ottdruleRedAppEv{}\\ \ottdruleRedAppEvFail{} \end{mathpar} \end{frame} \begin{frame}{Examples} \begin{itemize} \item $[[ <<Nat>>(<<Bool>>true) + 0 ]]$ typechecks! \item $[[<<Bool>> |- Bool ~=~ ?]]$ and $[[<<Nat>> |- ? ~=~ Bool]]$ \item But: fails at runtime! \item $[[ Nat ]] \sqcap [[ Bool ]]$ undefined \end{itemize} \end{frame} \section{The Target} \begin{frame}{Simplified $\lambda^K$} \begin{minipage}{0.45\textwidth} \nonterms{u} \nonterms{d} \end{minipage} \begin{minipage}{0.45\textwidth} \nonterms{t} \nonterms{arg} \end{minipage} \end{frame} \section{The Translation} \begin{frame}{Translating Evidence} Integer constants $[[BOOL]],[[NAT]],[[ARR]],[[PROD]],[[DYN]]$ \ottdefnEvTransform{} \end{frame} \begin{frame}{Evidence Operations} \begin{itemize} \item $[[MEET[u1, u2, k] ]]$ \item Combines evidence representation $[[u1]]$ and $[[u2]]$, gives result to $[[k]]$ \item Passes control $[[error]]$ continuation if meet undefined \item Similar for $[[DOM]]$, $[[COD]]$ to decompose function types \end{itemize} \end{frame} \begin{frame}{Translating Values} \ottdefnValTransform{} \end{frame} \begin{frame}{Translating Terms} \begin{itemize} \item $[[ [|e|]k ==== t ]]$ \item Translates $[[e]]$ into CPS term $[[t]]$ \item Gives result to $[[k]]$ \end{itemize} \end{frame} \begin{frame}{E.g. Translating Applications} \begin{mathpar} \ottdruleTransformApp{} \end{mathpar} \end{frame} \section{Correctness} \begin{frame}{Key Lemmas} \begin{itemize} \item $[[ MEET[ [|ep1|], [|ep2|], k ] -->* k[ [|ep1 /\ ep2|] ] ]]$ \item If $[[ep1 /\ ep2]]$ undefined, then $[[ MEET[ [|ep1|], [|ep2|], k ] -->* error ]]$ \item $[[ [|v|]k -->* k[ [|v|] ] ]]$ \item $[[ [|[x |=> v]e|]k -->* [x |=> [|v|] ][|e|]k ]]$ \end{itemize} \end{frame} \begin{frame}{Whole Program Correctness} \begin{itemize} \item If $[[e1 --> e2]]$, then for any $[[k]]$, $[[ [|e1|]k ]] \equiv [[ [|e2|]k ]]$ \item Defined in terms of equivalence $\equiv$, symmetric closure of $[[-->*]]$ \item Simulation proved by induction on derivation of $[[e1 --> e2]]$ \item Corollary: if $eval([[e1]])= [[v]]$, and $[[ [|v|] ==== ([|ep|], u) ]]$ then $eval([[ [|e|](\ x . halt[x]) ]]) = [[halt[ ([|ep'|], u) ] ]]$ for some $[[ep']]$ \item Preserves observations modulo evidence \end{itemize} \end{frame} \section{Incorrectness} \begin{frame}{Type Preservation } \begin{itemize} \item No notion of consistency in target language \item $[[(\ x : ? . x x)]]$ typeable in source \item Translation has no type in target \item Possibly solved by combination of sum and recursive types \end{itemize} \end{frame} \begin{frame}{Full Abstraction } \begin{itemize} \item $[[(\ x : Nat . x + 0 )]] \approx [[(\ x : Nat . (<<Nat>>x) + 0 )]]$ \item Distinguish by target context $([[ __ (n,0) ]])$ where $[[n]]\neq[[DYN]], [[n]] \neq [[NAT]]$ \item Only causes $[[error]]$ in second case \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.6473878052, "avg_line_length": 25.9045936396, "ext": "tex", "hexsha": "20f689db1c21674c8052debdeb40a1951a67ef7b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fa174acc65f2b957b9cebe5d19499a7abc36d955", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "JoeyEremondi/gtfp-cps", "max_forks_repo_path": "beamer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fa174acc65f2b957b9cebe5d19499a7abc36d955", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "JoeyEremondi/gtfp-cps", "max_issues_repo_path": "beamer.tex", "max_line_length": 166, "max_stars_count": null, "max_stars_repo_head_hexsha": "fa174acc65f2b957b9cebe5d19499a7abc36d955", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "JoeyEremondi/gtfp-cps", "max_stars_repo_path": "beamer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2553, "size": 7331 }
\section{Introduction}
\label{sec:GEVD_overview}
In this chapter, we search for a linear combination of channels that yields an output signal $y_t$ useful for sharp wave-ripple (SWR) detection. More precisely, we search for a vector $\w \in \reals^C$ in channel (or electrode) space to project the samples $\z_t \in \reals^C$ on, so that the signal
%
\begin{equation} \label{eq:linear}
y_t = \w^T \z_t
\end{equation}
%
has high variance (or power) during SWR events, and low variance outside them. This principle is illustrated with a two-dimensional toy dataset in \cref{fig:GEVD_principle}. (We can then detect SWR events using threshold crossings of the envelope of $y_t$, as discussed in \cref{ch:BPF}).
\begin{figure}
\includegraphics[width=0.6\textwidth]{GEVD_principle_scatter}
\includegraphics[width=0.6\textwidth]{GEVD_principle_strips}
\captionn{Signal-to-noise maximisation via the GEVD}{Toy example to illustrate the generalised eigenvalue decomposition (GEVD) principle for signal detection. \emph{Left}: multi-channel time-series data plotted in `phase space' (meaning without time axis), with blue dots representing samples where the signal was present, and orange dots representing samples where it was not. The data are actually toy samples drawn from two 2-dimensional Gaussian distributions with different covariance matrices. Red vector: first eigenvector of the signal covariance matrix (also known as the first principal component). Green vector: first generalised eigenvector of the signal and noise covariance matrices. \emph{Right}: Projection of both data sets on both the ordinary eigenvector (``PCA'') and the generalised eigenvector (``GEVD''). The ratio of the projected signal data variance versus the projected noise data variance is maximised for the GEVD case.}
\label{fig:GEVD_principle}
\end{figure}
We assume the input signal $\z_t$ to be zero mean. This is achieved with a straightforward preprocessing step in offline SWR detection, and can be approximated in online SWR detection by keeping track of a running mean of $\z_t$, such as an exponentially weighted moving average. This is a supervised (or data-driven) method: we need to have access to training data $\z\train_t$, and an associated labelling $x\train_t$ which marks when there is an SWR event present in $\z\train_t$, and when there is not. These can be used to find a good weight vector $\w$, which can then be applied to detect SWR events in unlabelled data $\z\test_t$.
The algorithm as described above is also a purely spatial filtering method (at each timestep $t$, only current information from the different channels is used in calculating the output $y_t$, without incorporating temporal information from previous timesteps $t_p < t$). The algorithm can be adapted to also incorporate temporal information, by defining a vector $\z^\text{stack}_t \in \reals^{C \cdot P}$, which consists of stacked sample vectors (each consisting of $C$ channels) from $P$ different timesteps $t_p \leq t$. The linear weights $\w^\text{stack} \in \reals^{C \cdot P}$ are then calculated in the same way as for $\w$. The following sections describe how the vector $\w$ can be found.
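As a minimal numerical sketch of this procedure (assuming NumPy and SciPy are available; the array names \texttt{Z\_signal} and \texttt{Z\_noise} are hypothetical placeholders for the labelled training samples recorded during and outside SWR events, and this is a sketch rather than the exact implementation used later in this thesis):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def gevd_weights(Z_signal, Z_noise):
    """Spatial filter w maximising the projected signal-to-noise variance ratio.

    Z_signal : (C, T_s) array of zero-mean samples recorded during SWR events.
    Z_noise  : (C, T_n) array of zero-mean samples recorded outside SWR events.
    """
    S = np.cov(Z_signal)   # (C, C) signal covariance matrix
    N = np.cov(Z_noise)    # (C, C) noise covariance matrix
    # Solve the generalised eigenvalue problem  S w = lambda N w.
    # eigh() returns eigenvalues in ascending order, so the last column
    # of eigvecs maximises (w^T S w) / (w^T N w).
    eigvals, eigvecs = eigh(S, N)
    return eigvecs[:, -1]

# Applying the filter to new multi-channel data Z of shape (C, T):
#   w = gevd_weights(Z_signal, Z_noise)
#   y = w @ Z    # output signal y_t = w^T z_t for every sample
\end{verbatim}
The spatio-temporal variant described above would use the same procedure after replacing each sample $\z_t$ by the stacked vector $\z^\text{stack}_t$.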
{ "alphanum_fraction": 0.7818411097, "avg_line_length": 50.3492063492, "ext": "tex", "hexsha": "4a597750a0d07854d49298583cdca5c3b9193d26", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tfiers/master-thesis", "max_forks_repo_path": "modules/Scraps/GEVD/Intro.tex", "max_issues_count": 46, "max_issues_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_issues_repo_issues_event_max_datetime": "2018-12-10T22:37:35.000Z", "max_issues_repo_issues_event_min_datetime": "2018-09-18T16:38:12.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tfiers/master-thesis", "max_issues_repo_path": "modules/Scraps/GEVD/Intro.tex", "max_line_length": 77, "max_stars_count": 1, "max_stars_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tfiers/master-thesis", "max_stars_repo_path": "modules/Scraps/GEVD/Intro.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-23T01:39:24.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-23T01:39:24.000Z", "num_tokens": 809, "size": 3172 }
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
  \usepackage[T1]{fontenc}
  \usepackage[utf8]{inputenc}
\else % if luatex or xelatex
  \ifxetex
    \usepackage{mathspec}
  \else
    \usepackage{fontspec}
  \fi
  \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
            pdftitle={BASS4 - ADMINISTRATOR'S MANUAL},
            pdfauthor={Louise Serenhov, Jenny-Li Örsell \& Brjánn Ljótsson},
            pdfborder={0 0 0},
            breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{apalike}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
  \posttitle{
    \begin{center}\large#1\end{center}
    }
}
\setlength{\droptitle}{-2em}
\title{BASS4 - ADMINISTRATOR'S MANUAL}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Louise Serenhov, Jenny-Li Örsell \& Brjánn Ljótsson}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{Last updated 2019-08-29}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
  \thm@preskip=8pt plus 2pt minus 4pt
  \thm@postskip=\thm@preskip
}
\makeatother
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{introduction}{%
\chapter{Introduction}\label{introduction}}
BASS is a flexible tool for creating online psychological treatment programs. In this manual you will learn how to manage participants, combine self-help material into treatments, keep track of events during an ongoing study/program, manage security and privacy settings, collect and export data, and communicate with participants through the administration interface of BASS.
\hypertarget{dictionary}{%
\chapter{Dictionary}\label{dictionary}}
These are recurrent concepts in the manual:
\textbf{Instrument} An instrument is an electronic version of a paper form used during psychological assessment. Some examples of digitalized instruments are VAS (visual analogue scale), MADRS (Montgomery Åsberg Depression Rating Scale), SWLS (Satisfaction With Life Scale) and LSAS (Liebowitz Social Anxiety Scale).
\textbf{Assessment} An assessment is a set of instruments, given in a specific order and at a specific occasion or for a specific number of occasions. Pre- and post-treatment assessments often consist of the same instruments, with the post-treatment assessment adding one instrument measuring treatment satisfaction.
\textbf{Type} A type represents the time-aspect of an assessment. Each assessment is linked to a type, typically SCREEN, PRE, POST or FOLLOW-UP or a customized type.
\textbf{Project} A project is the administrative concept that connects a set of assessments to a set of participants.
\textbf{Participants} A participant needs to be assigned to a project to be able to fill in instruments and follow an assessment.
\textbf{Group} A project can be divided into groups, and participants of the same group in a project can be managed collectively.
\hypertarget{login}{%
\chapter{Login}\label{login}}
As soon as your database setup is ready, you can log in to the administrator's interface. The interface is found at a URL of the format ``\url{https://webcbt.se/NameOfYourDatabase}''. Enter your credentials in the login box and press the Login button.
\includegraphics{images/login.png}
\hypertarget{the-main-menu}{%
\chapter{The main menu}\label{the-main-menu}}
All functionality in the BASS administration interface can be accessed from the main menu to the left of your screen. Which options are visible in the main menu depends on your authorization level. A usual setup is that one administrator manages the available instruments and assessments, while several therapists manage their own participants and individual treatments.
\includegraphics{images/main-menu.png}
\hypertarget{search-participants}{%
\chapter{Search participants}\label{search-participants}}
The ``Participant search'' is located at the top of the main menu. This is where you can search for and list participants by specific variables such as groups or projects.
\includegraphics{images/search-participants-menu.png}
When you press ``Participant search'' you will see a view with four tabs:
\includegraphics{images/search-participants-tab.png}
\begin{itemize}
\tightlist
\item Selection -- Add a filter to your search
\item Search -- Perform your search using text strings or other identifiers
\item Hidden columns -- View and show columns that are hidden
\item Save/load settings -- Save your recurrent searches for convenience
\end{itemize}
\hypertarget{selectionfilter}{%
\section{Selection/filter}\label{selectionfilter}}
If you press Selection, you can add a filter to your participant search.
\includegraphics{images/selection-filter.png}
Here you can choose if you want to search all participants, your own participants (that you treat) or participants whose treatments you supervise. You can also choose which project(s) or group(s) to search. The top two checkboxes can be used to quickly either mark or unmark all of the projects or groups listed below. If no specific project or group is checked, all of them will be included in the search.
This also means that unchecking all projects/groups won't return participants without a project/group.
\textbf{Hint:} If you want to search a specific group you should only mark that group, and not the corresponding project, as this will return all the participants belonging to the project and not only those in the group.
Your chosen selection of participants is shown in the participant list below the tab.
\hypertarget{search}{%
\section{Search}\label{search}}
The actual search is done in the \textbf{Search} tab. If you previously added search filters in the Selection tab they will now be active and delimit your search.
\includegraphics{images/search.png}
Here you can use many different variables to search for one or several participants. The search is executed either automatically when you leave a filled-in search box or when you hit the Enter key on your keyboard. Your search results are shown in the participant list below the tab. Note that there is a difference between searching by numbers and searching by text strings:
\begin{itemize}
\item Searching for the number ``12'' will only show the exact hit, while adding a \% sign to the search as in ``12\%'' will return both ``12'', ``123'' and ``012''.
\item Searching for the text string ``my'' will return both ``My'', ``Myra'' and ``Amy''. You don't need to add any \% sign for text string searches.
\end{itemize}
To search for several participants at the same time, you add a space between each corresponding search term in the search box.
\textbf{Hint:} This is useful if you want to search for participants whose IDs are listed on different rows in an Excel file. Just copy the rows containing the IDs in Excel and paste them directly into the search box in BASS; they automatically receive a space between them.
\hypertarget{hide-show-and-sort-columns}{%
\section{Hide, show and sort columns}\label{hide-show-and-sort-columns}}
If you want to hide a column, you hover the mouse over the column header until a red X shows up. By pressing the X, the column will be hidden.
\includegraphics{images/hide-show-sort.png}
To show/unhide a column, press the ``Hidden columns'' tab. This tab shows all hidden columns as buttons. Press the button with the column you want back and it will show up in the search results again.
\includegraphics{images/hidden-columns.png}
Most columns can be sorted alphabetically or by number. To sort a column, press the small up/down arrows that show when you hover over the column header.
\hypertarget{column-explanations}{%
\section{Column explanations}\label{column-explanations}}
There are a number of columns showing information, status or possible actions for a participant. Some are explained in the table below.
\begin{longtable}[]{@{}lp{0.72\columnwidth}@{}}
\toprule
Column & Description\tabularnewline
\midrule
\endhead
Pen symbol & Edit the participant.\tabularnewline
Participant Id & A unique identifier for a participant within the study/project (e.g., ``ANX-001'').\tabularnewline
Internal Id & A unique and technical identifier for a participant within the database.\tabularnewline
Flag symbol & Shows if the participant is flagged for something.\tabularnewline
Message symbol & Shows if there are unread messages from the participant.\tabularnewline
Chat symbol & (For supervised therapists) Shows if there are unread messages from the supervisor.\tabularnewline
Approval symbol & (For supervisors) Shows if there are messages sent from a supervised therapist to a participant that might need approval.\tabularnewline
Superv Mess & (For supervisors) Total number of messages in supervisory correspondence. This is a useful way to see how much guidance was needed from the supervisor.\tabularnewline
Last message & Last date when a participant sent a message (was active). Sort on this column and you'll find participants that are lagging behind.\tabularnewline
Weeks & The number of weeks left of the treatment. Treatments without an end date are marked with the eternity symbol.\tabularnewline
Module & The latest module the participant got access to. This column also shows for how many days the participant has had access to the module.\tabularnewline
Homework & Shows if there is unread homework sent in by the participant.\tabularnewline
Group & Shows which group a participant belongs to. You can change the group here, but you need to save the update with the Save button below the list.\tabularnewline
Heart symbol & By pressing the heart, you add the participant to your participants.\tabularnewline
Trash symbol & By pressing the trash symbol, you delete the participant. Be careful, as the participant and all its corresponding data will then be lost.\tabularnewline
\bottomrule
\end{longtable}
\hypertarget{saveload-search-settings}{%
\section{Save/load search settings}\label{saveload-search-settings}}
To save your current search settings, including both filters and search parameters, press the Save/load settings tab. First ensure that the current search results for the settings you want to save are shown in the table below. Then write a name for your settings in the Currently loaded settings box and press ``Save as new''.
\textbf{Hint:} Be careful not to use the ``Save'' button instead, because this will overwrite any currently loaded settings, including their name.
\includegraphics{images/save-load1.png}
The text ``saved!'' appears to the right of the buttons and your search is now saved and available in the dropdown below.
\includegraphics{images/save-load2.png}
The dropdown ``Saved settings'' is where you access all your previously saved search settings.
\textbf{Hint:} If you make a new search, the ``Currently loaded settings'' box may no longer reflect the content of the search result list below. To be sure that the list matches the settings you want to load, first select ``Choose'' in the dropdown menu and then reselect the settings you want.
\hypertarget{add-new-participant-to-group-and-change-group}{%
\subsection{ADD NEW PARTICIPANT TO GROUP AND CHANGE GROUP}\label{add-new-participant-to-group-and-change-group}}
It is possible to directly create a new participant within a specific project. This function is found below the table of participants. Just choose which project you want to add the new participant to, and you will be redirected to the ``New participant'' view with this project pre-filled.
\includegraphics{images/add-new-participants.png}
It is also easy to change which group a participant belongs to. For each participant in the table, you can choose a group in the dropdown in the Group column. Don't forget to save all changes by pressing the ``Save'' button below the table afterwards.
\hypertarget{assessments}{%
\chapter{Assessments}\label{assessments}}
Assessments are accessed from the ``Assessments'' option in the main menu. Note that you first have to choose a project in the dropdown in the main menu to make the Assessments option for that project visible. When you press ``Assessments'' you will see a view showing the existing assessments of the chosen project. All assessments that are listed in this view can be manually sorted with the upward-pointing arrow symbols to the right of each assessment name.
\includegraphics{images/assessement.png}
You can show or hide the expanded overview by pressing the Show assessments overview button. Here you can get a quick review of all the included assessments and their corresponding attributes.
\textbf{Hint:} Among other things, the assessment overview shows the order of each instrument in all assessments. This is a good place to ensure that the instrument order is kept from one assessment to another throughout the project. It also enables you to easily see if you have missed including an instrument somewhere that should appear in several similar assessments.
\hypertarget{create-or-edit-assessments}{%
\section{Create or edit assessments}\label{create-or-edit-assessments}}
Add a new assessment to your project by pressing ``Create new assessment'' at the bottom of the Assessment view. To instead edit an existing assessment, press the pencil symbol to the right of the name of the assessment you want to edit. This opens up the assessment panel where you can set a number of variables that define the assessment:
\hypertarget{name}{%
\subsection{Name}\label{name}}
Here you can fill in a name for your assessment, for example \textbf{\emph{Screening}}.
\hypertarget{labelcustom-label}{%
\subsection{Label/Custom label}\label{labelcustom-label}}
You can either select one of the predefined labels in the drop-down, or write your own label in the Custom label textbox. Adding a custom label will override any predefined label that is selected from the drop-down. Note that the assessment label will be visible in reports when you export your data.
\textbf{Hint:} By selecting Weekly-assessment or Point-assessment, some settings for Repetition (below) are preset.
\hypertarget{managed}{%
\subsection{Managed}\label{managed}}
This option sets whether data-gathering is managed individually or in groups.
\textbf{Hint:} If you have different cohorts, you may want to choose In group. Screening assessments are usually managed In group and these can be activated or deactivated for a certain group and date under Participants -\textgreater{} Groups -\textgreater{} screening group name -\textgreater{} Show -\textgreater{} Assessments.
\textbf{Hint:} If your participants start their treatments at different times, you usually choose Individually. The Individually option is also more flexible for long-term studies spanning over months, when participants go on vacation and need some individual adjustment to the timing of assessments.
\hypertarget{repitition}{%
\subsection{Repetition}\label{repitition}}
The Repetition option sets if the assessment is to be done once or repeatedly, and if so at what intervals and how many times. Assessments with the predefined label ``Weekly'' have repetition set to Weekly and the interval to 7 days. Assessments with the label ``Point-assessment'' have repetition set to Manual. This means that the next assessment can be set manually to occur at an arbitrary date, independent of the time of the previous assessment. This is useful for assessments that are triggered by irregular events, for example a major flare of symptoms.
\hypertarget{time-limit}{%
\subsection{Time limit}\label{time-limit}}
Here you can set if participants have to fill out the assessment within a certain time limit.
\textbf{Important note:} Setting a time limit for an assessment is extremely important to prevent the results from being mixed up with those from similar, subsequent assessments. For example, if an ongoing POST assessment is still accessible when the FOLLOW UP assessment is activated, the results of one assessment are duplicated to the other. This results in data reports where no change seems to have occurred between the assessments.
An assessment with a time limit of 7 days that starts on a Monday will be available for the rest of that week but not for the next.
\textbf{Hint:} Keeping the time limit short, or shorter than the repetition interval, ensures that participants fill in data that correctly corresponds to the set time-frame, but means that they will sometimes miss the window in which they can report. This is useful in assessments where accurate and time-dependent data is more important than full attendance.
\hypertarget{dependence}{%
\subsection{Dependence}\label{dependence}}
The Dependence option sets when the assessment is to be activated, in relation to the date of a previous assessment. The relationship is kept even if you change the date of the previous assessment. Date offset from is where you select the previous assessment from which the date/delay is to be calculated.
\textbf{Note:} Setting Date offset from a recurring assessment (e.g.~WEEKLY) will count the delay from the date of the last assessment and not the first. If this is not what you want, consider creating a dummy assessment without instruments to hold the start/dependence date.
Checking Dynamic means that the delay is calculated from the time when the previous assessment was filled out instead of the time when it was scheduled. Note that this setting can only be used for individually managed assessments.
Delay is the number of days to wait before activation.
\textbf{Hint:} If you can't see the calculated date of your assessment in the view under Participants -\textgreater{} Groups -\textgreater{} group name -\textgreater{} Show -\textgreater{} Assessments, try to set the date of the previous interrelated assessment again and press the Save button.
\hypertarget{clinician-rated}{%
\subsection{Clinician rated}\label{clinician-rated}}
This option hides all instruments in the assessment for participants and instead enables clinicians to fill in the associated `clinician rated' instruments via the administration interface. This setting allows a clinician to fill in the instrument(s) for a specific patient via Main menu -\textgreater{} Participants -\textgreater{} Groups -\textgreater{} specific group -\textgreater{} specific participant -\textgreater{} Assessments -\textgreater{} specific assessment -\textgreater{} specific instrument -\textgreater{} pen on document symbol.
\textbf{Note:} Clinician-rated instruments should not be added to self-assessments. Clinician-rated instruments are hidden for participants, which makes it impossible for the participant to complete an assessment containing such an instrument.
\textbf{Hint:} Clinician-rated assessments won't send automatic reminders. An option is to use flags instead to mark undone tasks.
\hypertarget{randomize-instrument-order}{%
\subsection{Randomize instrument order}\label{randomize-instrument-order}}
With this option you set the order of the included instruments to be randomized. If not set, the order in which the instruments appear in the assessment will be the same as the order they are presented in the box Assessment Instruments shown to the right.
\hypertarget{welcomethank-you-text}{%
\subsection{Welcome/Thank you text}\label{welcomethank-you-text}}
Here you write messages formatted in either Markdown or HTML that you want to show to participants before (welcome) and after (thank you) they fill in the assessment.
\hypertarget{concurrent-and-merged-assessments}{%
\subsection{Concurrent and merged assessments}\label{concurrent-and-merged-assessments}}
With this option you can set the order in which two or more coinciding assessments appear to participants. This also affects the order of the Welcome/Thank you-messages. The assessment with the lowest number has the highest priority and is shown first. The other assessments and their Welcome/Thank you-messages will follow according to their respective priority order.
The \textbf{\emph{Merge assessment\ldots{}}} box sets whether an assessment is to be integrated as a part of a coinciding, higher-prioritized assessment (after its Welcome text and before its Thank you text). Setting this option means that the current Welcome/Thank You-messages are not shown at all on coincidence, but only when the assessment occurs alone or simultaneously with lower-prioritized assessments.
The \textbf{\emph{If merged}} -- Show\ldots{} box sets whether Welcome/Thank you-messages are to be shown even on coincidence as per the Merge assessment setting above. Note that it can be tricky to write messages that work both standalone and together with/as part of other assessment messages.
\hypertarget{automatic-reminders}{%
\subsection{Automatic reminders}\label{automatic-reminders}}
This option sets notes or reminders to be sent automatically to participants on certain events. The basic functionality is that a note is sent the same day as an assessment becomes available. With the check boxes you can choose which media to use: mobile text messages (SMS) and/or email.
\emph{Create new quick login} needs to be checked if quick logins are to be sent with the reminders.
\textbf{Hint:} Remember that you also need to activate quick login under Security Settings in the Main menu to enable this function.
You can also add reminders to participants who are late with filling in their assessments.
\textbf{Note:} It is not possible to only send reminders to participants that are late with filling in their assessments; you always need to activate availability notes too (by checking either of the sms/email boxes) for this extra functionality to be enabled.
\emph{Remind interval} is the delay upon which reminders are sent to late participants, counted as days after the assessment became available. Max number of reminders sets how many reminders can be sent out to the participant, at the previously mentioned interval. This setting needs to be at least 1 for any reminder to be sent.
\textbf{Hint:} If you want additional reminders to be sent, increase the number in this box instead of rescheduling the assessment (see below).
\textbf{Note:} Postponing an assessment that has automatic reminders to the future will neither cause new availability notes nor additional reminders to be sent out (because BASS counts the number of sent reminders independently of assessment date). Rescheduling an assessment that has automatic reminders to the past will disable the availability note (because the first day of availability has passed) and eventually disable reminders if the remind interval for them has passed.
\emph{Use standard text for e-mail/SMS} -- Sets the content of reminders/notes to be sent to a predefined standard text. The current standard texts are shown below the checkbox.
\textbf{Hint:} The standard texts for reminders and notifications can be edited via Main menu -\textgreater{} External messages.
It is also possible to set a \emph{Custom notifications/reminders} text in the corresponding textbox. This text is shared between emails and SMS. The \emph{Subject for activation} can however differ between emails and SMS and is set in the two bottom textboxes. Custom notifications/reminders can be up to around 150 characters long.
\hypertarget{participant-flagging}{%
\subsection{Participant flagging}\label{participant-flagging}}
This option sets flags to be shown for therapists on certain participant-specific events.
\emph{Flag participant when assessment becomes activated} raises a flag for participants at the activation of the assessment.
\emph{Flag late participants} raises a flag for participants who haven't filled in an assessment within a certain number of days after it became available. You can set the number of days to wait before flagging in the box labeled \emph{Days until participant is flagged late.}
\hypertarget{copy-assessment}{%
\section{Copy assessment}\label{copy-assessment}}
You can create a copy of an assessment and save it to the same project before editing it. This functionality makes it quick and easy to create several similar assessments that occur at different time points within your project. It is also possible to mark several or even all assessments of a project and copy them to another project, thus creating two similar projects.
To copy assessments in the Assessment view, check the boxes of the assessments you want to duplicate. This makes the dropdown menu Copy selected assessments to\ldots{} appear below the assessment list. From the dropdown you can select the project where you want to paste the assessment. If it is the current project, the copy-pasted assessments will appear at the bottom of the list with the prefix COPY.
\includegraphics{images/copy-assessment.png}
\hypertarget{dummy-assessments---some-scheduling-tricks}{%
\section{Dummy assessments - some scheduling tricks}\label{dummy-assessments---some-scheduling-tricks}}
Empty assessments that don't contain any instruments can be used as timers to schedule administrative activities.
\textbf{Example 1, Scheduling automatic events:} A single, empty assessment can be created to hold the time point from which other assessments are scheduled to automatically become available. This circumvents the issue that arises when a ``real'' but recurring assessment can't be used as the starting date for a timetable.
\textbf{Example 2, Scheduling manual actions:} An empty assessment can also be used together with flagging to prompt a certain action from the therapist. This is useful when something needs to be manually sent or done by the therapist a certain number of days into treatment while participants have individual treatment start dates. The trick is achieved by creating an empty assessment (e.g.~FLAG FOR SENDING DEVICE) that is dependent on a previous assessment (e.g.~FIRST ASSESSMENT) and activated with a chosen delay (e.g.~63 days after). By checking the box Flag participant when assessment becomes activated, the therapist will see individually occurring flags on participants whenever it is time for them to receive attention or service from the therapist. Since the assessment doesn't contain any instruments, only the therapist will get a notice (flag).
\textbf{Example 3, Scheduling text messages (SMS):} An empty assessment that is managed In group and linked to a text message can be used to schedule an independent reminder to all participants (e.g.~``Happy New Year! If you find it hard to keep to your new lifestyle during events like this, log in and re-read the advice in module 3''). An empty assessment can also be created to remind a single individual who hasn't logged in for a while to do so.
\hypertarget{assessment-instruments}{%
\section{Assessment instruments}\label{assessment-instruments}}
All available instruments are listed in the right panel of the Assessment view. To include an instrument, check the box to the left of its name and then press the \emph{Save} button. The number shown in brackets to the right of each instrument name shows how many questions the instrument contains. The \emph{Total number of items} top row shows the current total number of questions in the assessment.
The order in which the instruments appear in the assessment will be the same as the order they are presented in the box Assessment Instruments shown to the right. You can change the order of an instrument listed in the box by pressing the upward arrow to the left of the instrument name.
\includegraphics{images/assessment-instrument.png}
\textbf{Hint:} If you can't see any/all arrows, first select the instruments you want to include and press Save. Then change the order of the instruments and press Save again.
\hypertarget{participants}{%
\chapter{Participants}\label{participants}}
\hypertarget{create-new-instrument}{%
\chapter{Create new instrument}\label{create-new-instrument}}
\hypertarget{create-new-treatment}{%
\chapter{Create new treatment}\label{create-new-treatment}}
\hypertarget{references}{%
\chapter{References}\label{references}}
\bibliography{bibliography.bib,packages.bib}
\end{document}
{ "alphanum_fraction": 0.7943987926, "avg_line_length": 58.4607843137, "ext": "tex", "hexsha": "a2d5ab7a68e039e1b85329c6d32cf1c464d922f4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6d4efacb484beda15fc2d5254b5aa4fbb6a1bab9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "LouiseSerenhov/BASS-bookdown-test", "max_forks_repo_path": "docs/bass4-manual.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6d4efacb484beda15fc2d5254b5aa4fbb6a1bab9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "LouiseSerenhov/BASS-bookdown-test", "max_issues_repo_path": "docs/bass4-manual.tex", "max_line_length": 861, "max_stars_count": null, "max_stars_repo_head_hexsha": "6d4efacb484beda15fc2d5254b5aa4fbb6a1bab9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "LouiseSerenhov/BASS-bookdown-test", "max_stars_repo_path": "docs/bass4-manual.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6619, "size": 29815 }
\documentclass[iop]{emulateapj}
\usepackage[utf8]{inputenc}
\newcommand{\sick}{\texttt{sick}}
\newcommand{\article}{\textit{Article}}
\usepackage{amsmath}
\usepackage{bm}
\usepackage{natbib}
\usepackage{graphicx}
\begin{document}
\title{\sick, the spectroscopic inference crank}
\author{Andrew R. Casey\altaffilmark{1}}
\altaffiltext{1}{Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom; \email{[email protected]}}
\begin{abstract}
In this \article{} I introduce \sick{}, the spectroscopic inference crank, an open source probabilistic code for inferring astrophysical parameters from spectra. \sick{} enables any user to easily construct an approximate \textit{generative model} for spectral data, allowing for precise inference of a host of astrophysical quantities. Model fluxes (or intensities) are approximated by efficient multi-dimensional interpolation of pre-computed spectra. This allows any user to capitalise on the plethora of published synthetic and observed spectral libraries with minimal effort. Additional phenomena that transform the data (e.g., redshift, continuum, smoothing) are incorporated as free parameters. Outlier pixels (e.g., cosmic rays or poorly modelled regimes) can be treated with a mixture model, and a simplistic noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model allows for precise inference with credible uncertainties. Commonly employed features are introduced, and the implementation details are described. I demonstrate the utility of a generative model approach with accurate and precise stellar photospheric measurements from noisy (e.g., S/N $\sim{} 7$) high- and low-resolution spectra. \sick{} is very easy to use, well-tested, parallelised, and freely available online through GitHub under the MIT license.
\end{abstract}
\section{Introduction}
Most of our understanding of astrophysics has been interpreted from spectra. Given how informative spectroscopic data is to our understanding of astrophysics, it is not surprising that the last decade has seen a substantial increase in publicly accessible spectral data. Large-scale surveys have driven this trend, each releasing in excess of hundreds of thousands of spectra \citep[e.g.,][]{wigglez,boss,segue,rave,gaia-eso}. Millions more spectra are expected in the coming years \citep[e.g.,][]{lamost,galah}. The spectra are obtained from different astrophysical sources to meet specific scientific objectives, and the data vary in wavelength coverage, resolution, and noise distributions. For these reasons many collaborations expend significant resources to produce bespoke analysis software. Unfortunately this impedes scientific reproducibility, as many codes still remain closed-source more than a decade after the original article was published. Any comprehensive literature comparison subsequently becomes impossible, as systematics can be difficult to accurately characterise without in-depth knowledge of the methods or access to the software.
Broadly speaking there are three types of methods employed for spectral analysis: measuring the strengths of spectral features, pure data-generating models or template-matching methods. Approaches that measure spectral features are inexpensive, but regularly encounter problems with blended (often hidden) lines or continuum placement. As such, some subjective interaction, tuning, or ad-hoc `calibration' is almost always required.
Data-generating methods compute model spectra at run-time, and whilst accurate, can be prohibitively expensive for large samples. For template-matching methods, synthetic spectra are only generated once to produce a grid of fluxes or intensities for a subset of permutations of astrophysical quantities. Although there are differences between these methods, the preparatory steps are usually the same. Spectra are shifted to the rest frame by calculating line-of-sight velocities, typically by cross-correlation, before being continuum-normalised to flux intensities.
Credible uncertainties can be difficult to discern from these approaches. The implied assumptions of a $\chi^2$ distribution are often inapplicable, and uncertainties in the doppler-shift, smoothing, and normalisation steps are almost always ignored. These effects result in ill-characterised uncertainties in astrophysical parameters. Alternatively the uncertainties are frequently assumed to be approximately the same for all objects. This is an incorrect approach: there are few, if any, examples of homoscedastic datasets in astrophysics. The noise properties of each spectrum \textit{are} different, and the parameter uncertainties (random and systematic) will differ for every object. Consequently the uncertainties in astrophysical parameters reported by template-matching methods are generally found to be either incorrectly assumed, under-estimated, or at least ill-characterised.
In addition to affecting the uncertainties, the effects of redshift, continuum normalisation and smoothing \textit{will} systematically bias the reported maximum-likelihood parameters. Experienced spectroscopists will frequently differ in their decision of continuum placement, even in the most straightforward cases (e.g., metal-poor stars). For example, there are a number of controversial examples within the literature where the subjective (human) decision of continuum placement has significantly altered the scientific conclusions \citep[e.g., see][where this issue is discussed in great detail]{kerzendorf}. The implications of these phenomena \textit{must} be considered if we are to understand subtle astrophysical processes. Spectroscopy requires objectivity: one should endeavour to incorporate these phenomena as free parameters into a generative model and infer them simultaneously with the astrophysical parameters.
In this \article{} I present \sick{}: a well-tested, MIT-licensed probabilistic software package for inference from spectroscopic data. \sick{} employs an approximation to data-generating models. Instead of attempting to model all kinds of expensive astrophysical processes (e.g., stars, supernovae, and other interesting astrophysical sources) at run-time, it performs efficient multi-dimensional interpolation of model spectra from pre-computed grids and includes contributory phenomena (e.g., continuum, redshift) as free parameters within an objective scalar-justified model. This approach is suitable for a plethora of different astrophysical processes, allowing any user to easily specify a model using existing published spectral grids, and infer astrophysical properties from their data. Aspects of the probabilistic model are described in Section \ref{sec:model}. A number of examples are included in Sections \ref{sec:inference-test} and \ref{sec:examples}, where I present accurate and precise objective photospheric measurements using noisy spectra.
I conclude in Section \ref{sec:conclusions} with references to the online documentation and applicability of the software.
\section{The Generative Model}
\label{sec:model}
The primary advantage of \sick{} is the simultaneous inference of both astrophysical and other pertinent parameters that can alter the spectra. The generative model described here is agnostic as to \textit{what} the astrophysical parameters actually describe. Typical examples might be properties of supernovae (e.g., explosion energies and luminosities), galaxy characteristics from integrated light, or mean plasma properties of a stellar photosphere. There are a plethora of spectral libraries (observed and synthetic) published for these types of applications \citep[e.g.,][]{snid,pegase,phoenix,pollux}. All of these models are fully-\sick{} compatible. However given the research background of the author, I will introduce the probabilistic model by focussing on stellar photospheric inferences. The first step of the (approximate\footnote{The term approximate is used because model spectra are interpolated.}) generative model is to interpolate a model spectrum. \sick{} employs the Quickhull algorithm \citep{quickhull} to linearly interpolate in multiple dimensions. Irregularly-spaced grids are perfectly acceptable; Quickhull does not require the grid to be rectangular or regularly spaced. Large grids of model spectra are cached for computational efficiency, allowing for the total size of the model grid to far exceed the available random access memory.\footnote{In practice this is performed by efficient memory-mapping.} I introduce an interpolation function $S(\bm{\psi})$ that produces a model spectrum with $J$ points $\{\lambda_j,I_j\}_{0}^{J}$ at any evaluable point $\bm{\psi} \equiv (T_{\rm eff},\log{g},{\rm [M/H]},[\alpha/{\rm Fe}])$ within the grid boundaries.
\begin{equation}
S(\bm{\psi}) \rightarrow \{\lambda_j,I_j\}
\end{equation}
The model spectrum $S(\bm{\psi})$ should always be of higher spectral resolution than the data. Therefore it is necessary to smooth (or broaden) $\{\lambda_j,I_j\}$ with a Gaussian filter of standard deviation $\sigma_{s}$ such that it matches the spectral resolution of the data. For some applications it is tempting to perform this step \textit{a priori} from the instrument spectral resolution. However it is not necessarily true that the quoted instrumental resolution will perfectly match the data. In truth the data have a \textit{distribution} of spectral resolutions. Even if the spectral resolution is accurately known with low variance, fixing this value will result in under-estimated uncertainties $\sigma(\bm{\psi})$ and will likely bias the posterior distributions for $\bm{\psi}$. Employing the free parameter $\sigma_{s}$ allows for the identification of additional instrumental or atmospheric broadening. For stellar applications, the free parameter $\sigma_{s}$ also aids in the identification of unresolved binary companions. The doppler shift of the source is treated by transforming the model spectrum (in uniformly sampled $\log\lambda$ space). \sick{} solves for redshift $z$ (Equation \ref{eq:redshift}), and returns posterior distributions in the user's preferred units (e.g., km s$^{-1}$).
\begin{equation}
\label{eq:redshift}
\lambda_{s} = \lambda_{j}(1 + z)
\end{equation}
After the model spectrum is smoothed and doppler shifted, model intensities are required at the wavelengths $\lambda_i$ of the observed pixels.
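To make these steps concrete, the short sketch below (illustrative only; the helper function and array names are assumptions rather than part of \sick{}) shows how a smoothed model spectrum might be redshifted and linearly resampled onto the observed pixel wavelengths:
\begin{verbatim}
/* Illustrative sketch only (not the sick code):
   redshift a smoothed model spectrum {lam_j, I_j}
   and linearly resample it onto the observed
   wavelengths lam_i (arrays assumed ascending). */
void redshift_resample(const double *lam_j,
                       const double *I_j, int J,
                       double z,
                       const double *lam_i,
                       double *M_i, int N)
{
    int i, j = 0;
    for (i = 0; i < N; i++) {
        while (j < J - 2 &&
               lam_j[j + 1] * (1.0 + z) < lam_i[i])
            j++;
        double x0 = lam_j[j] * (1.0 + z);
        double x1 = lam_j[j + 1] * (1.0 + z);
        double t = (lam_i[i] - x0) / (x1 - x0);
        M_i[i] = (1.0 - t) * I_j[j]
               + t * I_j[j + 1];
    }
}
\end{verbatim}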
The complete function to yield model intensities $M_i$ at wavelengths $\lambda_i$ is therefore a function of the parameters of interest $\bm{\psi}$, redshift $z$, smoothing kernel $\sigma_{s}$, and the wavelengths of the observed pixels $\lambda_i$:
\begin{equation}
M_{i} \equiv S\left(\bm{\psi},z,\sigma_{s},\lambda_i\right)
\end{equation}
In principle there is no reason to believe that the doppler-shifted, smoothed model intensities $M_i$ should match the data at all. The counts and shape of observed fluxes are a function of source magnitude, exposure time, instrument sensitivities, atmospheric conditions, and a host of unaddressed effects. In contrast the model spectra are calculated either as intensities (e.g., $I_j$) or fluxes calibrated to an empirical system. Even for the true values of $\bm{\psi}$, a function is required to normalise\footnote{The normalisation process is frequently abused by stellar spectroscopists in the literature. Wherever possible, data should not be transformed. One should seek to fit a model to the data, not the other way around.} the model to the data. As such I transform the \textit{model} intensities by some function $C$ to fit the data. Although the function $C$ incorporates a number of effects (e.g., source blackbody temperature, dust, instrument sensitivities), they are phenomena that typically cannot be separated without additional information, and here I only care about their combined effect. I only wish to ensure that the overall shape of the data is accounted for, which is usually achievable with a low-order polynomial
\begin{equation}
C_i = c_{0}\lambda_{i}^{m-1} + c_{1}\lambda_{i}^{m-2} + \dots + c_{m}
\end{equation}
\noindent{}with $m$ free coefficients $c_{0\dots{}m}$, where $m$ is specified by the user. Other continuum functions are available in \sick{}, and the user is encouraged to experiment for a specific problem. This allows us to express the \textit{expected flux} $E_i$ at a given observed pixel with wavelength $\lambda_i$ as:
\begin{equation}
E_i = M_{i}\times{}C_i
\end{equation}
In many astrophysical cases the flux uncertainties $\sigma_{i}$ for a given pixel are not well-characterised. This is often due to unpropagated uncertainties during data reduction. Since the observed flux counts are expected to be Poisson-distributed, in many astrophysical scenarios the pixel flux uncertainty can be estimated as $\sigma_i \sim \sqrt{F_{i}}$. Knowing that this approximation may underestimate the noise, I include an additional parameter to account for the possibility that the variance in all pixels is underestimated by some fractional amount $f$ (Equation \ref{eq:noise-model}). While this crude noise model is probably unrepresentative of the data in a large number of cases, including \textit{any} noise model is preferable to none.
\begin{equation}
s_{i}^2 = \sigma_{i}^2 + f^{2}E_{i}^2
\label{eq:noise-model}
\end{equation}
We now have a \textit{generative model} for the data.
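As a concrete illustration of the relations above (a minimal sketch with assumed helper names, not the \sick{} implementation), the expected flux and the inflated pixel variance can be evaluated as:
\begin{verbatim}
#include <math.h>

/* Illustrative sketch only: expected flux
   E_i = M_i x C_i with a polynomial continuum,
   and the noise model
   s_i^2 = sigma_i^2 + f^2 E_i^2.  The continuum
   coefficient convention is an assumption.     */
double expected_flux(double M_i, const double *c,
                     int m, double lam_i)
{
    double C_i = 0.0;
    int k;
    for (k = 0; k < m; k++)
        C_i += c[k] * pow(lam_i,
                          (double)(m - 1 - k));
    return M_i * C_i;
}

double pixel_variance(double sigma_i, double f,
                      double E_i)
{
    return sigma_i * sigma_i + f * f * E_i * E_i;
}
\end{verbatim}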
The frequency (or probability) distribution ${p\left(F_i|\lambda_i,\sigma_i,\bm{\psi},z,\sigma_s,\{c\}_{0}^{m},f\right)}$ for the observed data $F_i$ is given by
\begin{equation}
p\left(F_i|\lambda_i,\sigma_{i},\bm{\psi},z,\sigma_{s},\{c\}_{0}^{m},f\right) = \frac{1}{\sqrt{2\pi{}s_{i}^2}}\exp{\left(-\frac{\left[F_i - E_i\right]^2}{2s_{i}^2}\right)}
\label{eq:p_model}
\end{equation}
\noindent{}and with the implied assumption that the data are independently drawn, the likelihood $\mathcal{L}$ is calculable by the product of individual probabilities:
\begin{equation}
\mathcal{L} = \prod_{i=1}^{N}\,p\left(F_i|\lambda_i,\sigma_{i},\bm{\psi},z,\sigma_{s},\{c\}_{0}^{m},f\right)
\end{equation}
In addition to the original astrophysical parameters $\bm{\psi}$ that I care about, I now have an additional $3 + m$ parameters to consider. The situation seemingly becomes more complex when separately observed channels are considered. Many spectrographs provide small portions of spectra (channels, also colloquially referred to as beams or apertures) separated by some large gap. The data for each channel are usually captured by different CCDs and are therefore reduced separately. The instrumental parameters $\sigma_{s}$, $\{c_k\}_{k=0}^{m}$, and $f$ are likely to be different for each channel. Although redshift $z$ is an astrophysical effect and should not differ between channels, this may not be true if each channel has been wavelength-calibrated differently. It is therefore prudent to introduce separate parameters $z$, $\sigma_{s}$, $\{c_k\}_{k=0}^{m}$, $f$ for \textit{each} of the $N_{chan}$ observed channels. This scales the total dimensionality of the model as $3\times{}N_{chan} + \sum_{k=0}^{N_{chan}}m_{k}$ in addition to $\bm{\psi}$. The inclusion of all of these parameters is not mandatory: each \sick{} model can be adjusted to include or ignore any combination of phenomena. Finally, I consider the handling of outliers in the data. These may be in the form of cosmic ray spikes, improper calibration of the data, telluric features, or simply poorly modelled spectral regions. Treatment of these artefacts is achieved using a Gaussian mixture model: a combination of two models. In a mixture model, the data are fit by the sum of two distributions with relative amplitudes $1 - P_o$ and $P_o$, respectively: one describing the expected fluxes $E_i$, and a normal distribution centered along the continuum function $C_i$ with variance $s_{i}^2 + V_{o}$. This requires the inclusion of two additional parameters: $P_o$ and $V_o$. The prior distribution function $p\left(V_o\right)$ requires $V_{o}$ to always be positive (Equation \ref{eq:default_priors}), and as such the outlier distribution will \textit{always} have a larger variance. Distributions of smaller variance are more informative, so conceptually a fit to the expected fluxes $E_{i}$ is generally preferred wherever possible. For brevity I define $\bm{\kappa} \equiv (\bm{\psi},\{z,\sigma_s,\{c_k\}_{k=0}^{m},f\}_{0}^{N_{c}})$, and the likelihood for the mixture model is given by
\begin{multline}
\mathcal{L} = \prod_{i=1}^{N}\,\left[\left(1 - P_{o}\right)\times{}p_{model}\left(F_i|\lambda_i,\sigma_{i},\bm{\kappa}\right)\right. \\ + \left.
P_{o}\times{}p_{outlier}\left(F_i|\lambda_i,\sigma_i,\bm{\kappa},V_{o},P_o\right)\right] \end{multline} \noindent{}where $p_{model}$ refers to $p$ in Equation \ref{eq:p_model} and \begin{multline} p_{outlier}\left(F_i|\lambda_i,\sigma_i,\bm{\kappa},V_{o},P_o\right) = \dots \\ \frac{1}{\sqrt{2\pi\left(s_{i}^2 + V_{o}\right)}} \exp\left(-\frac{[F_i - C_i]^2}{2\left[s_{i}^2 + V_{o}\right]}\right) \end{multline} \noindent{}such that the likelihood $\mathcal{L}$ becomes: \begin{multline} \mathcal{L} = \prod_{i=1}^{N} \left[ \frac{1-P_o}{\sqrt{2\pi{}s_{i}^2}}\exp\left(-\frac{[F_i - E_i]^2}{2s_{i}^{2}}\right) \right.\\ \left. + \frac{P_o}{\sqrt{2\pi\left[s_{i}^2 + V_o\right]}}\exp\left(-\frac{[F_i - C_i]^2}{2\left[s_{i}^{2} + V_o\right]}\right)\right] \label{eq:full_likelihood} \end{multline} I define the full parameter space with ${\bm{\theta} \equiv \left(\bm{\psi},\{z,\sigma_s,\{c_{b_k}\}_{k=0}^{m},f\}_{b=0}^{N_{c}},V_o,P_o\right)}$. From Bayes theorem the posterior probability distribution for $\bm{\theta}$ (up to a constant) is given by \begin{eqnarray} \mathcal{P} & \propto & likelihood \times prior \nonumber \\ p(\bm{\theta}|\{F_i\}_{i=1}^{N}) & \propto & p(\{F_i\}_{i=1}^{N}|\bm{\theta})\,\times\,p(\bm{\theta}) \label{eq:probability} \end{eqnarray} \noindent{}where $p(\bm{\theta}|\{F_i\}_{i=1}^{N})$ is the probability $\mathcal{P}$ of $\bm{\theta}$ given the data (and given the model: e.g., see Section \ref{sec:sun}), $p(\{F_i\}_{i=1}^{N}|\bm{\theta})$ is our previously defined likelihood function $\mathcal{L}$ (Equation \ref{eq:full_likelihood}), and $p(\bm{\theta})$ is the prior probability distribution. Priors are discussed in more detail in Section \ref{sec:mcmc}. % Log probability, expand function %\begin{eqnarray} %\log({\mathcal{P}}) & \propto & \log{likelihood} + \log{prior} \nonumber \\ %\log(\mathcal{P}) & \propto & \prod_{i=1}^{N} \left[ \frac{1-P_b}{\sqrt{2\pi\sigma_{i}^2}}\,\exp\,\left(-\frac{[F_i - E_i]^2}{2\sigma_{i}^{2}}\right) + \frac{P_b}{\sqrt{2\pi\left[\sigma_i^2 + V_o\right]}}\,\exp\,\left(-\frac{[F_i - C_i]^2}{2\left[\sigma_i^{2} + V_o\right]}\right)\right] \nonumber \\ %& & \dots + \ln(p(\bm{\theta})) %\end{eqnarray} With all of the combined effects there are 16 free parameters including $\bm{\psi}$, the astrophysical quantities of interest. For the examples presented in Section \ref{sec:examples}, this requires interpolation between $9.2 \times 10^{10}$ pixels in four dimensions. While the description might appear daunting, the problem is tractable, numerically efficient, and easy to configure. By default \sick{} numerically solves this problem in three sequential steps, which are described in the following sections: scattering, optimisation, and Monte-Carlo Markov Chain (MCMC) sampling. \subsection{Initial Scattering} \label{sec:scattering} We seek to maximise $\mathcal{P}$ (or in practice, $\log\mathcal{P}$) and calculate the posterior probability distributions for $\bm{\theta}$ given the data. In the first step I calculate the log-probability $\log{(\mathcal{P})}$ for $N_{sample}$ randomly drawn points from all over the astrophysical parameter space $\bm{\psi}$. Since \sick{} allows for arbitrarily large parameter spaces in $N_{D}$ dimensions, initially sampling the parameter space $N_{sample}$ times provides a coarse landscape of probability. 
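In its simplest form, this scattering step reduces to the sketch below; the \texttt{log\_probability} function and the uniform sampler are placeholders rather than \sick{} routines, and, as described in the following paragraphs, \sick{} estimates the redshift and continuum parameters for each point rather than drawing them blindly.
\begin{verbatim}
#include <stdlib.h>

/* Illustrative sketch only: draw n_sample points
   uniformly within the grid bounds, evaluate each
   log-probability, and keep the most probable
   point (ndim <= 16 assumed; log_probability is
   a placeholder, not a sick function).          */
double log_probability(const double *theta,
                       int ndim);

void initial_scattering(const double *lo,
                        const double *hi, int ndim,
                        int n_sample,
                        double *best_theta)
{
    double theta[16], best = -1.0e300;
    int n, d;
    for (n = 0; n < n_sample; n++) {
        for (d = 0; d < ndim; d++)
            theta[d] = lo[d] + (hi[d] - lo[d]) *
                       rand() / (double) RAND_MAX;
        double lp = log_probability(theta, ndim);
        if (lp > best) {
            best = lp;
            for (d = 0; d < ndim; d++)
                best_theta[d] = theta[d];
        }
    }
}
\end{verbatim}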
If the probability distribution function were smoothly distributed across all $\bm{\psi}$, or the optimisation step was sufficiently robust for all potential applications, then in principle \textit{any} single point would be an adequate starting guess. However there is no way of knowing \textit{a priori} how smooth the posterior distribution will be for a given problem. The $N_{samples}$ points are uniformly drawn in $\bm{\psi}$ by default, unless priors are specified (see Section \ref{sec:mcmc}). A model spectrum $S(\bm{\psi})$ is interpolated for each point, which is used to estimate the redshift and continuum parameters. After smoothing the model fluxes $I_j$ by $\sigma_s$ (as drawn from an explicit prior, or estimated by $\left|\mathcal{N}\left(0, 1\right)\right|$) the smoothed spectrum is cross-correlated with the data to yield a redshift estimate. Continuum parameters $\{c_k\}_{k=0}^{m}$ are similarly estimated by fitting a function (a polynomial in this case) to the data divided by the smoothed flux at each $\lambda_i$. Estimating continuum and redshift provides us with a reasonable guess of the probability for any randomly scattered point $\bm{\psi_i}$. If outlier modelling is included in the model then $P_o$ is distributed as $\mathcal{U}\left(0, 1\right)$ by default, and $V_o$ is estimated by $\mathcal{N}\left(\widetilde{F_i}, \frac{1}{2}\widetilde{F_i}\right)$. This procedure is \textit{only} employed for the initial random scattering stage: it is \textit{not} applicable for the MCMC phase. \subsection{Optimisation} \label{sec:optimise} After the random scattering step, the most probable $\bm{\theta}$ point is used as an initial guess for numerical optimisation. A number of suitable minimisation algorithms are available in \sick{} through the SciPy \citep{scipy} optimization module. The \citet{nelder-mead} algorithm is employed by default. Unlike other optimisation techniques, the Nelder-Mead algorithm does not approximate first- or second-order derivatives, and can consequently be less efficient than other approaches. However it is robust in high-dimensional space, even in the presence of substantial noise, and has been successfully employed in a wide range of engineering and scientific problems. The reader is encouraged to experiment with other optimisation approaches if the Nelder-Mead algorithm proves unsuitable or takes an untenable amount of time. Since these optimisation algorithms are minimisation techniques and I seek to maximise the log-probability $\log{\left(\mathcal{P}\right)}$, I numerically optimise the parameters $\bm{\theta}$ by minimising the negative log-probability $-\log{\left(\mathcal{P}\right)}$. \subsection{Monte-Carlo Markov Chain Sampling} \label{sec:mcmc} The random scattering and numerical optimisation steps efficiently provide an accurate estimate of the optimal parameters $\bm{\theta_{opt}}$. Once the optimisation step is complete, \sick{} employs the affine-invariant ensemble sampler proposed by \citet{goodman;weare}, and implemented by \citet{emcee}. The Metropolis-Hastings MCMC algorithm is employed by default. \sick{} allows for the model settings to be specified in a human-readable \textsc{yaml}- or \textsc{json}-formatted configuration file\footnote{The reader is referred to the online documentation at http://astrowizici.st/sick/ for an example.}, where the number of Goodman \& Weare walkers can be specified, as well as the number of samples to perform. 
When the optimisation step is used, the initial points are taken from a small multi-dimensional ball around the optimised parameters $\bm{\theta_{opt}}$. If the optimisation step is not performed, initial values for the MCMC walkers are drawn in the same way as described for the scattering process. Priors represent our initial knowledge about a particular parameter before looking at the data, and are necessary for any Bayesian analysis. A number of different prior distributions can be specified by the user in the \sick{} model configuration file. When no prior is explicitly specified, the following uninformative prior distributions are assumed (for all channels, where appropriate): \begin{eqnarray} p\left(\bm{\psi}_{dim}\right) &\,=\,& \mathcal{U}\left(\min\left[\bm{\psi}_{dim}\right], \max\left[\bm{\psi}_{dim}\right]\right) \\ p\left(P_o\right) &\,=\,& \mathcal{U}\left(0, 1\right) \\ p\left(\log{f}\right) &\,=\,& \mathcal{U}\left(-10, 1\right) \\ p\left(V_o,\sigma_s\right) &\,=\,& \left\{ \begin{array}{c l} 1\,, &\mbox{for values greater than zero}\\ 0\,, &\mbox{otherwise} \end{array}\right. \\ p\left(z,\{c_k\}_{k=0}^{m}\right) &\,=\,& 1 \\ \label{eq:default_priors} \end{eqnarray} A consequence of allowing irregular model grids is that occasionally a model spectrum cannot be interpolated for some values of $\bm{\psi}$, even if they fall within $\left(\min\left[\bm{\psi}\right], \max\left[\bm{\psi}\right]\right)$. In these cases $p\left(\bm{\psi}\right) = 0$ and thus $\log\left(\mathcal{P}\right) = -\infty$. Quantitatively ensuring numerical convergence for MCMC analyses remains an unsolved problem. There are a number of excellent resources on MCMC sampling which outline this issue in greater detail. The mean acceptance fraction, auto-correlation times, and values of the parameter chains themselves all provide reasonable proxy indicators of convergence. The user is encouraged to inspect these metrics and devise some heuristic for when convergence has unambiguously been achieved. The final state of every MCMC analysis is automatically saved by \sick{}, allowing users to resume their analysis from the most recent state if they believe the system has not converged. \subsection{Self-consistent inference test} \label{sec:inference-test} As an initial test of the probabilistic framework I have interpolated a noise-free synthetic spectrum which will act as a faux observation. A single channel from the AMBRE synthetic library \citep{ambre} has been employed. These spectral channels range from 475\,nm to 685\,nm and were specifically calculated by the AMBRE group for the Gaia-ESO Survey \citep{gaia-eso}. I have applied a number of transformations to the interpolated spectrum: a second-order polynomial enters multiplicatively to represent the continuum, the spectrum is redshifted, and fluxes are convolved to a spectral resolution of $\mathcal{R} \sim 10000$. The faux data are resampled to a uniform spacing of $\sim{}0.08$\,nm, effectively discarding $99.5\%$ of the original pixels, which were sampled at $\sim4\times10^{-4}$\,nm. White noise has been added to replicate a S/N ratio of $\sim 7$ pixel$^{-1}$. I also assumed that the observed variance is systematically underestimated by 10 per cent: conceptually the faux data is $\sim{}$10 per cent noisier than what an observer would estimate from the flux counts. \begin{figure*} \label{fig:chains} \includegraphics[height=\textheight]{figures/chains.pdf} \caption{Points sampled by the 200 walkers at each step during the self-consistent inference test. 
The true values are marked in blue. The first 1250 steps are discarded as the burn-in period.}
\end{figure*}
\begin{figure*}
\label{fig:corner-inference}
\includegraphics[width=\textwidth,height=\textwidth]{figures/corner.pdf}
\caption{Marginalised posterior distributions for all parameters $\bm{\theta}$ from a faux observation with spectral resolution $\mathcal{R} \sim 10,000$ and S/N ratio $\sim{}7$\,pixel$^{-1}$. True values are marked in blue. This figure demonstrates that precise inference of stellar parameters can be made with high-resolution spectra, even in the presence of substantial noise.}
\end{figure*}
\begin{table*}
\centering
\caption{True and inferred model parameters for the self-consistent inference test}
\label{tab:inference-test}
\begin{tabular}{llrr}
\hline \hline
Parameter & Description & Truth & Inferred \\
\hline
$T_{\rm eff}$ & Effective photospheric temperature (K) & 5454 & 5483$_{-137}^{+125}$ \\
$\log{}g$ & Surface gravity & 4.124 & 4.084$_{-0.274}^{+0.262}$ \\
${\rm [Fe/H]}$ & Metallicity & $-0.514$ & $-0.512_{-0.160}^{+0.156}$ \\
$[\alpha/{\rm Fe}]$ & $\alpha$-element abundance & $+0.02$ & $+0.05_{-0.12}^{+0.13}$ \\
$v$ & Velocity (km s$^{-1}$) & 13.0 & $9.5_{-5.5}^{+5.3}$ \\
$\sigma_{blue}$ & Gaussian smoothing sigma (\AA{}) & 0.581 & 0.518$_{-0.34}^{+0.39}$ \\
$\ln{f_{blue}}$ & Logarithm of fractionally underestimated variance & $-2.30$ & $-2.28_{-0.02}^{+0.02}$ \\
$b_{0}$ & Continuum polynomial coefficient ($\times10^{-3}$) & 1.23 & $1.26_{-0.32}^{+0.32}$ \\
$b_{1}$ & Continuum polynomial coefficient & $-0.593$ & $-1.001_{-3.634}^{+3.681}$ \\
$b_{2}$ & Continuum polynomial coefficient & $-0.569$ & $894_{-10335}^{+10367}$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\label{fig:spectrum-inference}
\includegraphics[width=\textwidth]{figures/spectrum.pdf}
\caption{A faux observed spectrum (black) of $\mathcal{R} \sim 10000$ and S/N ratio of $\sim7$, with the variance underestimated by $\sim$10\%, which was used for the self-consistent inference test. The recovered maximum likelihood model spectrum is shown in red.}
\end{figure*}
Default priors and configurations were employed. 200 walkers were used to explore the parameter space for 1250 steps as burn-in, and another 250 steps were used to sample the posterior. The chains for each parameter are shown in Figure \ref{fig:chains}. At least by eye, the chains appear to have converged after just 300 steps. The marginalised posterior distributions are shown in Figure \ref{fig:corner-inference}, with the accepted truth values marked. The true and inferred values for all parameters are tabulated in Table \ref{tab:inference-test}. This test demonstrates that accurate and precise inference of stellar parameters can be made even in the presence of substantial noise. The true values of all parameters are recovered within the uncertainties, and the largest (relative) uncertainties are observed for continuum coefficients $b_1$ and $b_2$. Figure \ref{fig:corner-inference} shows there are strong covariances between these parameters. This is unsurprising given the shape of the spectrum (Figure \ref{fig:spectrum-inference}), which could reasonably be fit with a linear continuum, even though the continuum was truly represented by a second-order polynomial. The logarithm of additional fractional variance $\ln{f_{blue}}$ is precisely constrained, suggesting the noise model is an excellent match to the data.
Given the simplicity of the noise model, the precision of inference on $\ln{f_{blue}}$ is unlikely to be recovered for real data.
\section{Utility}
\label{sec:examples}
Here I include two realistic applications to demonstrate the capabilities and ease-of-use of the software.
\subsection{Sol}
\label{sec:sun}
A high-resolution ($\mathcal{R} \sim$ 20000), high S/N ($\sim{}$150 pixel$^{-1}$) twilight spectrum was obtained from the GIRAFFE/FLAMES archive\footnote{eso.org/observing/dfo/quality/GIRAFFE/pipeline/solar.html}. The H875.7 setup was chosen for this example, yielding spectra from 848.2\,nm to 899.2\,nm. Data points redder than 874\,nm have been discarded to more closely match the Gaia RVS wavelength coverage, and to demonstrate the recoverability of stellar parameters on real data. The AMBRE \citep{ambre} synthetic grid was also employed for this example. The continuum was represented with a third-order polynomial, and no redshift or outlier modelling was included. The central $\pm$0.1 nm cores of the Ca II near-infrared triplet lines were masked (specifically rest wavelengths 849.7-849.9\,nm, 854.1-854.3\,nm, and 866.1-866.3\,nm), as the cores are strongly affected by non-LTE effects that are not accounted for in the 1D LTE model atmospheres used to generate the AMBRE spectra. Pixels between rest wavelengths 850.10\,nm and 850.37\,nm were also masked as the models did not seem to accurately reproduce the data in this region.
\begin{figure*}
\label{fig:solar}
\includegraphics[width=\textwidth,height=\textwidth]{figures/solar.pdf}
\caption{Stellar parameter ($\bm{\psi}$: $T_{\rm eff}$, $\log{g}$, [Fe/H], [$\alpha$/Fe]) posterior distributions for the GIRAFFE/FLAMES twilight spectrum. Nuisance parameters are not shown. Maximum likelihood values for each parameter are drawn in red, and the 16\%, 50\% and 84\% quantiles are shown as dashed lines. The posterior distributions are precise, and close to the accepted values (not shown) of $T_{\rm eff} = 5777$\,K, $\log{g} = 4.445$, slightly discrepant from [$\alpha$/Fe] = 0, and significantly deviant from [Fe/H] = 0. This test illustrates the precision achievable using \sick, but highlights the need for accurate models.}
\end{figure*}
For this example 1000 draws were made during the random scattering stage, and optimisation was performed using the default options. One hundred walkers explored the parameter space for 1000 steps, before another 200 steps were used to sample the posterior. The posterior distributions for stellar parameters are shown in Figure \ref{fig:solar}. The currently accepted stellar parameters for the Sun are $T_{\rm eff} = 5777$\,K, $\log{g} = 4.445$, [Fe/H] = 0, and [$\alpha$/Fe] = 0. The inferred parameters are in excellent agreement with the accepted values for effective temperature and surface gravity: $T_{{\rm eff},{\rm inferred}} = 5770^{+20}_{-14}$, and $\log{g}_{\rm inferred} = 4.40^{+0.06}_{-0.05}$. The agreement is less pleasing for $\alpha$-element enhancement, and worse for mean metallicity: [$\alpha$/Fe]$_{\rm inferred} = -0.03^{+0.01}_{-0.01}$, and [Fe/H]$_{\rm inferred} = -0.07^{+0.01}_{-0.01}$. While our inferences are precise, the offset in mean metallicity suggests the models may be slightly inaccurate in this wavelength regime, or the solar abundances employed are slightly discrepant, or perhaps there are unaccounted-for telluric features in the data. This test highlights the need to verify the accuracy of the input models.
A useful extension of this project would be to simultaneously include telluric spectra, as well as pre-computed asteroseismic models, such that spectroscopy and time-series photometry could simultaneously constrain stellar parameters. Asteroseismology can accurately and precisely constrain surface gravity, but requires an input effective temperature. Conversely, surface gravity is the stellar parameter least constrained by spectroscopy, which can nevertheless precisely constrain effective temperature and is the only method available to measure chemical abundances. Since \sick{} allows for any dispersion unit equivalency (e.g., frequency, wavelength, et cetera), spectroscopic and asteroseismic models can be trivially coupled into a generative framework for unprecedented accuracy and precision in stellar spectroscopy.
\subsection{Globular Clusters}
Globular cluster stars are excellent candidates for testing \sick{} over a large range of model parameters. Here I consider a realistic scenario of low-resolution, noisy spectra of red giant and asymptotic giant branch cluster candidates observed using the AAOmega spectrograph on the Anglo-Australian Telescope in Coonabarabran, Australia. The 580V and 1700D gratings were employed, yielding one channel from 370\,nm to 580\,nm with a spectral resolution $\mathcal{R} \sim 1300$, and another from 820\,nm to 890\,nm with spectral resolution $\mathcal{R} \sim 8000$. Although these data are generally considered to be of low spectral resolution, I show that astrophysical parameters can be accurately inferred with high precision. The AMBRE synthetic grid has also been employed for this example. Each observed channel was treated identically: individual doppler shifts were permitted in each channel, and the continuum was represented as a second-order polynomial. Outlier modelling was included. The parameter space was randomly sampled 1000 times before numerical optimisation was performed using the Nelder-Mead algorithm. 200 walkers explored the parameter space for a burn-in period of 450 steps, before sampling the posterior for another 50 steps. The auto-correlation times, sampled chain values, and mean acceptance fractions demonstrated that convergence was unambiguously achieved for all analysed sources. Radial velocities were used to identify and discard non-cluster members. The metallicity distribution of the remaining cluster stars is shown in Figure \ref{fig:clusters}, where a comparison to the literature is made. The \citet{harris} catalogue (accessed June 2014) has been used as a literature reference for all globular clusters. Note that although there are measurable intrinsic metallicity spreads and active debates on the mean abundances of these clusters, I have not adopted abscissa uncertainties.
\begin{figure}[h!]
\label{fig:clusters}
\includegraphics[width=0.5\textwidth]{figures/clusters.pdf}
\caption{Metallicities of cluster stars inferred from noisy, low-resolution spectra obtained with the AAOmega spectrograph. The agreement with the literature values (see text) is very reasonable.}
\end{figure}
Even ignoring uncertainties in the literature values, the agreement with the literature is exceptional for noisy, low-resolution spectra. No systematic trend is present and a negligible mean metallicity difference of $-0.001$ dex (an accuracy of $<$0.2\%) is observed over three orders of magnitude. The agreement would be remarkable for high-resolution spectra with exquisite S/N ratios.
I emphasise that these results were obtained without performing any `calibration' or ad-hoc posterior corrections, which are usually entirely empirical and not motivated by an astrophysical understanding. Individual uncertainties in stellar parameters are also extremely reasonable, particularly given the low spectral resolution and S/N of the data. Uncertainties of $\sim$75 K in $T_{\rm eff}$, $\sim$0.2 in $\log{g}$, $\sim$0.1 dex in [Fe/H] and $\sim$0.05 dex in [$\alpha$/Fe] are obtained in the presence of substantial noise (e.g., $S/N \sim 15$). The most uncertain results were obtained for {NGC\,7089}, where the lowest S/N ratios were achieved and only six cluster members were observed.
\section{Conclusion}
\label{sec:conclusions}
A probabilistic code has been introduced to approximately forward model spectroscopic data, and thereby allow for simultaneous inference of astrophysical and nuisance parameters. The generative model approach described here has a number of advantages over previously published techniques. Preparatory and subjective decisions (e.g., redshift and placement of continuum) are objectively treated within a scalar-justified mathematical model. This allows users to credibly characterise the parameter posterior distributions and understand how they affect astrophysical measurements. Until now most (if not all) published techniques have treated these processes separately, thereby systematically biasing the results and generally under-estimating the parameter uncertainties. With applications to both high- and low-resolution stellar spectra, I have demonstrated that the simultaneous incorporation of these effects leads to remarkable improvements in both accuracy and precision. While the examples presented here have focussed on stellar spectra, the code is agnostic about \textit{what} the astrophysical parameters describe: the framework can be easily used for any kind of quantifiable astrophysical process. The code is MIT-licensed (freely distributable, open source) and has an extensive automatic testing suite, which includes regular reproduction of the examples presented in this \article{}. Complete documentation is available online\footnote{astrowizici.st/sick}, which includes a number of additional examples and tutorials. In the spirit of scientific reproducibility, the online documentation is complemented with the files necessary to reproduce examples presented in this \article{}. The source code is distributed using \textsc{git}, and hosted online at GitHub\footnote{github.com/andycasey/sick}. Great care has been taken to ensure the code is easy to use and allows users to obtain precise inferences with little effort. I strongly encourage the use of this software for existing and future spectral datasets. With the sheer volume and high quality of spectral data available, astronomers must begin to adopt objective, generative models for their data. Subtle astrophysical processes can only be discovered and understood with the proper characterisation of uncertainties afforded by generative models.
\acknowledgements
I am pleased to thank Louise Howes for extensive testing of this code, Martin Asplund for providing commentary, as well as Sergey Koposov and Jarryd Page for constructive discussions on this work. This research has made extensive use of NASA's Astrophysics Data System Bibliographic Services, the Coveralls continuous integration service, GitHub, and the \textsc{triangle.py} code \citep{triangle.py}.
This research has been funded by European Research Council grant 320360: The Gaia-ESO Milky Way Survey. \bibliographystyle{apj} \bibliography{biblio} \end{document}
{ "alphanum_fraction": 0.7698663134, "avg_line_length": 59.3071428571, "ext": "tex", "hexsha": "34a06125a1dd24b1d99fd4c5a4c3fd45bf4b1065", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-06-08T12:16:36.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-08T12:16:36.000Z", "max_forks_repo_head_hexsha": "6c37686182794c4cafea45abf7062b30b789b1a2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andycasey/sick", "max_forks_repo_path": "document/manuscript.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "6c37686182794c4cafea45abf7062b30b789b1a2", "max_issues_repo_issues_event_max_datetime": "2016-05-17T02:28:52.000Z", "max_issues_repo_issues_event_min_datetime": "2015-05-18T11:34:04.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andycasey/sick", "max_issues_repo_path": "document/manuscript.tex", "max_line_length": 301, "max_stars_count": 8, "max_stars_repo_head_hexsha": "6c37686182794c4cafea45abf7062b30b789b1a2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andycasey/sick", "max_stars_repo_path": "document/manuscript.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-24T04:46:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-05T15:09:44.000Z", "num_tokens": 10569, "size": 41515 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                                                 %
%   GEANT manual in LaTeX form                                    %
%                                                                 %
%   Michel Goossens (for translation into LaTeX)                  %
%   Version 1.00                                                  %
%   Last Mod. Jan 24 1991 1300 MG + IB                            %
%                                                                 %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Documentation{R.Brun, F.Bruyant}
\Submitted{01.10.84}
\Revised{26.10.93}
\Version{Geant 3.16}
\Routid{BASE090}
\Makehead{The reference systems and physical units}
\section{The {\tt MA}ster {\tt R}eference {\tt S}ystem ({\tt MARS})}
The kinematic variables of the particles transported by {\tt GEANT} are always referred to the so-called {\tt MA}ster {\tt R}eference {\tt S}ystem ({\tt MARS}). This system is implicitly defined as the local reference system of the first volume defined, which contains all the others. This is a Cartesian coordinate system with axes $\hat{x}, \hat{y}, \hat{z}$ where $\hat{z} = \hat{x} \times \hat{y}$. If the axes are labelled {\tt (X,Y,Z)}, then the point {\tt P} is represented in Fig.~\ref{fg:base090-1}.
\begin{figure}[hbt]
\centering
\epsfig{file=eps/base090-1.eps,width=10cm}
\caption{{\tt GEANT} reference system}
\label{fg:base090-1}
\end{figure}
Tracking is performed in the {\tt MARS} and the input position for user routines such as the magnetic field routine is given in this system.
\section{The local reference systems ({\tt MRS} and {\tt DRS})}
As explained in {\tt [GEOM001]}, the setup is described via the definition of an initial volume inside which all the others will be positioned. In {\tt GEANT} terminology, each time a volume has contents, created either via division or by positioning other volumes inside, it is called a {\tt MOTHER}. The volumes contained are called {\tt DAUGHTER}s, and they, in turn, can contain volumes to a depth of 15 levels. This is sometimes referred to as a {\it Russian doll} geometry. Every volume defined in {\tt GEANT} has a reference system attached to it (see {\tt GEOM} section). When this volume has contents, this is referred to as the {\tt M}other {\tt R}eference {\tt S}ystem ({\tt MRS}, with origin in O$_m$). Daughters are positioned inside the mother with respect to the {\tt MRS}. The {\tt MRS} of the first volume defined, containing all the others, is nothing other than the {\tt MARS}. Each one of the daughters has its own reference system, which is referred to as the {\tt D}aughter {\tt R}eference {\tt S}ystem, or {\tt DRS} with origin in O$_d$. The transformation of a point from the {\tt MRS} (V$_m$) to the {\tt DRS} (V$_d$), at any level, is performed using a rotation matrix $[R]$ and a translation vector $T$ via the relation:
\[ V_d = [R](V_m - T) \]
The components of $T$ are the projections of the vector $ (O_m, O_d) $ onto the {\tt MRS} axes. The rotation matrices are computed from the spherical angles of each of the axes of the daughter reference systems ({\tt I, II, III}) with respect to the mother reference system ({\tt 1, 2, 3}). The spherical angles $\Theta$ and $\Phi$ of a direction $D$ are defined as follows:
\begin{DLtt}{MMMMM}
\item[$\Theta$] is the angle formed by the axis 3 and $D$ ($0^{\circ}\;<\;\Theta\;<\;180^{\circ}$).
\item[$\Phi$] is the angle formed by the axis 1 and the projection of $D$ onto the plane defined by the axes 1 and 2 ($0^{\circ}\;<\;\Phi\;<\;360^{\circ}$).
\end{DLtt}
Examples are given in {\tt [GEOM200]}. The various rotation matrices required for a given setup must be defined by the user during the initialisation stage. A number is assigned to each matrix {\tt [GEOM200]}.
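As a purely illustrative sketch (this is {\it not} a {\tt GEANT} routine; the function and variable names are hypothetical), the transformation above can be written as:
\begin{verbatim}
/* Illustrative sketch only (not a GEANT routine):
   transform a point from the MRS to the DRS,
   i.e.  V_d = [R](V_m - T).                     */
void mrs_to_drs(const double r[3][3],
                const double t[3],
                const double vm[3],
                double vd[3])
{
    int i, j;
    for (i = 0; i < 3; i++) {
        vd[i] = 0.0;
        for (j = 0; j < 3; j++)
            vd[i] += r[i][j] * (vm[j] - t[j]);
    }
}
\end{verbatim}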
The translation vector and the number of the rotation matrix are specified by the user when the volumes are positioned inside their mother {\tt [GEOM110]}. \section{Physical units} Unless otherwise specified, the following units are used throughout the program: centimeter, second, kilogauss, GeV, GeV c$^{-1}$ (momentum), GeV c$^{-2}$ (mass) and degree.
{ "alphanum_fraction": 0.6243947429, "avg_line_length": 46.6344086022, "ext": "tex", "hexsha": "56f6e9b703d5579b348ffa50da96ac2e9582abe6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "geant/base090.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "geant/base090.tex", "max_line_length": 75, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "geant/base090.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 1130, "size": 4337 }
\section{Computer Science Approach}
\label{sec:ComputerScienceApproach}
\myext{../sections/5results}
The following definitions are adopted from the code:
\begin{lstlisting}[language=C, caption=Data type definitions, label=lst:definitions]
#define FLT float
#define INS int
#define DBL double
#define INT long
#define UCHR unsigned char
#define CHR char
#define UINT unsigned int
#define ULON unsigned long
\end{lstlisting}
\subsection{VTK format}
\label{subsec:VTKformat}
The Visualisation Toolkit \texttt{VTK} \cite{Kit} is an extremely popular open-source software package for the graphic visualisation of scientific data. \texttt{VTK} \texttt{API}s are available in many different programming languages and facilitate the generation of \texttt{VTK} files. Until now, the \texttt{Y} code used the \texttt{VTK} \texttt{API} for the programming language \texttt{c} to generate output files.
\bigbreak
The required \texttt{VTK} version \texttt{5.8} library dependencies proved to be outdated and unobtainable. This limited the applicability of the entire \texttt{Y} code to operating systems which support this obsolete version. Specifically, all attempts by the author to obtain such a version for testing were unsuccessful. Rewriting the code using a newer \texttt{VTK} version (i.e. a newer \texttt{VTK API}) would only temporarily solve this problem.
\bigbreak
To diversify application to a wider range of operating systems, it was deemed necessary to hard-code the output without using \texttt{VTK} libraries. Aside from the partly ill-documented official website \cite{Kit}, one of the few reliable resources available is the \texttt{Earth Models} website by Bunge \cite{Bun09}. In what follows, the aim is to provide the reader with a more detailed and in-depth description of the \texttt{VTK} format.
\subsection{VTK legacy file format}
\label{subsec:VTKlegacyfileformat}
The \texttt{VTK} legacy file format is sufficiently documented (see \cite{Kit} for official documentation) and relatively easy to implement. On the downside, it only supports a minimal set of features and is relatively inflexible. List. \ref{lst:vtklegacy} in section \ref{sec:results} shows an example \texttt{VTK} legacy output file.
\bigbreak
The header defines the \texttt{VTK} version, file name, data encoding (\texttt{ascii} or \texttt{binary}) and dataset type\footnote{here: unstructured grid} \cite{Kit}.
\bigbreak
The data is divided into sections, including points, cells and cell types. The section header contains the keyword along with the total number of entities in the section. The data is written continuously using spaces as separators \cite{Kit}.
\bigbreak
Every section has additional unique formatting rules. The points section requires an additional data type identifier (here: \lstinline[language=C]{float}) to correctly identify the coordinate data \cite{Kit}.
\bigbreak
The data in the cells section lists the node numbers of each element, each preceded by a number identifying the total number of nodes of the element \cite{Kit}.
\bigbreak
The cell type section identifies the geometric shape of each element, given by an index. Each index is written on a separate line \cite{Kit}.
\subsection{VTK XML file format}
\label{subsec:VTKXMLfileformat}
Compared to \texttt{legacy} files, \texttt{XML} files are much more difficult to implement. An example is given in List. \ref{lst:vtkxmlinline} in section \ref{sec:results}.
\bigbreak
The file is structured using nested keyword headers which are enclosed in angle brackets ("<" and ">"), in accordance with the \texttt{XML} language. Text indentation is not required but enhances readability.
\bigbreak
Keyword headers are opened using \lstinline[language=XML]{<keyword>} and closed using \lstinline[language=XML]{</keyword>}. Keywords which do not contain any sub-keyword headers are opened and closed in one line using \lstinline[language=XML]{<keyword/>}. Keyword headers usually contain additional mandatory and optional attributes in the format \lstinline[language=XML]{option="value"} \cite{Kit}.
\bigbreak
The file header specifies versions, the \texttt{VTK} file type and the \lstinline[language=XML]{byte_order}\footnote{endianness, here: little endian} \cite{Kit}. Big endian byte order was not tested, as such operating systems were not available for testing.
\bigbreak
Depending on the specified \lstinline[language=XML]{VTKFile type}, a unique set of sub-keyword headers is used. For the \lstinline[language=XML]{Unstruc}\allowbreak\lstinline[language=XML]{tured}\allowbreak\lstinline[language=XML]{_Grid} \texttt{VTK} file type chosen here, the file content is structured using \lstinline[language=XML]{<Piece>}. Within this header, it is required to specify the number of points and cells (elements) using \lstinline[language=XML]{NumberOfPoints} and \lstinline[language=XML]{NumberOfCells} \cite{Kit}.
\bigbreak
\lstinline[language=XML]{<Piece>} contains definitions of \lstinline[language=XML]{<Points>} and \lstinline[language=XML]{<Cells>}, among other optional keyword sub-headers. \lstinline[language=XML]{<Points>} expects exactly one \lstinline[language=XML]{<DataArray>} containing the nodal coordinates. \lstinline[language=XML]{<Cells>} expects one \lstinline[language=XML]{<DataArray>} each for the cell connectivity (node configuration of each cell), the cell offsets (offsets in the cell connectivity list), as well as the cell type (cell shape index, see \cite{Kit}) \cite{Bun09}.
\bigbreak
Each \lstinline[language=XML]{<DataArray>} header includes the options \lstinline[language=XML]{type} (data type), \lstinline[language=XML]{Name}, \lstinline[language=XML]{NumberOfComponents}, \lstinline[language=XML]{format}, \lstinline[language=XML]{RangeMin} and \lstinline[language=XML]{RangeMax}.
\bigbreak
\lstinline[language=XML]{RangeMin} and \lstinline[language=XML]{RangeMax} indicate the minimal and maximal values of the data and are optional\footnote{confirmed by tests}. Simple array search functions were implemented in order to find the individual values.
\bigbreak
\lstinline[language=XML]{Name} specifies a unique name, enclosed by \texttt{""}, for the data set. The data array contained in \lstinline[language=XML]{<Points>} requires no name. The data arrays enclosed by \lstinline[language=XML]{<Cells>} require fixed names \lstinline[language=XML]{Name="connectivity"}, \lstinline[language=XML]{Name="offsets"} and \lstinline[language=XML]{Name="types"} \cite{Kit}.
\bigbreak
\lstinline[language=XML]{format} specifies the encoding of the data.
\bigbreak
In the case \lstinline[language=XML]{format="ascii"}, the data is provided in decimal form, delimited by spaces, similar to section \ref{subsec:VTKlegacyfileformat} \cite{Bun09}. Specifying an inappropriate data \lstinline[language=XML]{type} for this format will not leave the data completely unusable, although induced type casting may yield unexpected results.
The dependence on spaces as delimiters is highly impractical when transferring files between different operating systems and applications.
\bigbreak
A more robust option is given by \lstinline[language=XML]{format="binary"}. Characters such as "<" (binary \texttt{0b00111100}, decimal 60) may potentially appear in binary encoded data. As these characters compose \texttt{XML} keyword headers, binary encoding is not realizable \cite{Bun09}. Thus, the data is encoded to \texttt{base 64} (\texttt{b64}) (see \ref{subsec:b64encoding}).
\bigbreak
For \texttt{b64} encoding, a data header needs to be prepended to the data. The data header is always of type \lstinline[language=C]{int32} (data size: 4 bytes) and specifies the data size, i.e. the number of binary bytes of the subsequent data. The data size is not to be confused with the length of the printed \texttt{b64} encoded data string in the output file, which is insignificant for now. The data size rather refers to the size of the binary data array in \lstinline[language=C]{bytes}, stored as specified by the data type given in the keyword header. Specifying a data \lstinline[language=XML]{type} of a different \texttt{byte} size for this format produces errors and renders the data useless.
\bigbreak
As an example, consider a \lstinline[language=C]{float32} array which contains $100$ data values. The corresponding header is given by:
\begin{equation}
\label{eq:b64inlineex}
h=100\,\frac{32\,{\rm bits}}{8\,{\rm bits}/{\rm byte}}=400\,{\rm bytes}
\end{equation}
\bigbreak
The header then needs to be encoded to b64, separately from the data. The encoded header string is immediately followed by the encoded data string, without any delimiters. Since the data header type is fixed, the \texttt{b64} encoded data header string length is always the same. Thus, no additional offset is necessary to specify the positional onset of the data after the data header \cite{Bun09}.
\bigbreak
Instead of supplying the data inline, it is possible to append the data once, in an \lstinline[language=XML]{<AppendedData>} section at the end of the file. This is achieved by specifying option \lstinline[language=XML]{format="appended"} for each data array keyword header and providing an \lstinline[language=XML]{offset} each time.
\bigbreak
In doing so, the entire data is provided in a single long \texttt{b64} encoded string, appended to an underscore "\_". As in the inline format, each encoded data string must be prepended by a \texttt{b64} encoded header, which specifies the size of the following unencoded data section in bytes. The headers and data sections are each encoded separately, without any delimiters between them.
\bigbreak
The offset value provided in each \lstinline[language=XML]{<DataArray>} specifies the number of bytes of unencoded data from the underscore to the beginning of the corresponding header. The method of evaluation of header offsets is illustrated in Tab. \ref{tab:offset}.
\begin{table*}[t]
\caption{Breakdown of offset values for an example output file with $642$ points and $1280$ cells.
The desired (header) offsets are given in column $2$.}
\begin{tabular}{@{}llllllll@{}}
\toprule
Name & header offset (bytes) & data offset (bytes) & data size (bytes) & data type size (bytes) & values & components & points/cells \\\midrule
Points & 0 & 4 & 15408 & 8 & 1926 & 3 & 642\\
Connectivity & 15412 & 15416 & 15360 & 4 & 3840 & 3 & 1280\\
Offsets & 30776 & 30780 & 5120 & 4 & 1280 & 1 & 1280\\
Types & 35900 & 35904 & 1280 & 1 & 1280 & 1 & 1280\\
Points s & 37184 & 37188 & 5136 & 8 & 642 & 1 & 642\\
Points v & 42324 & 42328 & 15408 & 8 & 1926 & 3 & 642\\
Points n & 57736 & 57740 & 15408 & 8 & 1926 & 3 & 642\\
Density & 73148 & 73152 & 5136 & 8 & 642 & 1 & 642\\\bottomrule
\end{tabular}
\label{tab:offset}
\end{table*}
\bigbreak
In Tab. \ref{tab:offset}, the header offset of the first header is always $0$. As the unencoded header is always an \lstinline[language=C]{int32} of size $4\,{\rm bytes}$, the first data offset is $4$. For a data array with $1926$ values and data type size $8\,{\rm bytes}$ (e.g. \lstinline[language=C]{float64}), the data size in \lstinline[language=C]{bytes} amounts to $1926\cdot8\,{\rm bytes}=15408\,{\rm bytes}$. The number of values is the product of the number of components and the number of points, i.e. $3\cdot642=1926$. The next header offset is evaluated by adding the data size to the data offset, i.e. $4+15408=15412$. The remaining offset values are evaluated accordingly.
\subsection{Base 64 encoding}
\label{subsec:b64encoding}
%\begin{figure}
%\begin{tikzpicture}
% \draw (0,0) node[anchor=north] {float* array}
% \draw (2,0) node[anchor=north] {float}
% \draw (4,0) node[anchor=north] {float byte}
%\node [rectangle split,rectangle split parts=36, rectangle split vertical,draw ]
%at (0,0);
%\node [rectangle split,rectangle split parts=32, rectangle split vertical,draw ]
%at (0,0);
%\node [rectangle split,rectangle split parts=32, rectangle split vertical,draw ]
%at (2,0);
%\node [rectangle split,rectangle split parts=8, rectangle split vertical,draw ]
%at (2,0);
%\node [rectangle split,rectangle split parts=8, rectangle split vertical,draw ]
%at (2,-1);
%\node [rectangle split,rectangle split parts=8, rectangle split vertical,draw ]
%at (2,-2);
%\node [rectangle split,rectangle split parts=8, rectangle split vertical,draw ]
%at (2,-3);
%\end{tikzpicture}
%\caption{Data segmentation of a \lstinline[language=C]{float*} array. Each rectangle
%represents one bit in memory.}
%\label{fig:segmentation}
%\end{figure}
The output data consists of one- and two-dimensional \lstinline[language=C]{int} and \lstinline[language=C]{float} arrays containing nodal properties, elemental properties and flags. The computer stores the data in binary form, according to the byte order (endianness) of the operating system at hand.
\bigbreak
\lstinline[language=C]{float} numbers are usually stored in \texttt{IEEE-754} format \cite{Asp14}. Since the numbers are only handled internally, the exact manner of storage is theoretically not significant as long as the position (i.e. the significance) of the bits of a binary-stored \lstinline[language=C]{float} is not violated.
\bigbreak
As the base64 encoding function takes in a contiguous \lstinline[language=C]{CHR*} array, the data needs to be segmented into bytes. For the \lstinline[language=C]{INS} data header, this simply involves conversion from a 4-byte \lstinline[language=C]{int32} to 4 \lstinline[language=C]{CHR} bytes. The segmentation process for arrays is illustrated in Fig. \ref{fig:b64}.
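\bigbreak
Before turning to the array case, List. \ref{lst:headerbytes} gives a minimal sketch of this header conversion; the variable names are illustrative only and are not taken from \lstinline[language=C]{Yod.c}, and \lstinline[language=C]{string.h} is assumed to be included.
\begin{lstlisting}[language=C, caption=Illustrative conversion of the int32 data header to bytes, label=lst:headerbytes]
INS header;
CHR header_bytes[sizeof(INS)];
header = sizeof(DBL) * ydn->nnopo * ydn->nnodim; //data size in bytes
memcpy(header_bytes, &header, sizeof(INS)); //4-byte int32 to 4 CHR bytes
//header_bytes is encoded to b64 separately, before the data itself
\end{lstlisting}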
The array is segmented into single bytes, rearranged in groups of 3 bytes and again rearranged in groups of 6 bits. The 6-bit groups are encoded to characters and compose the encoded string.
\bigbreak
As an example, the implementation approach is discussed by use of the 2-dimensional contact force array. A complete implementation can be found in file \lstinline[language=C]{Yod.c}. In a first step, 2-dimensional arrays are reduced to contiguous one-dimensional arrays (List. \ref{lst:2d21d}).
\begin{lstlisting}[language=C, caption=Converting 2-dimensional array to 1-dimensional array, label=lst:2d21d]
DBL *cf; //contact force
INS i, j, k;
INS header;
header = sizeof(DBL) * ydn->nnopo * ydn->nnodim;
cf = malloc(header);
k = 0;
for (i = 0; i != ydn->nnopo; i++){
  for (j = 0; j != ydn->nnodim; j++){
    cf[k] = ydn->d2nfcon[j][i];
    k++;}}
\end{lstlisting}
In List. \ref{lst:2d21d}, \lstinline[language=C]{ydn->nnopo} denotes, in the class "\textbf{Y} \textbf{d}atabase of \textbf{n}odes", the current \textbf{n}umber of \textbf{nodal} \textbf{po}ints. \lstinline[language=C]{ydn->nnodim} denotes the current \textbf{n}umber of \textbf{no}dal \textbf{dim}ensions (2) and \lstinline[language=C]{ydn->d2nfcon} denotes the \textbf{d}ouble \textbf{2}-dimensional \textbf{n}odal \textbf{f}orce of \textbf{con}tact.
\bigbreak
As a result, the data is now stored in a contiguous one-dimensional array of \lstinline[language=C]{float64} and needs to be segmented into bytes.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{b64.png}
\caption{Subsequent conversion of binary data to bytes, bit sextets, and encoded characters \cite{App}.}
\label{fig:b64}
\end{figure}
A first possible strategy is to apply simple data type casting, see List. \ref{lst:typecasting}.
\begin{lstlisting}[language=C, caption=Segmentation approach via type casting with pointers, label=lst:typecasting]
FLT floats[4] = {1, 2, 3, 4}; //actual array is of variable size
CHR *bytes = (CHR *) floats;
\end{lstlisting}
The \lstinline[language=C]{CHR* bytes} array could then be passed on to the encoding function. Tests showed that this strategy only seems to work if the array to be type cast does not contain any data before type casting. Thus, this approach is not appropriate here.
\bigbreak
A second possible strategy is to declare a \lstinline[language=C]{union}, see List. \ref{lst:union}.
\begin{lstlisting}[language=C, caption=Segmentation approach via a union, label=lst:union]
union INTToCHR {
  INS i;
  CHR c[sizeof(INS)];};
\end{lstlisting}
Here, both \lstinline[language=C]{i} and \lstinline[language=C]{c} refer to the same location in memory. This approach potentially works for a single \lstinline[language=C]{float}. As the \lstinline[language=C]{CHR*} arrays would need to be concatenated dynamically, this approach is (in the author's opinion) not appropriate either. Tests which were carried out using this strategy showed inconsistent outputs, indicating that the data retrieved in this way may not be contiguously stored.
\bigbreak
A third strategy for dividing the data arrays into bytes uses a combination of \lstinline[language=C]{malloc} and \lstinline[language=C]{memcpy}, see List. \ref{lst:mallocandmemcpy}.
\begin{lstlisting}[language=C, caption=Segmentation approach via memcpy, label=lst:mallocandmemcpy]
CHR *bytes;
bytes = malloc(header);
memcpy(&bytes[0], cf, header);
\end{lstlisting}
Testing confirmed that this method is successful.
The \lstinline[language=C]{bytes} array now contains the floats in contiguous order in binary form, stored as bytes. \bigbreak The prepared data is passed to the base 64 encoding function \lstinline[language=C]{b64enc} in file \lstinline[language=C]{Yod.c}. The function takes some \lstinline[language=C]{CHR*} array \lstinline[language=C]{src} of length \lstinline[language=C]{len} as input, encodes it byte by byte and outputs character by character into the \texttt{VTK} output file \lstinline[language=C]{fout}. \bigbreak The core of the algorithm (List. \ref{lst:b64alg}) was adapted from Malinen \cite{Mal05}.
\begin{lstlisting}[language=C, caption=b64 algorithm, label=lst:b64alg]
while (end - in > 2){
	putc(enc[in[0] >> 2], fout);
	putc(enc[((in[0] & 0x03) << 4) | (in[1] >> 4)], fout);
	putc(enc[((in[1] & 0x0f) << 2) | (in[2] >> 6)], fout);
	putc(enc[in[2] & 0x3f], fout);
	in += 3;}

if (end - in){
	putc(enc[in[0] >> 2], fout);
	if (end - in == 1){
		putc(enc[(in[0] & 0x03) << 4], fout);
		putc('=', fout);}
	else{
		putc(enc[((in[0] & 0x03) << 4) | (in[1] >> 4)], fout);
		putc(enc[(in[1] & 0x0f) << 2], fout);}
	putc('=', fout);}
return;}
\end{lstlisting}
In List. \ref{lst:b64alg}, \lstinline[language=C]{in} and \lstinline[language=C]{end} are \lstinline[language=C]{CHR*} pointers which point to the beginning and end of the input array. Converting to base 64 requires rearranging the input in chunks of 6 bits. Each of these 6-bit chunks produces a value $\varv < 64 = 0b1000000$ and can be converted to one of 64 unique characters from the encoding array
\begin{lstlisting}[language=C, frame=none, numbers=none,breaklines=true]
CHR* enc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
           "abcdefghijklmnopqrstuvwxyz0123456789+/";
\end{lstlisting}
\bigbreak For every $3$ bytes (or $24$ bits), the algorithm produces $4$ groups of $6$ bits, i.e. $4$ characters. The bits are extracted successively via bit shifting. As mentioned before, the position or significance of the bits in the byte must not be violated. In other words, the value of a bit must not get lost through bit shifting. The algorithm is now explained in detail: \bigbreak In line $2$, the first $6$ bits of the current character \lstinline[language=C]{in[0]} are extracted by bit shifting to produce the first $6$-bit chunk. The value of this chunk is used as the index of the corresponding character in the encoding array \lstinline[language=C]{enc}. \bigbreak The last two remaining bits of the current input character \lstinline[language=C]{in[0]} are evaluated and bit shifted in line $3$ to account for the first $2$ significant bits of the next $6$-bit chunk. The $4$ missing bits to complete the $6$ bits are extracted from the first $4$ bits of the second character \lstinline[language=C]{in[1]}, again by bit shifting. \bigbreak In line $4$, the $4$ remaining bits from the second input character \lstinline[language=C]{in[1]} make up the first four bits of the next $6$-bit chunk. The missing $2$ bits of the $6$-bit chunk are extracted from the third input character \lstinline[language=C]{in[2]}. This leaves $6$ bits of the third input character remaining, making up another $6$-bit chunk, as implemented in line $5$. \bigbreak If the input array length \lstinline[language=C]{len} = \lstinline[language=C]{end} - \lstinline[language=C]{in} is not divisible by $3$, padding characters are required. The amount of padding depends on the number of remaining input bytes needed to complete a group of three input bytes.
If $\mathrm{len}\,\%\,3==1$, two padding characters '==' are appended to the encoded string. If $\mathrm{len}\,\%\,3==2$, one padding character '=' is required. The padding algorithm is given in List. \ref{lst:b64alg}, lines 8-17. \bigbreak Writing the output characters to a buffer first, instead of calling \lstinline[language=C]{putc} for each character, reduces the number of calls into the \texttt{I/O} routines and may save computational time. This alternative was discarded, as it did not provide any significant advantage in terms of computational cost in this minimal example.
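A buffered variant could look roughly as follows (a sketch only, not part of \lstinline[language=C]{Yod.c}; it reuses \lstinline[language=C]{in}, \lstinline[language=C]{end}, \lstinline[language=C]{enc} and \lstinline[language=C]{fout} from List. \ref{lst:b64alg}, omits the padding branch and uses an arbitrarily chosen buffer size):
\begin{lstlisting}[language=C, frame=none, numbers=none]
CHR buf[4096]; //collect encoded characters before writing
INS n = 0;
while (end - in > 2){
	buf[n++] = enc[in[0] >> 2];
	buf[n++] = enc[((in[0] & 0x03) << 4) | (in[1] >> 4)];
	buf[n++] = enc[((in[1] & 0x0f) << 2) | (in[2] >> 6)];
	buf[n++] = enc[in[2] & 0x3f];
	in += 3;
	if (n > 4092){ //flush before the buffer can overflow
		fwrite(buf, 1, n, fout);
		n = 0;}}
fwrite(buf, 1, n, fout); //flush the remainder
\end{lstlisting}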
{ "alphanum_fraction": 0.7565168115, "avg_line_length": 79.0149253731, "ext": "tex", "hexsha": "a40835c0027269d921784839f9bb92e144be2906", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-11-01T03:36:35.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-24T21:47:45.000Z", "max_forks_repo_head_hexsha": "03f29628b762a55e98b96fcedaab754cedcc092a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "msc-acse/acse-9-independent-research-project-mt5918", "max_forks_repo_path": "Documentation/finalreport/sections/4compapproach.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "03f29628b762a55e98b96fcedaab754cedcc092a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "msc-acse/acse-9-independent-research-project-mt5918", "max_issues_repo_path": "Documentation/finalreport/sections/4compapproach.tex", "max_line_length": 688, "max_stars_count": 2, "max_stars_repo_head_hexsha": "03f29628b762a55e98b96fcedaab754cedcc092a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "msc-acse/acse-9-independent-research-project-mt5918", "max_stars_repo_path": "Documentation/finalreport/sections/4compapproach.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-01T04:39:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-01T04:29:17.000Z", "num_tokens": 5732, "size": 21176 }
\chapter{User documentation} This part of the documentation is focused on the end user. We introduce installation details and a manual for program usage. \section{Installation guide} This section documents the process from downloading to running the program. \subsection{Dependencies} The following packages are required to compile the application. We also provide the versions of the packages used to create and test our implementation.
\begin{center}
\begin{tabular}{l l}
package & version \\
\hline
Python & 3.3 \\
NumPy & x.x \\
OpenCV & x.x
\end{tabular}
\end{center}
In case OpenCV contrib cannot be obtained, the program can be used with OpenCV (version x.x). As a result, only the detection-based trackers will be available. \subsection{Hardware requirements} The software was primarily tested on a system with an Intel(R) Core(TM) i5-7300HQ CPU (2.50 GHz, 2496 MHz, 4 cores) and 16 GB RAM, running Microsoft Windows 10 Enterprise. The minimal requirements are lower, but the available computational power affects how frequently localization results are produced. Two cameras are also needed. We tested on .... and .... . A laptop camera may be used. The requirements for the cameras are a resolution of 640x320 px and 20 FPS. \subsection{Downloading the application} The application can be downloaded from \url{https://github.com/JankaSvK/thesis}. \section{Usage guide} \subsection{Running the application} The folder \verb+application+ contains the entry point of our application, \verb+Main.py+. Different options may be passed to the program (see the list below). If no option is passed, the program runs on the first two available cameras. First, calibration is done for each camera, followed by stereo calibration. \verb+KCF+ is used as the tracker.
\begin{code}
Usage: Main.py [options]

Options:
  -h, --help            show this help message and exit
  --calibration\_results1=CALIB1
                        Calibration results for the first camera
  --calibration\_results2=CALIB2
                        Calibration results for the second camera
  --stereo\_calibration\_results=STEREO
                        Stereo calibration results
  --video\_recording1=VIDEO1
                        Video recording for the first camera
  --video\_recording2=VIDEO2
                        Video recording for the second camera
  --tracker=TRACKER     The algorithm used for tracking
\end{code}
% just copy this from the output of -h
\subsection{Notes for options} Video files in the formats \verb+TODO+ are accepted. As \verb+TRACKER+, the name of any implemented tracker may be used. The available names are \verb+KCF, SIMPLEBACKGROUND, PATTERNMATCHING, HSV, TLD, ...+. For the calibration results, JSON files in this format are used: Calibration results: Stereo calibration:
\begin{code}
\end{code}
\subsection{Reusing calibration results} Calibration results from previous runs may be reused by passing the corresponding option for the specific camera. After a successful calibration, the results are automatically saved in this structure:
\begin{code}
calib\_results/
  - 1/
  - 2/
  - stereo\_calib\_results/
\end{code}
The file for a specific calibration result is named after the time when the calibration successfully ended, in this manner: {year}-{month}-{day}-at-{hour}-{minute}.json. \subsection{Inspecting localization data} After the program is closed properly, the localization data are automatically saved in \verb+localization\_data/+. The same file naming convention is used as for the calibration results. A sample of the localization data explained: TODO
{ "alphanum_fraction": 0.7516661837, "avg_line_length": 34.51, "ext": "tex", "hexsha": "5291d7af5f819f7d5979d0ef58e3f642f77b12b2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c440ab8242b058f580fdf9d5a1d00708a1696561", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JankaSvK/thesis", "max_forks_repo_path": "text/sk/user_documentation.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "c440ab8242b058f580fdf9d5a1d00708a1696561", "max_issues_repo_issues_event_max_datetime": "2018-05-11T23:25:07.000Z", "max_issues_repo_issues_event_min_datetime": "2018-04-24T18:30:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JankaSvK/thesis", "max_issues_repo_path": "text/sk/user_documentation.tex", "max_line_length": 178, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c440ab8242b058f580fdf9d5a1d00708a1696561", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JankaSvK/thesis", "max_stars_repo_path": "text/sk/user_documentation.tex", "max_stars_repo_stars_event_max_datetime": "2018-11-29T14:13:47.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-29T14:13:47.000Z", "num_tokens": 813, "size": 3451 }
\subsection{Bundle Protocol Agent Interface} The bundle protocol agent interface will use the current DTN2 application interface. We seek feedback on any additional features that are required for this interface. In addition, MITRE is developing an XML-based interface that conforms to the ICCP. The interface being developed is expected to support the same primitives as are found in the current RPC-based interface in DTN2. We expect the XML-based BPA interface to be included in a future version of this document.
{ "alphanum_fraction": 0.8049242424, "avg_line_length": 40.6153846154, "ext": "tex", "hexsha": "b30722e711e52b9d3b6761000c1421572ce94991", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-06-28T20:41:24.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-23T11:07:39.000Z", "max_forks_repo_head_hexsha": "1c12a9dea32c5cbae8c46db105012a2031f4161e", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "delay-tolerant-networking/DTN2", "max_forks_repo_path": "doc/plugin-architecture/bpa-interface.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1c12a9dea32c5cbae8c46db105012a2031f4161e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "delay-tolerant-networking/DTN2", "max_issues_repo_path": "doc/plugin-architecture/bpa-interface.tex", "max_line_length": 74, "max_stars_count": 14, "max_stars_repo_head_hexsha": "1c12a9dea32c5cbae8c46db105012a2031f4161e", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "delay-tolerant-networking/DTN2", "max_stars_repo_path": "doc/plugin-architecture/bpa-interface.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-28T20:41:17.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-27T19:28:23.000Z", "num_tokens": 107, "size": 528 }
\section{Introduction} \label{sec:grammars-and-metamodels:Introduction} Many popular \Eclipse-based modeling formalisms focus on notations that are either mainly textual or mainly graphical. Although tools exist that transform models written in a textual language to representations of those models that can be manipulated and depicted using graphical notations, the construction and manipulation of models written using a combination of both languages is not well facilitated. The popular modeling language \UML offers graphical diagrams for the construction of models. Research has shown, however, that graphical languages are not inherently superior to textual languages \cite{looking-seeing} and that both types of languages have their benefits. Therefore, we investigate the integration of textual and graphical languages to be able to exploit the benefits of both types of languages. In particular, this integration facilitates the creation of large \UML models and addresses research question~\RQ{1}. \RQOne One of the problems that arise when using two or more languages to construct one model is that parts of the model written in one language can refer to elements contained in parts written in another language. Transforming a model written in multiple languages to a model written in one language involves introducing correct references between various parts of the model. Existing tools are aimed at converting textual models conforming to grammars into models conforming to metamodels and vice versa \cite{TCS, Efftinge2006xText}. These tools can not transform models that consist of parts that conform to grammars as well as parts that conform to metamodels. We use a textual alternative for activity diagrams, a textual surface language, as a case study and have implemented two versions of this language. One alternative uses tools and techniques related to grammars, and the other uses tools and techniques related to models and metamodels. The approach related to grammars transforms \UML models containing fragments of behavior modeled using our surface language to plain \UML models by rewriting the XMI representation of the model provided as input. We used the \ASFSDFME~\cite{Brand:2001:ASF} to implement this approach. The approach related to models and metamodels extracts the fragments of surface language, converts them to metamodel based equivalents, transforms these equivalents to Activities, and uses these to replace the fragments in the original model. We used the \OAW platform~\cite{Haase2007OAW, Voelter2006OAW} to implement this approach. The remainder of this chapter is organized as follows: Section~\ref{sec:grammars-and-metamodels:Preliminaries} introduces a number of relevant concepts. A specification of the surface language we implemented, a description of its embedding in the \UML, and the transformation from surface language to Activities is given in Section~\ref{sec:grammars-and-metamodels:SL-specification}. The approach based on grammars is described in Section~\ref{sec:grammars-and-metamodels:Grammarware}, and the approach based on models and metamodels is described in Section~\ref{sec:grammars-and-metamodels:Modelware}. A number of other applications involving the integration of textual and graphical languages, and the transformation of models constructed using multiple languages are discussed in Section~\ref{sec:grammars-and-metamodels:Other-Applications-of}. 
Section~\ref{sec:grammars-and-metamodels:Case-Study} provides a short description of a case study concerning the application of our surface language. Section~\ref{sec:grammars-and-metamodels:Related-Work} discusses how our work relates to earlier work. We draw conclusions and discuss future work in Section~\ref{sec:grammars-and-metamodels:Conclusions-and-Future}.
{ "alphanum_fraction": 0.8278041074, "avg_line_length": 111.7058823529, "ext": "tex", "hexsha": "fd9bf60722bbef98654fb2e79a07595e3e0a4795", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8cabcf160a6f06e12b5ced92bb5cec06983e5bb7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ljpengelen/latex-phd-thesis", "max_forks_repo_path": "grammars-and-metamodels/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8cabcf160a6f06e12b5ced92bb5cec06983e5bb7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ljpengelen/latex-phd-thesis", "max_issues_repo_path": "grammars-and-metamodels/introduction.tex", "max_line_length": 286, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8cabcf160a6f06e12b5ced92bb5cec06983e5bb7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ljpengelen/latex-phd-thesis", "max_stars_repo_path": "grammars-and-metamodels/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-18T21:53:57.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-18T21:53:57.000Z", "num_tokens": 787, "size": 3798 }
\chapter{Bevelle} \begin{equip} \begin{itemize} \tidusf Equip Sonic Steel \end{itemize} \end{equip} \begin{enumerate} \item Use a Mega-Potion \item \textit{With Sleeping Powder:} \end{enumerate} \begin{battle}{Guard Fights - Sleeping Powder} \begin{itemize} \item \textit{Fights 1 and 3:} \begin{itemize} \tidusf Attack \item Defend or use Distillers \end{itemize} \item \textit{Fights 2 and 4:} \begin{itemize} \tidusf Attack \rikkuf Sleeping Powder \kimahrif Silence Grenade/Smoke Bomb/Distiller \end{itemize} \item \textit{Fight 5:} \begin{itemize} \tidusf Haste \rikku \rikkuf Throw Items x2 \tidusf Attack \end{itemize} \end{itemize} \end{battle} \begin{enumerate}[resume] \item \textit{Without Sleeping Powder:} \begin{itemize} \item \formation{\tidus}{\rikku}{\auron} \textit{unless \lulu\ doesn't have at least 35 levels, then } \formation{\tidus}{\rikku}{\lulu} \end{itemize} \end{enumerate} \vfill \begin{battle}{Guard Fights - No Sleeping Powder} \begin{itemize} \item \textit{Fights 1 and 3:} \begin{itemize} \tidusf Attack \item Defend or use Distillers \end{itemize} \item \textit{Fights 2 and 4:} \begin{itemize} \switch{\tidus}{\kimahri} \kimahrif Silence Grenade/Smoke Bomb \switch{\rikku}{\tidus} \tidusf Attack \kimahrif Repeat \item If Underdamaged anyone, use another Throwable \end{itemize} \item After the second fight, \formation{\tidus}{\rikku}{\lulu} \item \textit{Fight 5:} \begin{itemize} \switch{\tidus}{\rikku} \rikkuf Silence Grenade/Smoke Bomb x2 \switch{\kimahri}{\tidus} \tidusf Attack \end{itemize} \end{itemize} \end{battle} \begin{enumerate}[resume] \item \sd, \fmv[1:30], \sd\ on \yuna\ dialogue. \skippablefmv[30], \sd. Use lift, \sd. \end{enumerate} \vfill \begin{trial} \begin{itemize} \item For all of these you can Hold X instead of pressing it when you get onto the directional pad \item Push the pedestal in \item Press X \item Go left at the second junction \item Take sphere, push pedestal back into the junction \item At the third junction, go back \item Go left at the second junction \item Place sphere into wall, push pedestal back \item Go left at the first junction \item Go left \item At the third junction and go right \item Take glyph sphere from wall, push pedestal back onto the road \item At the fourth junction go right \item Place glyph sphere into pedestal \item Take Bevelle sphere from pedestal \item Place Bevelle sphere into the wall \item Take the glyph sphere \item Place into the next wall \item Take Destruction sphere from the new wall %where to put it \item Take Bevelle sphere from old wall \item Push pedestal back and fall off the edge \item Go straight \item At the third junction go right \item Place destruction sphere into wall \item Push pedestal back and fall off the edge \item Go straight \item At the second junction go right \item Push pedestal \item Go up the stairs, open the chest \end{itemize} \end{trial} \begin{enumerate}[resume] \item \sd, name \bahamut, don't save, \sd \end{enumerate}
{ "alphanum_fraction": 0.6701337296, "avg_line_length": 31.7452830189, "ext": "tex", "hexsha": "0c37d71c133a3058d3705f3814d08186381786cd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "849dba76a706d0d894886dfe44ecd4fdf6d13e5b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Hoishin/Final-Fantasy-Speedruns", "max_forks_repo_path": "Final Fantasy X/Chapters_Blitz_Loss/bevelle.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "849dba76a706d0d894886dfe44ecd4fdf6d13e5b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Hoishin/Final-Fantasy-Speedruns", "max_issues_repo_path": "Final Fantasy X/Chapters_Blitz_Loss/bevelle.tex", "max_line_length": 144, "max_stars_count": null, "max_stars_repo_head_hexsha": "849dba76a706d0d894886dfe44ecd4fdf6d13e5b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Hoishin/Final-Fantasy-Speedruns", "max_stars_repo_path": "Final Fantasy X/Chapters_Blitz_Loss/bevelle.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1071, "size": 3365 }
\documentclass{article} % For LaTeX2e \usepackage{nips13submit_e,times} \usepackage[style=numeric]{biblatex} \usepackage{hyperref} \usepackage{url} \usepackage{graphicx} \graphicspath{{images/}} %\documentstyle[nips13submit_09,times,art10]{article} % For LaTeX 2.09 \title{Formatting Instructions for NIPS 2013} \author{ Viraj Mehta Department of Mathematics\\ Stanford University\\ \texttt{[email protected]} \\ \And Meena Chetty \\ Department of Computer Science \\ Stanford University \\ \texttt{[email protected]} \\ } % The \author macro works with any number of authors. There are two commands % used to separate the names and addresses of multiple authors: \And and \AND. % % Using \And between authors leaves it to \LaTeX{} to determine where to break % the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{} % puts 3 of 4 authors names on the first line, and the last on the second % line, try using \AND instead of \And before the third author name. \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \addbibresource{citations.bib} \nipsfinalcopy % Uncomment for camera-ready version \begin{document} \nocite{*} \maketitle \begin{abstract} In this paper, we develop a reverse dictionary model that learns formal and colloquial English word definitions. Thus, given a meaning or description, our model will output predictions for the target word. We collect data from the Merriam-Webster and Oxford English Dictionaries, as well as 16 years of data from the New York Times Crossword database and the web-scraped entries at \emph{crosswordsolver.org}. Using this data, we develop a two-fold model: it can be used on formal descriptions and meanings, as well as on puzzle-oriented ones. We tune our predictions by measuring accuracy based on the top-n predictions. For the puzzle solver application of our model, we utilize the contextual information that is inherent to puzzles such as crosswords, and we provide information such as word length and random characters within the target word to improve our prediction accuracy. Through this process, we can predict words from formal meanings with an accuracy of 0.712 and determine the solutions represented by crossword clues with an accuracy of 0.631. \end{abstract} \section{Introduction} Our motivation for this project stems from the idea that people often know what they want to say, but don’t always have to words to do so. We seek to address this problem by building a reverse dictionary. Essentially, users should be able to input the general meaning of the word they are looking for, and our reverse dictionary model would output predictions for this word. Such a tool would have applications in writing papers, solving crossword puzzles, and many other daily uses of the English language. \subsection{Goal} We will build multiple models using techniques such as BOW and LSTM that will be trained on word definitions. We will extract these definitions from dictionaries for more formal definitions, and from crossword clues for more colloquial ones. We will experiment with various models, word vector sizes, and hyperparameters to determine which scheme works best. We will measure accuracy based on the percentage of our model’s predictions that match the true word corresponding to the input definition. We will also determine a second metric for top-n accuracy, where we will measure the percentage of our model’s top n predictions that contain the true word corresponding to the input definition. 
From there, we will identify how to refine our model and test in which contexts it performs best. \subsection{Applications} Our first step in solving this problem is to understand the specific use cases that we want to target. We have 2 primary use cases: \begin{itemize} \item Traditional reverse dictionary: The input for this use case is a comprehensive definition or meaning of the target word. This application is better suited for writing essays or formal literature. \item Crossword/word puzzle solver: The input for this use case is more cryptic or colloquial than that of a reverse dictionary. This application is better suited for solving word puzzles, such as crosswords. The input to our model for this application would differ slightly in that the crossword solver would also be provided lengths of target words and select characters within the word, since crosswords and word puzzles inherently provide this information. \end{itemize} \section{Background and Related Work} Previous research has been conducted on how to build crossword solvers and reverse dictionaries independently with NLP, but the two problems are not often combined and do not always use deep learning. To build the crossword-solver aspect of this problem, Radev et al formed a constraint satisfaction problem using positioning of characters in words within a crossword puzzle as well as all other clues available to determine word predictions. Thus, using contextual information from an entire crossword along with word definitions and previously solved crossword puzzles, they determined confidence scores to generate predictions for solutions to entire crosswords. Meanwhile, Hill et al addressed the reverse dictionary component of this problem using deep learning, which is more relevant to the problem we are trying to solve. They use models similar to ours, such as Bag of Words (BOW) and Long Short-Term Memory Recurrent Neural Networks (LSTM RNN’s), but train their models exclusively on dictionary data for the purpose of reversing formal word definitions. They achieved an accuracy of 0.89 through these methods. Thus, a new area of research would be combining these two problems into one such that we can generate a single model that can accommodate both problems: reversing both formal and cryptic or colloquial definitions. \section{Data and Annotations} \subsection{Datasets} Training our model requires extensive datasets of word definitions. Finding such datasets is not difficult due to the existence of dictionaries. However, dictionaries contain exclusively formal definitions of words. Users should not be expected to input definitions with such a high degree of formality in order to obtain the word they are looking for, because that would not be a practical use case of our model. In order to ensure that our data includes less traditional word definitions, we scraped large datasets of crossword clues as well. While crossword clues tend to be more cryptic than traditional meanings and definitions, they incorporate more colloquial speech than dictionary definitions might. Thus, our comprehensive dataset includes all word definition and word pairs from the Oxford English Dictionary and Merriam Webster Dictionary, as well as 432,205 crossword clue and solution pairs from the New York Times database of crosswords from the past 16 years and \emph{crosswordsolver.org}. Our dictionary definitions are standard dictionary definitions pulled verbatim from accredited dictionaries. 
Below are a few examples of what crossword clues might look like: \textbf{Definition}: places for mending; \textbf{True word}: sanitaria. \textbf{Definition}: doesn’t talk smoothly; \textbf{True word}: rasps \textbf{Definition}: noodle-and-vegetable soup; \textbf{True word}: ramen \subsection{Application-based Data Usage} Our train dataset for our model includes all word definitions from the Oxford English Dictionary, and a randomized 70\% of the crossword clues from the New York Times database. Our test and dev sets vary between the two applications of our model. For the reverse dictionary application of our model, our test set consists of all word definitions from the Merriam Webster Dictionary (a different dictionary than the one contained in the train set). It did not make sense to combine all definitions and randomly split them into train, test, and dev datasets, because some definitions would be repeated while some would not exist at all in these datasets, as all English dictionaries contain most of the same words with similar definitions. [dev set is the same as train set…?] For the crossword solver application of our model, our dev set consists of 20\% of the crossword clues from the New York Times database, and our test set consists of the remaining 10\% of these crossword clues. For all our data, we used the GloVe pre-trained word vectors to represent our words. \section{Approach} As previously mentioned, the limited research surrounding deep learning for reverse dictionary applications focuses on LSTM models. Thus, the process that we follow to find an optimal model begins with a vanilla LSTM and experiments with stacked and bidirectional LSTMs as well. Our variable parameters in each model are word vector dimensions and number of layers for LSTM models. We also added a regularization parameter to account for overfitting. \subsection{Human Evaluation} In the course of training and testing our various machine-learning models, we wanted insight into the actual difficulty of solving these clues as well as the type of problem-solving humans use in that process. We therefore created a small dataset randomly sampled from the test set of crossword clues for humans to attempt. We’ll cover the results later. \subsection{Model Baseline} To establish a baseline for the kind of results we might see when solving this problem, we first implement a Bag of Words (BOW) model, where we start with the pretrained embeddings of every word in the clue, add them together, and use a fully-connected layer also initialized to the transpose of these embeddings to predict logit word probabilities. Because BOW is primarily used to represent features of words and does not use any deep learning, it establishes a solid baseline that our deep learning model should be able to beat. We begin testing this model with 50-dimensional word vectors. An example of our top 10 predictions for two similar words are as follows: \textbf{True word}: capital punishment \textbf{Predictions}: high, lash, ease, capital, jail, penitentiary, shush, turmoil, inner, upper \textbf{True word}: death penalty \textbf{Predictions}: death, sin, remission, ill, grievous, punishment, mortal, guilty, conviction, child While these true words have very similar meanings, the predictions vary drastically. Given a prediction in the first set, one would not necessarily associate a prediction in the second set. 
Thus, we conclude that overfitting may be occurring, since the slight difference in similar definitions is causing the model to vary drastically. To account for this, we add regularization in the form of dropout. As we saw in class, 0.5 was typically a successful dropout rate parameter, so we start with this value. Additionally, we increase the dimensionality of our word vectors to 300 dimensions so that more information could be stored in each word embedding. The greater number of dimensions would ideally represent words more accurately by establishing more difference between words that were truly different rather than over-emphasizing difference between words that were relatively similar. \subsection{LSTM Models} After implementing and studying results from the BOW model, which are discussed in the next section, we transition to building our LSTM models which we ideally want to result in stronger predictions. Our variables in these models are the dimensions of our word vectors, the number of LSTM stacked layers, and our regularization parameters to account for overfitting. \subsubsection{Vanilla LSTM} For our vanilla LSTM model, we implement the most basic version of an LSTM in TensorFlow. Thus, it has only one layer, and uses 300-dimensional word vectors and a regularization parameter of 0.5 based on our results from the BOW model. For a sample definition, “stretch out on a sofa,” for which the correct word is “loll,”, this model generates the following top 10 predictions: rest, couch, nap, chaise, bed, lounge, loll, dream, sit, and eke. While the top 10 predictions are capturing the target word, we can see that our model is generating words such as “couch,” “chaise,” and “bed,” which are similar to the noun “sofa,” but are not similar to the entire action of “stretch out on a sofa” as the other words are. To add more depth to our model such that it can account better for parts of speech, we will build a model with more layers, as discussed in the next section. \subsubsection{Stacked LSTM} \begin{figure} \centering \includegraphics[width=100mm]{stacked_network.png} \caption{Fig 1: Model of our stacked LSTM model} \end{figure} This model is identical to the final version of our vanilla LSTM aside from the number of stacked layers used. Because stacked LSTM’s increase the number of layers that an LSTM has to learn information about its input, our hope in increasing the number of layers is that predicted words will account for parts of speech and tense in ways that our vanilla LSTM did not. The predictions for “stretch out on a sofa” for this model are as follows: lounge, loll, couch, rest, nap, sleep, dream, sit, relax, unwind. We can see that these predictions are better than the vanilla LSTM in that they caption more actions as opposed to nouns similar to sofa. Rather than “chaise” and “bed,” we instead see “relax” and “unwind,” which are more similar in meaning to our definition. Additionally, we can see that our loss converges relatively steadily for this model in the figure below. \begin{figure} \centering \includegraphics[width=50mm]{loss.png} \caption{Fig 2: Loss of stacked LSTM model in tensorflow} \end{figure} \subsubsection{Bidirectional Stacked LSTM} To optimize our model further, we implement a bidirectional stacked LSTM so that our model would have more flexible input data. Since our problem is not chronologically limited and we have access to all words and definitions when building the model, bidirectionality will increase the amount of input information for each time step. 
Increasing the amount of data available for each prediction also helps introduce more context clues, which may help with distinguishing parts of speech and tense. The predictions for “stretch out on a sofa” for this model are as follows: rest, lounge, loll, sofa, relax, splay, sit, lie, nap, sleep. As we can see, this model performs similarly to the simply stacked LSTM, outputting only one word that more closely identifies with the meaning of “sofa” than with the meaning of “stretch out on a sofa”; that word happens to be “sofa” itself in this case, which is inherently closer to “sofa” than words generated from previous models, such as “couch” or “chaise.” \section{Model Limitations} The majority of our model’s limitations come from our datasets. When a person is using a reverse dictionary, the type of input they provide to our model is unpredictable: it could be a sentence with a blank word, or a colloquial representation of the word in question, or a comprehensive definition or meaning. It is hard to account for this type of unpredictability given our datasets. Dictionary definitions provide a holistic coverage of comprehensive and structured inputs. However, even crossword clues do not offer the best representation for colloquial definitions because crossword clues are more cryptic than colloquial. Colloquial definitions are likely to be the most frequent input to our model, since people do not speak in the same formal format that a dictionary is written. Additionally, our model accounts only for denotation as opposed to connotation. In colloquial word definitions, people often associate meaning with connotation. Thus, our model is limited because it is trained on a relatively specific input dataset compared to the types of definitions and meanings that it might encounter in an application context. \section{Experiments and Analysis} \subsection{Human Analysis} We had n=19 human subjects, with an average accuracy of 8.4\%. Our sample of human problem solvers included several Stanford faculty, a Jeopardy contestant, and undergrads from a variety of fields. The best human scorer was at 12\% accuracy, so we feel confident that our models are vastly outperforming even expert humans. Qualitatively, it seems that for the clues that human problem-solvers get correct they usually leverage a more bag-of-words approach than a sequential one to reach the clue. For example, ‘childrens block company’ quite easily maps to Lego irrespective of the permutations of the words. These are the vein of clues that humans do well on. We notice similar trends in our models, with the bag-of-words family only outdone by our most advanced LSTM models. \subsection{Experimental Results} We evaluate our models based on accuracy metrics that we calculate for all four models listed above, on both 50-dimensional and 300-dimensional word vectors. Additionally, evaluations are done separately for crossword clue predictions and dictionary definition predictions. Our first accuracy metric is ratio of correctly predicted words to total number of input word definitions. We measure accuracy on both our dictionary test set and our crossword clues test set to keep the accuracies of the two applications of our models separate, as discussed in Section 3. \begin{figure} \centering \includegraphics[width=50mm]{REAL_topdictdef.png} \caption{Fig. 
3: Accuracy of top prediction for the true word corresponding to the dictionary definition.} \end{figure} \begin{figure} \centering \includegraphics[width=50mm]{REAL_topcrossclue.png} \caption{Fig. 4: Accuracy of top prediction for the true solution corresponding to the crossword clue.} \end{figure} Since these accuracy values are not very high for crossword clues, we then reveal the top 10 predicted words for crossword clues as opposed to simply the top predicted word for each definition, so that we can determine whether our model was in the ballpark for a larger number of words than it appeared. We redefine our accuracy metric to be the ratio of top 10 predicted words that contain the correct target word to the total number of input word definitions. The accuracies are as follows: \begin{figure} \centering \includegraphics[width=50mm]{top10cross.png} \caption{Fig. 5: Accuracy of top 10 predictions for the true solution corresponding to the crosswordclue.} \end{figure} These accuracy values are significantly better than the prior metric. We then see that adjusting our evaluation metrics can refine our model’s predictions better than adjusting our actual model can. For the crossword application of our model, we develop two new accuracy metrics to do so based on the extra information that is inherent to crosswords: Provide the length of the correct word with the word definition, so that we can iterate through the top n word predictions and return the first word of the correct length. Provide a random character and its index from the correct word with the word definition, so that we can iterate through the top n word predictions and return the first word with this character at the given position. This approach results in the following accuracies for our crossword clue test set, where n=10: \begin{figure} \centering \includegraphics[width=50mm]{top10len.png} \caption{Fig. 6: Accuracy of top the prediction for the true solution corresponding to the crossword clue given the true solution length.} \end{figure} \begin{figure} \centering \includegraphics[width=50mm]{top10char.png} \caption{Fig. 7: Accuracy of the top prediction for the true solution corresponding to the crossword clue given a random character in the true solution.} \end{figure} Since many of these accuracies were less than the top 10 general predictions, which was contrary to our hypothesis, we expanded our sample size to n=50 to improve our accuracy: \begin{figure} \centering \includegraphics[width=50mm]{top50len.png} \caption{Fig. 8: Accuracy of the top prediction for the true solution corresponding to the crossword clue given the true solution length.} \end{figure} \begin{figure} \centering \includegraphics[width=50mm]{top50char.png} \caption{Fig. 9: Accuracy of the top prediction for the true solution corresponding to the crossword clue given a random character in the true solution.} \end{figure} The best accuracy we achieved for dictionary definitions (and therefore the more formal reverse dictionary application of our model) was 0.712 with the 300-dimensional stacked LSTM model. The best accuracy we achieved for crossword clues (and therefore for the puzzle solver application of our model) was 0.631 with the 300-dimensional stacked LSTM model using character reveal during evaluation. Evidently, our model works better on dictionary definitions than on crossword clues. This is to be expected given our discussion about our model’s limitations and the fact that it is better trained to handle formal word definitions. 
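For reference, the top-$n$ accuracy used throughout this section can be written compactly. With $N$ test clues $d_i$, true words $w_i$, and $\mathrm{top}_n(d_i)$ denoting the $n$ highest-ranked predictions of a model (this notation is ours, introduced only for this summary), it reads
\[
\mathrm{acc}_n = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\!\left[\, w_i \in \mathrm{top}_n(d_i) \,\right],
\]
and the length- and character-based variants simply restrict $\mathrm{top}_n(d_i)$ to those predictions that are consistent with the revealed solution length or character before taking the first match.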
\section{Conclusion} In designing and then in pursuing this project we were simultaneously pursuing a variety of goals. We succeeded to varying degrees in achieving them with the flexible architecture we pursued. In particular, we are pleased with our performance on the challenging problem of solving crossword clues without additional information, in which we vastly outperformed humans. The main takeaway of this result is that we have found a reasonable neural representation of the language which encodes the knowledge advantage a neural system gets in training. Colloquially, after feeding our network a dictionary, we can now be convinced of the digestive process as well. We are also pleased that our postfix modifications to include additional information provide modest increases in performance, to the point that it might be reasonable to expect our system to bootstrap itself into solving full crosswords. While Hill et al achieved an accuracy of 0.89 on their dictionary definition-specific model and we only achieved 0.71 on our model for the reverse dictionary application, our model’s accuracy is likely lower because it includes crossword clues in the training set as well as dictionary definitions in order to accommodate the puzzle solver application of our model. Though our accuracy on dictionary clues is far from perfect, our example results show that the system is typically in the right ballpark on finding the word and might be useful for finding related words. \subsection{Future Directions} We see a few directions for future research in the area. First, it could be instructive to devise methods of combining this model with search- or constraint-based models to attack the task of solving full crosswords. In a model like that, this algorithm would be a method of finding and ranking candidates for a search or CSP module that would address the more global picture of the crossword. It could also be interesting to try and design a differentiable architecture to allow such a system to be trained end-to-end. Another potential direction for future research is in building character-by-character models to predict each successive character of a correct answer. These architectures could potentially include the added crossword information to the character-prediction process. We strongly considered implementing one of these for our current research but ultimately decided that it was not in line with our goal of building the dual-purpose system we currently have. \section*{Acknowledgments} Use unnumbered third level headings for the acknowledgments. All acknowledgments go at the end of the paper. Do not include acknowledgments in the anonymized submission, only in the final paper. \section*{References} \small{ \printbibliography } \end{document}
{ "alphanum_fraction": 0.7973742158, "avg_line_length": 100.7071129707, "ext": "tex", "hexsha": "3dc64cbe2f3a1ed22cd59a1d289b461c36ad2c4d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2622f5b02ed47084370e22f6a17109809aa71d08", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "virajmehta/backwards_dict", "max_forks_repo_path": "report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2622f5b02ed47084370e22f6a17109809aa71d08", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "virajmehta/backwards_dict", "max_issues_repo_path": "report.tex", "max_line_length": 1060, "max_stars_count": null, "max_stars_repo_head_hexsha": "2622f5b02ed47084370e22f6a17109809aa71d08", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "virajmehta/backwards_dict", "max_stars_repo_path": "report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4960, "size": 24069 }
\documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage[usenames,dvipsnames]{xcolor} \usepackage{tikz} \usepackage{graphicx} \usepackage{float} \usepackage{subfig} %\usepackage{subfigure} \usepackage{multirow} \usepackage{caption} \usepackage{subcaption} \usepackage{float} \usepackage{listings} \usepackage{natbib} \usepackage{geometry} \usepackage{array} \usepackage[nottoc]{tocbibind} \geometry{margin=2cm} %set the indentation length to zero \setlength{\parindent}{0pt} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=Black, filecolor=TUMBlue, urlcolor=TUMBlue, citecolor=TUMBlue, } \urlstyle{same} \graphicspath{ {./Figure/} } \rmfamily %set your name here \newcommand*{\getAuthor}{Author} %set your Submission Date here \newcommand*{\getSubDate}{Submission date} \newcommand*{\getSubLoc}{Munich} \definecolor{TUMBlue}{cmyk}{1,0.43,0,0} \newcommand\AtPageUpperRight[1]{\AtPageUpperLeft{% \makebox[\paperwidth][r]{#1}}} \title{TUM-Document-Latex} \author{llqqyyllqqyy } \date{March 2020} \begin{document} \begin{titlepage} \begin{tikzpicture}[remember picture,overlay] \node[anchor=north east,inner sep=20pt] at (current page.north east) {\includegraphics[scale=0.06]{logo}}; \end{tikzpicture} \vspace{50mm} \centering {\Huge\bfseries Title } \vspace{130mm} \large{ \begin{tabular}{l l} \bfseries{Author}: & \getAuthor \\ \bfseries{Supervisor}: & Supervisor name \\ \bfseries{Advisor}: & Advisor\\ \bfseries{Submission Date}: & Submission Date \\ \end{tabular} } \end{titlepage} \input{Acknowledge} \tableofcontents \pagenumbering{Roman}% %\tableofcontents% \clearpage \listoffigures \clearpage \listoftables{} \clearpage \addcontentsline{toc}{section}{Listings} \lstlistoflistings \clearpage \addcontentsline{toc}{section}{Abstract} \section*{Abstract} \input{Abstract.tex} \clearpage \pagenumbering{arabic}% \section{Introduction} \subsection{Subsection} \subsubsection{Subsubsection} \paragraph{paragraph} Cite here \cite{latex} \section{Figure} \input{Figure} \clearpage \section{Table} \input{Table.tex} \clearpage \section{Math} \input{Math} \clearpage \renewcommand\bibname{References} %APA Format Reference \bibliographystyle{apalike} \bibliography{reference.bib} \addcontentsline{toc}{section}{Appendix} \appendix \end{document}
{ "alphanum_fraction": 0.7441275168, "avg_line_length": 19.5409836066, "ext": "tex", "hexsha": "bc62fc29706c0dbdbc331b0835d01deb8da77eac", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "61cacf66197925f93f04df7a8799a6629e948c8d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Stella-Sirius/TUM-Project-Study-Template", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "61cacf66197925f93f04df7a8799a6629e948c8d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Stella-Sirius/TUM-Project-Study-Template", "max_issues_repo_path": "main.tex", "max_line_length": 71, "max_stars_count": null, "max_stars_repo_head_hexsha": "61cacf66197925f93f04df7a8799a6629e948c8d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Stella-Sirius/TUM-Project-Study-Template", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 761, "size": 2384 }
\startcomponent ma-cb-en-metapost \product ma-cb-en \chapter{Graphical extension / \METAPOST} \index[metapost]{\METAPOST} \index{graphical features} The graphical possibilities of \TEX||related macro packages are rather limited. However by using the graphical package \METAPOST\ of John Hobby a complete range of graphical features has become available that may improve the look of your documents. In \CONTEXT\ there is a direct link to \METAPOST\ so users can apply the features of \METAPOST\ directly into their documents. The chapter headers and page numbers of this manual are extended by some graphical elements that are generated by \METAPOST. The usage and features of \METAPOST\ within \CONTEXT\ are described in the extensive \METAFUN\ manual. \stopcomponent
{ "alphanum_fraction": 0.7974193548, "avg_line_length": 29.8076923077, "ext": "tex", "hexsha": "c5c228e2c8dba630539e8456c9198d61feb044e0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "marcpaterno/texmf", "max_forks_repo_path": "contextman/context-beginners/en/ma-cb-en-metapost.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "marcpaterno/texmf", "max_issues_repo_path": "contextman/context-beginners/en/ma-cb-en-metapost.tex", "max_line_length": 59, "max_stars_count": null, "max_stars_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "marcpaterno/texmf", "max_stars_repo_path": "contextman/context-beginners/en/ma-cb-en-metapost.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 185, "size": 775 }
\chapter{Single Mauthner Cell Model - Theory} In this chapter we will explain the theoretical aspects of the neuronal model for a single Mauthner Cell. By 'single' Mauthner cell we only mean that we are considering the mechanisms of the surrounding circuit involving one of the two existing Mauthner cells instead of both. We will start with the description of the full model and continue with two reductions that assume a separation of timescales and thus provide stationary approximations of the model. \section{Full neuronal model} The full neuronal model of a single Mauthner cell consists of a rate-based model for the population of inhibitory interneurons that provide the feed-forward inhibition and a LIF model for the M-cell itself. Both the inhibitory population and the M-cell get their input from a single source. In our case this input will represent the visual information coming from the optic tectum which will be described in more detail in the next chapter. The time evolution of the activity $\rho$ of the inhibitory population is described by the following equation: \begin{equation} \tau _{\rho} \frac{d\rho}{dt} = - (\rho(t) - \rho_{0}) + c_{\rho} I(t) + \eta _{\rho}, \label{eq:inhib} \end{equation} where $\tau _{_\rho}$ is the time constant, $\rho _{0}$ is the resting activity of the population, $c_{\rho}$ is a scaling factor, $I(t)$ is the time dependent input and $\eta _{\rho}$ is a Gaussian noise term. While we assume that the resting activity $\rho_{0}$ is constant during a single trial of an experiment, we sample its value during a single trial from a random distribution that we further specify in the next chapter.\\ For the M-cell we use a LIF model where the time evolution of the membrane potential $V_m$ is described by the following equation: \begin{equation} \tau _m \frac{dV_m}{dt} = - (V(t) - E_{L}) + R_{m} I(t) - \rho (t) + \eta _m, \label{eq:mcell} \end{equation} where $\tau_{m}$ is the membrane time constant, $E_L$ is the resting potential, $R_m$ is the membrane resistance and $\eta_{m}$ is again a Gaussian noise term. The M-cell thus gets the direct visual input $I(t)$ and is inhibited by $\rho(t)$. If the membrane potential $V_m$ crosses a threshold $V_t$ an action potential is artificially produced and the membrane potential is reset to the resting potential $E_L$. Additional to the noise terms in equations \ref{eq:inhib} and \ref{eq:mcell} we will also consider fluctuations of the firing threshold $V_t$: \begin{equation} V_t (t) = V_t + \eta_t(t), \label{eq:thrs} \end{equation} where $\eta_t$ is a Gaussian noise term.\\ The basic parameters of the LIF model, i.e. $E_L$, $R_m$, $\tau_m$ and $V_t$, have been fitted to experimental data in a previous study by \cite{Koyama2016} using recordings from four larval zebrafish at four days post-fertilization(dpf). For the details of the fitting procedure see their methods section.\\ One important property of this dynamic system are the time scales on which the described activity is going on. Since we know that the synapses at the inhibitory interneurons are electric, at least for the auditory input, the time constant, and therefore the relevant time scale, of $\rho$ is in the order of milliseconds. As we will see later on, in the experiments that we want to reproduce the changes in the input over time are on much bigger time scales of at least hundreds of milliseconds. 
This fact motivates the reduction in the next section where we approximate the activity of the inhibitory population by an adiabatic ansatz assuming a separation of time scales. \section{Stationary Approximation of Inhibitory Population}\label{approx inhibition} Here we reduce the model by approximating the activity of the inhibitory population by its stationary solution. This approximation is the more accurate the higher the difference is between the time scale of the dynamics of the inhibitory population and the time scale of the input. If we use $\tau_{\rho}$ as the time scale of the inhibitory population and denote $\tau_{in}$ as the time scale of the input, the approximation becomes equivalent for the limit $\tau_{\rho}/ \tau_{in} \rightarrow 0$. In the model, this means that equation \ref{eq:inhib} becomes: \begin{equation} \hat{\rho} (t) = \rho_{0} + c_{\rho} I(t) + \eta_{\rho}. \label{eq:inhib_approx} \end{equation} Now we can replace $\rho (t)$ in equation \ref{eq:mcell} and get: \begin{equation} \tau _m \frac{dV_m}{dt} = - (V(t) - E_{L}) + I(t)(R_{m} - c_{\rho}) - \rho_{0} - \eta_{\rho} + \eta _m. \label{eq:mcell_approx1} \end{equation} In the resulting LIF model the input is now weighted by the difference between the scaling factor $c_{\rho}$ and the membrane resistance $R_m$. If we ignore the noise terms for a moment and assume that $\rho_{0}=0$, this means that the input can only excite the M-cell and therefore evoke an action potential if $c_{\rho} < R_m$. Increasing $\rho_{0}$ would effectively increase the firing threshold $V_t$. \section{Stationary Approximation of Full Model}\label{approx full model} As a next step we can further approximate the LIF model in equation \ref{eq:mcell_approx1} by its stationary solution: \begin{equation} \hat{V}_m(t) = E_{L} + I(t)(R_{m} - c_{\rho}) - \rho_{0} - \eta_{\rho} + \eta _m. \end{equation} If we set all noise to zero we can derive an expression for the input at which the membrane potential reaches the threshold $V_{t}$: \begin{equation} \hat{V}_m(t) \overset{!}{=} V_t \end{equation} \begin{equation} \Leftrightarrow E_{L} + I(t)(R_{m} - c_{\rho}) - \rho_{0} \overset{!}{=} V_t \end{equation} \begin{equation} \Leftrightarrow I(t) \overset{!}{=} \frac{V_t - E_{L} + \rho_{0}}{(R_{m} - c_{\rho})} \label{eq:crit_input} %TODO: look up solution for simple LIF equation even if it's only for linear input %TODO: say that this is comparable to first-passage time problems such as in the %drift-diffusion model for decision making(maybe cite ratcliff2002 or so) \end{equation} %----------------------------------------------------------------------------------------
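For completeness, we note how the full model can be integrated numerically. Using an explicit Euler scheme with time step $\Delta t$ (one possible choice; the step size and the treatment of the noise terms below are assumptions of this sketch rather than part of the model definition), equations \ref{eq:inhib} and \ref{eq:mcell} become
\[
\rho_{k+1} = \rho_k + \frac{\Delta t}{\tau_\rho}\left[ -(\rho_k - \rho_0) + c_\rho I_k \right] + \sqrt{\Delta t}\,\xi^{\rho}_k,
\]
\[
V_{k+1} = V_k + \frac{\Delta t}{\tau_m}\left[ -(V_k - E_L) + R_m I_k - \rho_k \right] + \sqrt{\Delta t}\,\xi^{m}_k,
\]
where $\xi^{\rho}_k$ and $\xi^{m}_k$ are independent Gaussian random numbers and the $\sqrt{\Delta t}$ scaling of the noise increments follows the Euler--Maruyama convention. After each step, $V_{k+1}$ is compared with the threshold of equation \ref{eq:thrs}; if it is exceeded, a spike is recorded and $V_{k+1}$ is reset to $E_L$.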
\section{Examples} \label{sec:examples} We now present three worked examples of the Marciani Normal Form and Marciani's Rule. Let us first consider a very simple application. \begin{example} Let us consider the context-free grammar $G$ with axiom $S$ and the following productions \begin{flalign*} S&\rightarrow abcS|Sdef|ghi|\varepsilon \end{flalign*} The grammar is in MNF, thus we know it generates a context-free language. The language generated by $G$ is denoted by the regular expression \begin{equation*} (abc)^{*}(ghi+\varepsilon)(def)^{*} \end{equation*} \end{example} Let us now consider a slightly more complex application. \begin{example} Let us consider the context-free grammar $G$ with axiom $S$ and the following productions \begin{flalign*} S&\rightarrow aAS|SBdef|CD|\varepsilon \\ A&\rightarrow uA|Av|m \\ B&\rightarrow xB|By|n \\ C&\rightarrow gC|Ch|i \\ D&\rightarrow pD|Dq|r \end{flalign*} The grammar is in MNF, thus we know it generates a context-free language. The language generated by $G$ is denoted by the regular expression \begin{equation*} (au^{*}mv^{*})^{*}(g^{*}ih^{*}p^{*}rq^{*}+\varepsilon)(x^{*}ny^{*}def)^{*} \end{equation*} \end{example} We now consider an application to a grammar in the well-known Chomsky Normal Form (CNF) \cite{chomsky1959certain}. \begin{example} Let us consider the context-free grammar $G$ with axiom $S$ and the following productions \begin{flalign*} S&\rightarrow AS|SB|CD|z|\varepsilon \\ A&\rightarrow UA|AU|m \\ B&\rightarrow XB|BX|n \\ C&\rightarrow UC|CU|i \\ D&\rightarrow XD|DX|r \\ U&\rightarrow u \\ X&\rightarrow x \end{flalign*} The grammar is in MNF, thus we know it generates a context-free language. The language generated by $G$ is denoted by the regular expression \begin{equation*} (u^{*}mu^{*})^{*}(u^{*}iu^{*}x^{*}rx^{*}+z+\varepsilon)(x^{*}nx^{*})^{*} \end{equation*} \end{example}
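As a quick sanity check (not part of the original examples), one can mechanically enumerate short terminal strings derived from the grammar of the first example and verify that each of them matches the regular expression $(abc)^{*}(ghi+\varepsilon)(def)^{*}$. A minimal Python sketch of such a check:
\begin{verbatim}
import re
from collections import deque

PRODUCTIONS = {"S": ["abcS", "Sdef", "ghi", ""]}   # grammar of the first example
REGEX = re.compile(r"(abc)*(ghi)?(def)*")          # (abc)*(ghi + epsilon)(def)*

def derive(max_len=12):
    """Breadth-first derivation of all terminal strings up to max_len symbols."""
    seen, words, queue = {"S"}, set(), deque(["S"])
    while queue:
        form = queue.popleft()
        nonterminal = next((c for c in form if c.isupper()), None)
        if nonterminal is None:        # no nonterminal left: a terminal string
            words.add(form)
            continue
        i = form.index(nonterminal)
        for rhs in PRODUCTIONS[nonterminal]:
            new = form[:i] + rhs + form[i + 1:]
            if len(new.replace("S", "")) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return words

words = derive()
assert all(REGEX.fullmatch(w) for w in words)
print(len(words), sorted(words, key=len)[:5])
\end{verbatim}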
\subsection{Relating to previous work} One of the goals of our thesis is to at least reach the same level of performance as the work done by Kaliszyk \cite{kaliszyk2014machine}. For this purpose our experiments are set up in a comparable way. For example, in the experiments by Kaliszyk 10-fold \crossvalidation is also used, along with something comparable to the canonical consistensize method. Their final results are also presented for the \corn dataset. \begin{figure}[H] \begin{tabular}{lcccccc} - & \oocover & (ours) & & \auc & (ours) & \\ \hline \input{data/relative} \end{tabular} \caption{The relative performances of both the experiments in this thesis and the experiments as done by Kaliszyk.} \end{figure} Our \oocover performance is markedly lower, and cannot be explained away by minor differences in approach or parameters. We discussed this discrepancy with Kaliszyk, but were not able to reach a conclusion as to why this is the case. Unfortunately their code is not available for further research. Surprisingly, our \auc performance is quite similar. However, as discussed earlier, this does not help with regard to the premise selection problem. \subsection{Reliability} The results compared to the work by Kaliszyk et al.\ \cite{kaliszyk2014machine} bring the reliability of this thesis into question. There are two key differences compared to our approach. We have tried to account for these differences by measuring their effect. \subsubsection{Prior knowledge} We would expect the machine learning methods to perform better given the prior knowledge of the corpora that are depended upon. This, however, does not seem to be the case. The research by Kaliszyk \cite{kaliszyk2014machine} only states that 10-fold \crossvalidation is used, but not whether the prior datasets are also tested as part of the testset. Thus it is difficult to relate our results to those of Kaliszyk. As discussed in the results in Section \ref{section:prior}, we believe that the prior datasets introduce too much noise for the prediction methods to handle. This is purely conjecture, but we believe that this would be less of a problem if more descriptive features were used. Currently each definition forms a feature, but a single definition does not directly relate to which theorems would be useful. It is up to the model to internally relate definitions to useful theorems. If there is no prior context of usages of a theorem in relation to said definition, a model has no way to build these internal relations. Prior datasets increase the set of theorems and definitions, but do not necessarily generate a prior context which relates to the theorems of the testset. Thus they might make the problem harder to solve. Better descriptive features would be more abstract, but might provide a better context that follows naturally from that abstraction. Concretely, the \emph{structure} of a theorem might help as a better feature. \subsubsection{Poset consistency} The canonical method performs quite a bit worse than the pessimistic method. As mentioned in Section \ref{section:poset-consistency}, the canonical method performs worse because it removes more possibly relevant context from the trainingset than the pessimistic method does. Exactly which part is removed depends on the random seed determining the dataset order used in the canonical method, whereas for the pessimistic method only an (always the same) part of illegal theorems and definitions is removed.
For the canonical method it is possible to come up with a different seed (and thus a different ordering) that yields wildly different results. In this thesis a single random seed is consistently used for all of the results. The pessimistic method is not a fair comparison to the prior research, as it finds the optimal ordering (or even better). What it does provide, however, is an upper bound on the performance of the premise selection tools, as well as a novel, less biased way of comparing similar solutions to the premise selection problem. Whilst the canonical method might yield worse results due to its random seed, the results from the pessimistic method still do not hold a candle to the results by Kaliszyk. Thus we can conclude that, regardless of the random seed used, our solution will not yield a better result than that by Kaliszyk. \subsection{Machine learning approaches} Our main contribution to the premise selection problem for \coq is the implementation of various machine learning methods. We have discovered that the \knnadaptive method performs badly, regardless of the corpus used. The \nb method works reasonably well given suitable parameters; however, such parameters are hard to find. Our novel adaptation of \adarank works quite well, but is expensive in both computation and memory. Our quick \ensemble experiment shows promise, but we have not explored many combinations yet. \adarank is interesting in that it successfully maps solutions from the \emph{Learning to rank} domain to our \emph{Premise selection} domain. However, the amount of resources that it requires means that it cannot feasibly be run with the current implementation or hardware. As such, were we to implement a Clippy for \coq based on our experiments, it would be with a learner based on \ensemble. \subsection{Frequency, depth, flat models} The depth model seems to work best, as it is probably a more descriptive model of the dataset. The frequency model works fine. The flat model is not descriptive at all, and as such not useful. It would be interesting to see whether models based on both frequency and depth would be significantly better. \subsection{Corpora} Given the performance of our models it is hard to say anything about the various corpora used, except to infer their triviality. For each corpus we have summarised the performance in the following table: \begin{figure}[H] \begin{tabular}{lllrrr} Corpus & $|S|$ & $|\defs|$ & $|\thms|$ & \oocover & \auc \\\hline \input{data/counts-performance} \end{tabular} \caption{Corpora (depth model) examined in this thesis, number of objects, definitions and theorems, and the corresponding best performance for any machine learning method with the canonical strategy} \end{figure} Surprisingly, \formalin seems to be the most trivial, followed by \coq, \mathclasses, \corn and finally \mathcomp. In retrospect \formalin consists mostly of mechanical proofs concerning language semantics, and thus would indeed contain the more trivial theorems. This relation is not linear with regard to their size. \subsection{Tooling} We did not integrate our tooling into the CoqIDE GUI, even though this was one of the design goals for the \roerei tooling. As we quickly switched to C++ for the prediction component of \roerei, it is doubtful \roerei would ever be merged into the \coq main branch, even if the performance were significant. Due to the disappointing performance we decided not to invest the effort to actually implement an assistant like Clippy within CoqIDE.
\subsection{Future work} \subsubsection{Deep learning} Sarah Loos et al.\ have experimented with proof guidance and clause selection for \mizar using Deep Learning \cite{loos2017deep}. Their research shows a lot of promise. Adapting their work to the premise selection problem for \coq, however, is not straightforward, as the clause selection used depends on the first-order logic nature of \mizar and on operations on proofs in Clausal Normal Form (CNF). Alternatively, our transformation of the Learning to Rank problem for \adarank could be applied to previous Deep Learning solutions like those by Song \cite{song2018deep}. \subsubsection{Ensembles} For this thesis we ran a small experiment using ensemble learning, which worked moderately well compared to the simpler solutions. With more solutions in the core, quite a few variants of ensembles can be tried out. In particular, different ensembles can be built from models using both the depth and frequency datasets. Ensembles can also be made where part of the ensemble is only aware of local theorems, while other parts have a wide knowledge base. I think this might work because in some cases the models that did not learn from prior datasets performed better. \subsection{Conclusion} We have developed a method to extract the necessary information from the Coq system in order to perform premise selection, inspired by the work of Kaliszyk and described in Section \ref{section:extraction}. Several variants of this have been developed, of which the depth model currently works best. Of our experiments the Adarank method has the best performance, but requires significant amounts of resources. If resources are of import, then our single ensemble method works best. The Adarank method hints that other Learning-to-Rank solutions can also be transformed and might also perform well. In retrospect our comparison of the various corpora is of limited use by itself. Combined with an analysis of the methods, it shows most notably that the \knnadaptive method performs similarly in simple cases, but is unable to keep up on complex corpora such as \corn. The dependency on prior corpora also gives insight into the inability of the methods to make use of a larger context. \subsection{Reflection} In academic works any mistake can impact the results significantly. This is especially true for work that applies machine learning, as machine learning could be compared to a black box which is difficult to reason about. During the implementation of this thesis I often wondered whether the results published in the various papers I read were actually correct, or whether the results were flawed due to bugs in, for example, the implementation of the performance metrics. In various cases I was not able to reproduce the published results, as the technical descriptions lack the necessary detail to re-implement the described experiments. Even when speculating on these implementation details, sub-par performance results are achieved. As a result I have become sceptical about machine learning as an academic field. I think it is absolutely critical for the machine learning field, indeed for all of academia, that reproducibility, but also falsification of prior results, gains more importance. Currently it is almost impossible to get a paper published that re-affirms prior results. In Computing Science the horrors of \emph{PhD-code}, code which is of abhorrently bad quality, are also well known. By this I do not mean that no code written by PhD students is of good quality, but it is surely a rarity.
However, I also understand the desire to adapt and experiment quickly, and that code quality is of minor concern to those authoring it. It is exactly this mindset that should be encouraged to change. A lot could be learned from the principles of Open Source software, where code is typically written in a universally understandable manner and published in an accessible way. I would even go so far as to state that all published academic works should also publish their complete datasets and instrumentation (in the case of Computing Science: the code) for the sake of efficiency and integrity. It was conveyed to me that Computing Science in particular has put in effort to improve this situation, and that publication of all materials involved has become the standard, at least at the Radboud University. I have done my utmost, given limited time, to enable the full reproducibility of the datasets and ultimately the results of this thesis. Even though it is highly likely that there is a critical error in either the method or the code, and that my results are thus of limited use, I sincerely hope that the description written down in this thesis and the code published online will still be of some use to someone.
\vssub \subsubsection{~Rotated grids} \label{sub:num_space_rotagrid} \opthead{RTD}{\ws\ (MetOffice)}{J.-G. Li} \noindent The rotated grid is a latitude-longitude (lat-lon) grid and is obtained by rotating the North Pole along a longitude $\lambda_{p}$ to a new position at latitude $\phi_{p}$ in the standard latitude-longitude system. The new pole position is chosen so that the model domain of interest may be placed around the rotated equatorial area for an evenly-spaced lat-lon mesh. For this reason the rotated grid is also known as an \emph{equatorial grid}. For instance, the North Atlantic and European wave (NAEW) model used at the UK Met Office uses a rotated pole at 37.5N, 177.5E so that London, UK (\textasciitilde{}51.5N 0.0E) is almost on the rotated equator. This rotated grid allows a much more evenly spaced lat-lon mesh in the NAEW domain than the standard lat-lon grid in the same area. In \ws, the rotated grid is implemented with minimum changes to the original lat-lon grid. In fact, the rotated grid is treated just like the standard lat-lon grid inside the model. To set up and run a rotated grid model configuration, users should choose the regular lat-lon grid along with the {\code RTD} switch. The rotated pole position is set using the {\code PLAT} and {\code PLON} variables under the namelist {\code ROTD} in the input file {\bf ww3\_grid.inp} (see Sect.~\ref{sec:config011}). If the pole is set as {\code PLAT = 90.0}, {\code PLON = -180.0}, the grid is treated as a standard lat-lon system. Model input files, such as wind, current and ice files, should be mapped onto the rotated grid. For convenience of nesting in standard lat-lon grid frameworks, boundary condition data provided to and output from the rotated grid use spectra referenced to standard grid north and standard lat-lon grid point values, which are converted into rotated grid lat-lon inside \ws. The list of 2D spectral output locations in {\bf ww3\_shel.inp} or {\bf ww3\_shel.nml} is also specified in standard lat-lon. When nesting from a standard grid to a rotated grid model, both the outer and inner grids should be built with the {\code RTD} switch set when compiling executables for the models. When nesting from a rotated grid to a standard grid model, the inner (standard) grid model does not need to be built with {\code RTD}. Output of spectra at boundary points to one-way nested inner grids is transferred as described in Appendix~\ref{app:nest}. Output b.c.\ are defined in the input file (see {\bf ww3\_grid.inp}, Sect.~\ref{sec:config011}) as a sequence of straight lines given in coordinates of the inner grid. If the inner grid is rotated, its pole position must be defined as the values of the array elements {\code BPLAT(\sl{n})} and {\code BPLON(\sl{n})} under namelist {\code ROTB} for the outer grid. The array index {\sl{n}} is the index ({\sl{1:9}}) of the boundary conditions file {\file nest{\sl{n}}.ww3}. Model directional and x-y vector outputs from a rotated grid can be converted to a standard grid north reference by setting the UNROT variable in the {\bf ww3\_grid.inp} namelist ROTD to True. With this set, for point outputs at lat-lon locations, all directional values such as wind direction, current direction and 2D spectra are converted into standard lat-lon orientation. Functions to de-rotate gridded fields are applied in {\bf ww3\_ounf}, {\bf ww3\_outf} and {\bf ww3\_grib}.
When running {\bf ww3\_ounf} and {\bf ww3\_ounp}, the resulting netCDF files will include a variable attribute direction\_reference, which describes whether a standard (True North) or rotated grid directional reference frame has been used. Gridded netCDF files generated by {\bf ww3\_ounf} also include \emph{standard\_latitude} and \emph{standard\_longitude} two-dimensional arrays that describe location of the rotated model cell centres in the standard lat-lon reference frame. If the user wishes to generate boundary conditions for a rotated pole grid using the {\bf ww3\_bound} or {\bf ww3\_bounc} boundary processing programs, then it should be noted that the input spectra for these programs are always expected to be formulated on a \emph{standard pole}. Six subroutines are provided in module {\bf w3servmd.ftn} for rotated grid conversion: \begin{vlist} \vit{w3spectn}{}{Turns wave spectrum anti-clockwise by AnglD} \vit{w3acturn}{}{Turns wave action(k,nth) anti-clockwise by AnglD} \vit{w3thrtn}{}{Turns direction parameters anti-clockwise by AnglD} \vit{w3xyrtn}{}{Turns x-y vector parameters anti-clockwise by AnglD} \vit{w3lltoeq}{}{Convert standard into rotated lat/lon plus AnglD} \vit{w3eqtoll}{}{Reverse of w3lltoeq, but AnglD unchanged} \end{vlist} These subroutines are self-contained and can be extracted outside the model for pre- or post-processing of rotated grid files. Some conversion tools have been developed based on these subroutines but have not been included in \ws\ yet. Refer to the regression test \emph{regtests/ww3\_tp2.11} for an example of a rotated grid model (NAEW). Users may find more information in \emph{smc\_docs/Rotated\_Grid.pdf} or contact Jian-Guo Li for help (\url{[email protected]}).
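The conversion subroutines themselves are not reproduced in this manual. As a rough geometric illustration only (and explicitly not the \ws\ code), the following Python sketch maps standard lat-lon coordinates to a grid whose pole has been moved to ({\code PLAT}, {\code PLON}) and back, using plain rotation matrices. Note that the rotated-longitude origin convention, and therefore the returned longitudes, may differ from those of {\code w3lltoeq} and {\code w3eqtoll}, and the turning angle AnglD is not computed.
\begin{verbatim}
import numpy as np

def ll_to_xyz(lat, lon):
    """Unit vector on the sphere for a (lat, lon) pair given in degrees."""
    lat, lon = np.radians(lat), np.radians(lon)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def xyz_to_ll(v):
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def rotation(plat, plon):
    """Rotation that moves the point at (plat, plon) onto the new north pole."""
    a, b = np.radians(-plon), np.radians(plat - 90.0)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    ry = np.array([[np.cos(b),  0.0, np.sin(b)],
                   [0.0,        1.0, 0.0],
                   [-np.sin(b), 0.0, np.cos(b)]])
    return ry @ rz

def standard_to_rotated(lat, lon, plat=37.5, plon=177.5):
    return xyz_to_ll(rotation(plat, plon) @ ll_to_xyz(lat, lon))

def rotated_to_standard(lat, lon, plat=37.5, plon=177.5):
    return xyz_to_ll(rotation(plat, plon).T @ ll_to_xyz(lat, lon))

# For the NAEW pole, London ends up within about a degree of the rotated equator.
print(standard_to_rotated(51.5, 0.0))
\end{verbatim}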
\chapter{Introduction} \emph{``Is your CS better than that of the CS majors? Then is your EE better than that of the EE majors?''} \newpage \section{Basics} The basic assumption of machine learning is that \textbf{data samples are i.i.d.}. The goal of training a model is to \textbf{minimize the generalization error of the model}. Since we only have a limited amount of data, what we can actually do is to minimize the empirical error. However, we do not always want the empirical error to be as small as possible, due to the risk of overfitting. \section{Overfitting and Underfitting} \paragraph{Overfitting.} High variance. The model performs well on the training set but performs poorly on new, unseen samples. Using a high-order model to fit a low-order data distribution usually leads to overfitting. \paragraph{Underfitting.} High bias. The model has not fully captured the underlying structure of the data. Conduct more training or switch to a more complicated model. \section{Methods for Splitting Data} To train a model, we first need to divide the data into a training set and a test set. The training set and the test set should be disjoint. \subsection{Hold-Out} Divide the dataset $\mathcal{D}$ into a training set $\mathcal{S}$ and a test set $\mathcal{T}$ s.t. \[ \mathcal{S} \cup \mathcal{T} = \mathcal{D} \quad \mathcal{S} \cap \mathcal{T} = \emptyset \] A typical proportion for $\mathcal{S}$ and $\mathcal{T}$ is 70\% and 30\%, respectively. \subsection{Cross-Validation} Divide $\mathcal{D}$ into $k$ disjoint sets of similar size. \[ \mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2 \cup \dots \cup \mathcal{D}_k \quad \text{s.t.} \quad \mathcal{D}_i \cap \mathcal{D}_j = \emptyset \ (i \neq j) \] Each time use $k-1$ sets for training and the remaining set for testing. A typical value of $k$ is $10$. \subsection{Leave-One-Out} A special case of cross-validation, where each set $\mathcal{D}_i$ contains only one sample. \subsection{Bootstrapping} Suppose $\mathcal{D}$ has $m$ samples. Randomly pick a sample from $\mathcal{D}$, copy it into some $\mathcal{D}'$ and put it back into $\mathcal{D}$. Repeat the process $m$ times. \[ \lim_{m\to\infty}(1-\frac{1}{m})^m = \frac{1}{e} \approx 0.368 \] About $36.8\%$ of the samples in $\mathcal{D}$ will not be in $\mathcal{D}'$. So we can use $\mathcal{D}'$ for training and $\mathcal{D}\backslash\mathcal{D}'$ for testing. \section{Performance Evaluation} \subsection{Measure} \paragraph{Regression} A common performance measure for a regression model is the \textbf{Mean Squared Error}. \[ E = \frac{1}{m}\sum_{i=1}^m(f(x^{(i)}) - y^{(i)})^2 \] \paragraph{Classification} A common measure for a classification model is the \textbf{Error Rate} \[ E = \frac{1}{m}\sum_{i=1}^m\mathbb{I}[f(x^{(i)}) \neq y^{(i)}] \] \subsection{TPR and FPR} \begin{definition}[Sensitivity/TPR] \[ TPR = \frac{TP}{TP + FN} \] \end{definition} \begin{definition}[FPR] \[ FPR = \frac{FP}{TN + FP} \] \end{definition} \subsection{Receiver Operating Characteristic} Many classification models output a real value and compare it to a certain threshold. The \textbf{ROC Curve} uses $FPR$ as its $x$-axis, and $TPR$ as its $y$-axis. It can be plotted by setting different thresholds for dividing positive and negative samples. The \textbf{Area Under Curve, AUC} is used to evaluate different models. Usually models with a larger AUC are considered to have better performance. \subsection{Precision and Recall} \begin{definition}[Precision] \[ P = \frac{TP}{TP + FP} \] \end{definition} \begin{definition}[Recall] \[ R = \frac{TP}{TP + FN} \] \end{definition} Similar to the ROC Curve, we can also plot the \textbf{P-R Curve}.
And the \textbf{Break-Even Point, BEP}, defined as the value at which $P = R$, is used to evaluate different models. Another, more common measure is the $F1$ rate \begin{definition}[$F1$ Rate] \[ F1 = \frac{2 \times P \times R}{P + R} = \frac{2 \times TP}{\#Samples + TP - TN} \] \end{definition} \begin{remark} The $F1$ rate is the harmonic mean of Precision and Recall. \end{remark} \begin{definition}[$F_{\beta}$ Rate] \[ F_{\beta} = \frac{(1+\beta^2)\times P \times R}{(\beta^2 \times P)+R} \] \end{definition} \begin{remark} $F_{\beta}$ is the weighted harmonic mean of Precision and Recall. When $\beta > 1$, recall has a higher weight. When $0 < \beta < 1$, precision has a higher weight. \end{remark} \section{Error Analysis} \paragraph{Bias.} The \textbf{bias} is the difference between the model prediction and the ground truth. This is usually because the model is not well-trained, or because the model is not complex enough to fit the data distribution. \paragraph{Variance.} The \textbf{variance} is the variance of the outputs of the same model trained on different datasets. This is usually because the model is too complex and mistakenly fits the noise or specific features in the dataset. \paragraph{Noise.} The irreducible error that stems from the data itself. High variance $\to$ Overfitting. High bias $\to$ Underfitting. \subsection{Bias-Variance Decomposition} Let $\bar{f}(x) = \mathbb{E}_{\mathcal{D}}[f(x;\mathcal{D})]$ denote the expected prediction over training sets, and let \[ bias(x) = \bar{f}(x) - y \] \[ var(x) = \mathbb{E}_{\mathcal{D}}[(f(x;\mathcal{D}) - \bar{f}(x))^2] \] The generalization error of a model $f$ trained on $\mathcal{D}$ can then be decomposed as \[ E(f;\mathcal{D}) = bias^2(x) + var(x) + \varepsilon^2 \] where $\varepsilon^2$ is the irreducible noise.
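The following short Python snippet (a minimal sketch added for illustration, not part of the original notes) draws a bootstrap sample of size $m$ and measures the fraction of samples that never appear in $\mathcal{D}'$; as $m$ grows this fraction approaches $1/e \approx 0.368$, as stated above.
\begin{verbatim}
import random

def out_of_bag_fraction(m, seed=0):
    """Fraction of the m original samples that never appear in the bootstrap set D'."""
    random.seed(seed)
    drawn = {random.randrange(m) for _ in range(m)}  # indices copied into D'
    return 1.0 - len(drawn) / m                      # samples left for testing

for m in (100, 10_000, 1_000_000):
    print(m, round(out_of_bag_fraction(m), 4))       # tends towards ~0.368
\end{verbatim}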
%% %% ACS project dissertation template. %% %% Currently designed for printing two-sided, but if you prefer to %% print single-sided just remove ",twoside,openright" from the %% \documentclass[] line below. %% %% %% SMH, May 2010. \documentclass[a4paper,12pt,twoside,openright]{report} %% %% EDIT THE BELOW TO CUSTOMIZE %% \def\authorname{Ziheng Zhang\xspace} \def\authorcollege{Darwin College\xspace} \def\authoremail{[email protected]} \def\dissertationtitle{Automatic Recognition of Bipolar Disorder from Multimodal Data} \def\wordcount{11004} \usepackage{epsfig,graphicx,parskip,setspace,tabularx,xspace} \usepackage{makecell,enumerate,hyperref, ifthen, rotating, amsmath} \usepackage[toc,page]{appendix} \usepackage[utf8]{inputenc} \DeclareMathOperator{\sign}{sign} \usepackage{caption} \captionsetup[table]{skip=5pt} %% START OF DOCUMENT \begin{document} %% FRONTMATTER (TITLE PAGE, DECLARATION, ABSTRACT, ETC) \pagestyle{empty} \singlespacing \input{preambles/titlepage} \onehalfspacing \input{preambles/declaration} \singlespacing \input{preambles/abstract} \pagenumbering{roman} \setcounter{page}{0} \pagestyle{plain} \tableofcontents \listoffigures \addcontentsline{toc}{chapter}{List of Figures} \listoftables \addcontentsline{toc}{chapter}{List of tables} \onehalfspacing %% START OF MAIN TEXT \chapter{Introduction} \label{ch:introduction} \pagenumbering{arabic} \setcounter{page}{1} \input{mainSections/1-introduction} \chapter{Background} \label{ch:background} \input{mainSections/2-background} \chapter{Related Work} \label{ch:literature} \input{mainSections/3-literature} \chapter{Design and Implementation} \label{ch:design} \input{mainSections/4-design} \chapter{Evaluation} \label{ch:evaluation} \input{mainSections/5-evaluation} \chapter{Summary and Conclusion} \label{ch:summary} \input{mainSections/6-conclusion} \let\svaddcontentsline\addcontentsline \renewcommand\addcontentsline[3]{% \ifthenelse{\equal{#1}{lof}}{}% {\ifthenelse{\equal{#1}{lot}}{}{\svaddcontentsline{#1}{#2}{#3}}}} \appendix \addcontentsline{toc}{chapter}{Appendix} \chapter{Complete Experimental Results} \label{ch:appendix} \input{mainSections/appendix.tex} \singlespacing \bibliographystyle{unsrt} \bibliography{reference} \end{document}
% === [ Evaluation ] =========================================================== \section{Evaluation} % MUST: Outcome of Mandatory Objectives % SHOULD, COULD: Outcome of Minor Objectives of Incremental Difficulty % TODO: Outcome of Mandatory Objectives % TODO: Outcome of Minor Objectives of Incremental Difficulty % TODO: compared results against thesis. It would be interesting to see if the results actually aligns with the thesis. % === [ Subsections ] ========================================================== \input{sections/6_evaluation/1_metric_for_the_effectiveness_of_control_flow_recovery} \input{sections/6_evaluation/2_control_flow_recovery_methods} \input{sections/6_evaluation/3_effectiveness_of_control_flow_recovery_methods} \input{sections/6_evaluation/4_intuition_behind_identified_deficiencies_in_control_flow_recovery_methods}
\section{Buzzer}\label{s:buzzer} \project{Morse code SOS using a Buzzer}{In this project, you will learn how to wire and program a buzzer, and use it to produce Morse code. You will be using `subroutines'. } \subsection*{Equipment Required} The circuit built in Section~\ref{s:button}, plus the following: \begin{itemize} \item Buzzer \item 1 x M/F jumper wire \item 1 x M/M jumper wire \end{itemize} \subsection*{Additional Parts} You will be adding a buzzer to the LED and switch circuit that you made in Section~\ref{s:button}. Let us look at the additional component. Do not skip this section, as you will need to know how to connect the buzzer. \subsubsection*{Buzzer} \limage{0.15}{buzzer} The buzzer supplied in the EduKit is an `active' buzzer, which means that it only needs an electric current to make a noise. In this case, you are using the Raspberry Pi to supply that current. The buzzer has positive and negative legs. The longer leg is positive (shown in red in the diagram), the shorter leg is negative (shown in black in the diagram). ~\hfill \vspace{5ex} ~\hfill \newpage \subsection*{Building the Circuit} \limage{0.5}{buzzer-circuit} Before you connect additional components to your circuit, you should turn off your Pi. Leave the LED and switch circuit from Section~\ref{s:button} in place. Place the buzzer on the breadboard straddling the middle divide. The longer leg should be connected via a jumper wire to GPIO 16. The other leg should be connected to the ground rail. GPIO 16 will be an output pin and, when it is set on, the buzzer will sound. Remember that we will need to ensure that this pin can be used as an output. This pin is bit 4 in the user port and has a value of 16. You will need to set this bit in the DDR to 1 to allow output. ~\hfill ~\hfill ~\hfill ~\hfill ~\hfill ~\hfill ~\hfill ~\hfill ~\hfill \subsection*{Concepts} You are going to be using `subroutines' in the code below. These are pieces of code that you may want to run more than once, but by using subroutines, you only have to write them once. You then `call' that subroutine from within your code each time you want to run it. A subroutine is any section of code which ends with a `RETURN' statement: \begin{basic} 200 PRINT "HELLO WORLD!" 210 RETURN \end{basic} To use this subroutine, you must `call' it by using a `GOSUB' statement with the line number that starts the subroutine: \begin{basic} 50 GOSUB 200 \end{basic} Now, every time BASIC sees `GOSUB 200' in your code, it will print ``HELLO WORLD!''.
\subsection*{Code} Type in the following code below exactly as seen: \begin{basic} 10 REM MORSE CODE 20 REM SETUP LOCATIONS 30 UP=56577:DDR=56579 40 REM PIN VALUES FOR BUZZER 50 Z=16 60 REM SET BUZZER PIN FOR OUTPUT 70 POKE DDR,Z 80 REM CLEAR USER PORT (ALL PINS OFF) 90 POKE UP,0 100 PRINT CHR\$(147);:REM CLEARS THE SCREEN 110 PRINT "MORSE CODE" 120 REM PROMPT THE USER FOR INPUT 130 INPUT "HOW MANY TIMES FOR SOS TO LOOP";C 140 FOR L=1 TO C 150 GOSUB 1500:REM S 160 GOSUB 1300:REM LETTER SPACE 170 GOSUB 1600:REM O 180 GOSUB 1300:REM LETTER SPACE 190 GOSUB 1500:REM S 200 GOSUB 1400:REM WORD SPACE 210 NEXT L 220 END 1000 REM A DELAY OF D TENTHS OF A SECOND 1010 T=D*72 1020 FOR I=1 TO T:NEXT I 1030 RETURN 1100 REM A SINGLE MORSE DOT 1110 POKE UP,Z 1120 D=1:GOSUB 1000 1130 POKE UP,0 1140 D=1:GOSUB 1000 1150 RETURN 1200 REM A SINGLE MORSE DASH 1210 POKE UP,Z 1220 D=3:GOSUB 1000 1230 POKE UP,0 1240 D=1:GOSUB 1000 1250 RETURN 1300 REM THE SPACE BETWEEN LETTERS 1310 D=2:GOSUB 1000 1320 RETURN 1400 REM THE SPACE BETWEEN WORDS 1410 D=6:GOSUB 1000 1420 RETURN 1500 REM THE MORSE FOR S 1510 GOSUB 1100:REM DOT 1520 GOSUB 1100:REM DOT 1530 GOSUB 1100:REM DOT 1540 RETURN 1600 REM THE MORSE FOR O 1610 GOSUB 1200:REM DASH 1620 GOSUB 1200:REM DASH 1630 GOSUB 1200:REM DASH 1640 RETURN \end{basic} Save the program as ``6 MORSE CODE''. \subsection*{Running the Code} Run the program. You will be prompted for the number of times you want to repeat `SOS'. \subsection*{Challenge} Using the above code as your template, write another program that will allow you to sound any Morse code you choose. Use the following rules: \begin{itemize} \item The length of a dot is one unit. \item The length of a dash is three units. \item The space between the parts of each letter is one unit. \item The space between letters is three units. \item The space between words is seven units. \item The letter and number codes are: \end{itemize} \begin{center} \begin{tabular}{llllllllllll} A & \ds\dl & G & \dl\dl\ds & M & \dl\dl & S & \ds\ds\ds & Y & \dl\ds\dl\dl & 4 & \ds\ds\ds\ds\dl \\ B & \dl\ds\ds\ds & H & \ds\ds\ds\ds & N & \dl\ds & T & \dl & Z & \dl\dl\ds\ds & 5 & \ds\ds\ds\ds\ds \\ C & \dl\ds\dl\ds & I & \ds\ds & O & \dl\dl\dl & U & \ds\ds\dl & 0 & \dl\dl\dl\dl\dl & 6 & \dl\ds\ds\ds\ds \\ D & \dl\ds\ds & J & \ds\dl\dl\dl & P & \ds\dl\dl\ds & V & \ds\ds\ds\dl & 1 & \ds\dl\dl\dl\dl & 7 & \dl\dl\ds\ds\ds \\ E & \ds & K & \dl\ds\dl & Q & \dl\dl\ds\dl & W & \ds\dl\dl & 2 & \ds\ds\dl\dl\dl & 8 & \dl\dl\dl\ds\ds \\ F & \ds\ds\dl\ds & L & \ds\dl\ds\ds & R & \ds\dl\ds & X & \dl\ds\ds\dl & 3 & \ds\ds\ds\dl\dl & 9 & \dl\dl\dl\dl\ds \end{tabular} \end{center} \subsection*{Advanced Challenge} Using what you have learned so far, especially from Section~\ref{s:button}, make your own Morse code machine by making the buzzer sound when you press the button.
\documentclass{entcs} \usepackage{entcsmacro} \input pdfcolor.tex % The following is enclosed to allow easy detection of differences in % ascii coding. % Upper-case A B C D E F G H I J K L M N O P Q R S T U V W X Y Z % Lower-case a b c d e f g h i j k l m n o p q r s t u v w x y z % Digits 0 1 2 3 4 5 6 7 8 9 % Exclamation ! Double quote " Hash (number) # % Dollar $ Percent % Ampersand & % Acute accent ' Left paren ( Right paren ) % Asterisk * Plus + Comma , % Minus - Point . Solidus / % Colon : Semicolon ; Less than < % Equals = Greater than > Question mark ? % At @ Left bracket [ Backslash \ % Right bracket ] Circumflex ^ Underscore _ % Grave accent ` Left brace { Vertical bar | % Right brace } Tilde ~ % A couple of exemplary definitions: \newcommand{\Nat}{{\mathbb N}} \newcommand{\Real}{{\mathbb R}} \def\lastname{Please list Your Lastname Here} \begin{document} \begin{frontmatter} \title{An Example Paper} \author{My Name\thanksref{ALL}\thanksref{myemail}} \address{My Department\\ My University\\ My City, My Country} \author{My Co-author\thanksref{coemail}} \address{My Co-author's Department\\My Co-author's University\\ My Co-author's City, My Co-author's Country} \thanks[ALL]{Thanks to everyone who should be thanked} \thanks[myemail]{Email: \href{mailto:[email protected]} {\texttt{\normalshape [email protected]}}} \thanks[coemail]{Email: \href{mailto:[email protected]} {\texttt{\normalshape [email protected]}}} \begin{abstract} This is a short example to show the basics of using the ENTCS style macro files. Ample examples of how files should look may be found among the published volumes of the series at the ENTCS Home Page \texttt{http://www.elsevier.nl/locate/entcs}. \end{abstract} \begin{keyword} Please list keywords from your paper here, separated by commas. \end{keyword} \end{frontmatter} \section{Introduction}\label{intro} This short note provides a guide to using the ENTCS macro package for preparing papers for publication in your conference \emph{Proceedings}. The \emph{Proceedings} may be printed and hard copies distributed to participants at the meeting; this is an option that Conference Organizers may choose to exercise. The \emph{Proceedings} also will be part of a volume in the series \emph{Electronic Notes in Theoretical Computer Science} (ENTCS), which is published under the auspices of Elsevier Science B.~V., the publishers of \emph{Theoretical Computer Science}. Its home page is \href{http://www.elsevier.nl/locate/entcs} {\texttt{http://www.elsevier.nl/locate/entcs}}. The ENTCS macro package consists of two files: \begin{description} \item[\texttt{entcs.cls},] the basic style file, and \item[\texttt{entcsmacro.sty},] a macro file containing the definitions of some of the theorem-like environments and a few other tidbits. \end{description} The formatting these style files impose should \emph{not} be altered -- the reason for using them is to attain a uniform format for all papers in the \emph{Proceedings} of which your paper is a part. Additional macro files can be added using \verb+\usepackage{...}+. The file \texttt{entcs\-macro.sty} \emph{must} be included in the list, as is done at the start of the source file for this paper. The ENTCS package requires a relatively up-to-date \LaTeX\ system in order to be successfully used. This is reflected in two other packages that are called by entcs.cls, which must be available on your machine. These are: \begin{itemize} \item The \texttt{hyperref} package.
This package allows the use of hyperlinks in files prepared using \LaTeX 2e, one of the main features of Adobe's Acrobat$^{\tiny \copyright}$ Reader software. Be sure that you have at least version 6.69d of this package. \item The \texttt{ifpdf} package. This is used by hyperref to differentiate between the use of pdf\LaTeX\ and \LaTeX 2e, followed by dvips and then ps2pdf. \end{itemize} The file \texttt{instraut.dvi} contains information about the use of \LaTeX to prepare files for online publication by Elsevier. This file refers to the older version of \LaTeX\ that is no longer supported, and that is inadequate for preparing \texttt{.pdf} files for online publication. Reading this file should answer most of the basic questions about \LaTeX\ that might arise. \section{Frontmatter} The biggest difference between a ``usual'' \LaTeX\ style such as \texttt{article.sty} and the ENTCS package is that the ENTCS macro package requires the title, author's name or names, abstract, keywords and ``thanks'' all to be included within the \texttt{frontmatter} environment. At the beginning of the source file for this paper, you'll notice this. Also, you'll notice that the usual \verb+\maketitle+ is absent; it is no longer needed. The ENTCS style package automatically generates the title, author's name and address, and related material at the beginning of the paper. Note also that hyperref has been disabled in this part of the entcs.cls file, so references to footnotes aren't linked to the appropriate footnotes or addresses. This is an old problem with \LaTeX, involving the fact that the references within the frontmatter aren't passed cleanly to the linking software. For those who have used the ENTCS package before, the one new thing to note is the inclusion of \emph{Keywords}; these are now required by Elsevier -- they're also required by ACM's \emph{Computing Reviews} which reviews ENTCS publications. The ENTCS macro package provides two alternatives for listing authors' names and addresses. These are described in detail in the file \texttt{instraut.dvi}. Basically, listing each author and his or her address in turn is the simplest method. But, if there are several authors and two or more share the same address (but not all authors are at this address), then the method of listing authors first, and then the addresses, and of referencing addresses to authors should be used. Also, notice that acknowledgment of support (the contents of \verb+\thanks+) should be done by a separate listing of \verb+\thanks[NSF]{To the NSF}+ with the optional argument -- \verb+[NSF]+ -- being used for \verb+\thanksref+ which is attached to those authors acknowledging such support. It is important that the \verb+\thanks+ not be included within the scope of \verb+\author{}+ or of \verb+\title{}+, but it must be within the scope of the environment \texttt{frontmatter}. More details about added terms such as \verb+\collab+ can be found in \texttt{inst.dvi}, if they are needed. Also, notice that the command \verb+\lastname{My Lastname}+ has been included \emph{before} the \texttt{frontmatter} begins. This command should contain the last names of the authors of the paper. If there are no more than three authors, then they should be listed with the word ``and'' between the last two; if more than three authors collaborated on the paper, then the first author only should be listed, together with \verb+\emph{et al}+. This command creates the headline for each page after page 1.
Finally, please be sure to include an abstract for your paper. \section{Sectioning and Environments} Since ENTCS is published through the auspices of Elsevier Science B.~V., their style files have been used to create the ENTCS macro package. Here's a proof that this package is not much different than most of the ones one encounters: \begin{definition} A file is \emph{derived} from another if it is obtained with only a few modifications from the original file. \end{definition} \begin{theorem} The file \texttt{\normalshape entcs.cls} is derived from \texttt{\normalshape elsart.sty}. \end{theorem} \begin{proof} This is clear from the similarity of the output to the output from Elsevier's style files. \end{proof} If one wants to start a proof with a descriptive word, such as ``sketch'', then one can use the \verb+\begin{proof*}...\end{proof*}+ environment, as in \begin{proof*}{Proof (Sketch)} This can be derived from simple observations. \end{proof*} The main difference between the file \texttt{entcs.cls} and the \texttt{elsartr.cls} file used by Elsevier is the more precise format we use -- Elsevier's generic files are meant for preliminary editing, and more precise formatting is imposed using a macro file designed for the specific Elsevier journal in which the paper will eventually appear. The \texttt{entcs.cls} and \texttt{entcsmacro.sty} files format papers uniformly so that they all are easily recognizable as being from the series \emph{Electronic Notes in Theoretical Computer Science}. All of the usual features of \LaTeX\ are available with these style files -- it is only the formatting that has been rigorously defined. Thus, one has available the sectioning commands \verb+\section,\subsection, \paragraph+ and \verb+\subparagraph.+ The numbering scheme used is one under which Theorem 1.2.3 is the third numbered item in the second subsection of the first section of the paper. In order to facilitate cross-references, all of the named environments given below are numbered, and all use the same number scheme. The file \texttt{entcsmacro.sty} contains additional information that is needed to typeset a paper. It also has the definitions of the $\cal AMS$ \texttt{euler} and \texttt{blackboard bold} fonts built in. If you want to use symbols for the natural numbers, the reals, etc., then we prefer that you use the blackboard bold fonts, and not plain bold fonts. This is accomplished by using the \verb+\mathbb+ font, as in $\Nat$ or $\Real$. The names of theorem-like environments are provided in \texttt{entcsmacro.sty}. With the exception of the environment Algorithm, the names of all of these are the full name, rather than a shortened version. The environments provided and their names are \begin{itemize} \item \verb+\begin{theorem} ... \end{theorem}+ for Theorems, \item \verb+\begin{lemma} ... \end{lemma}+ for Lemmas, \item \verb+\begin{corollary} ... \end{corollary}+ for Corollaries, \item \verb+\begin{proposition} ... \end{proposition}+ for Propositions, \item \verb+\begin{criterion} ... \end{criterion}+ for Criteria, \item \verb+\begin{alg} ... \end{alg}+ for Algorithms, \item \verb+\begin{definition} ... \end{definition}+ for Definitions, \item \verb+\begin{conjecture} ... \end{conjecture}+ for Conjectures, \item \verb+\begin{example} ... \end{example}+ for Examples, \item \verb+\begin{problem} ... \end{problem}+ for Problems, \item \verb+\begin{remark} ... \end{remark}+ for Remarks, \item \verb+\begin{note} ... \end{note}+ for Notes, \item \verb+\begin{claim} ... 
\end{claim}+ for Claims, \item \verb+\begin{summary} ... \end{summary}+ for Summary, \item \verb+\begin{case} ... \end{case}+ for Cases, and \item \verb+\begin{ack} ... \end{ack}+ for Acknowledgements. \end{itemize} For example, \begin{algorithm}[h] \begin{alg} Step 1: Write the paper\\ Step 2: Format it with the ENTCS macro package\\ Step 3: Ship the whole thing to the Guest Editors\\ \end{alg} \end{algorithm} \section{References and Cross-references} All the cross-referencing facilities of \LaTeX\ are supported, so one can use \verb+\ref{}+ and \verb+\cite{}+ for cross-references within the paper and for references to bibliographic items. As is done in this note, the \textbf{References} section~\ref{bibliography} can be composed with \verb+\begin{thebibliography}...\end{thebibliography}+. Alternatively, Bib\TeX\ can be used to compile the bibliography. Whichever one is used, the references are to be numbered consecutively, rather than by author-defined acronyms. Of course you can use your own acronyms for easy reference to each of the items in the bibliography, as has been done with the listing for this short note. However, note that the references should \emph{not} be started with a new \verb+\section+ command. The package \texttt{hyperref} is automatically loaded by entcs.cls, and this makes all the cross-references within the document ``active'' when the pdf file of the paper is viewed with Adobe's Acrobat$^{\tiny \copyright}$ Reader. The format for including a link is simple: simply insert \verb+\href{URL}+ \verb+{text}+ where \emph{URL} is the URL to which you want the link to point, and \emph{text} is the text you want to be highlighted, which when clicked upon will bring up the desired web page. \subsection{Particulars about {\normalshape \texttt{.pdf} files}} We now require that \texttt{.pdf} files be provided for publication online. A \texttt{.pdf} file is viewable by Adobe's Acrobat$^{\tiny \copyright}$ viewer, which can be configured to load automatically within a browser. Viewing a properly formatted \texttt{.pdf} file with Acrobat$^{\tiny \copyright}$ allows the cross-references and links to URLs to be active. In fact, Elsevier utilizes \texttt{.pdf} files in order to take better advantage of the web's capabilities. But one point we want to emphasize is that you should be sure to use Type 1 fonts when you typeset your \LaTeX\ source file. These fonts are scalable, meaning that they carry information that allows the device viewing the final output to scale the fonts to suit the viewer being used -- from an onscreen viewer such as Adobe's Acrobat$^{\tiny \copyright}$ Reader, to printing the file on a printer. You can tell if you have used the right fonts by viewing the final output on your machine. If the fonts look grainy, then you have not used Type 1 fonts. They can be located at the CTAN archive \href{http://www.ctan.org}{\tt http://www.ctan.org} -- they are public domain fonts, and don't cost anything to add them to your system. Assuming you have Type 1 fonts available, then there are three methods for producing \texttt{.pdf} files. \paragraph{Using \texttt{dvips} and \texttt{ps2pdf}} We list this option first since it appears to be the most reliable and the easiest to use, especially if you include embedded PostScript graphics (\texttt{.eps} files) in your source file. Simply run \LaTeX 2e on your source file, then apply \texttt{dvips} to produce a PostScript file, and finally apply \texttt{ps2pdf} to obtain a \texttt{.pdf} file.
\paragraph{The \texttt{DVIPDFM} utility} Another easy method for producing acceptable \texttt{.pdf} files is via the utility \texttt{dvipdfm}. This utility is included in distributions of Mik\TeX, which runs on Windows machines, but it probably needs to be added to your te\TeX\ distribution, if you are running \LaTeX\ on a UNIX machine. The utility and precise information about installing it on your system can be found at the web page \href{http://gaspra.kettering.edu/dvipdfm/}{\tt http://gaspra.kettering.edu/dvipdfm/}. In essence, this utility converts a \texttt{.dvi} file into a \texttt{.pdf} file. So, one can first prepare the \texttt{.dvi} file using \LaTeX, and then apply the utility \texttt{dvipdfm} to produce the needed \texttt{.pdf} file.\footnote{ \emph{Beware}! The utility \texttt{dvipdf} does \emph{not} produce acceptable \texttt{.pdf} files, and should not be used. Only \texttt{dvipdfm} should be used to produce \texttt{.pdf} files.} This utility makes inclusion of graphics particularly simple -- those that are included in the \LaTeX\ source file are simply converted to the \texttt{.pdf} format. As we note below, things are not so simple with the second alternative, which is to use pdf\LaTeX. \paragraph{pdf\LaTeX} An alternative to the first two possibilities to produce \texttt{.pdf} files is to process the source file with pdf\LaTeX. This format is available from the standard CTAN sites \href{http://www.ctan.org}{\tt http://www.ctan.org}. It appears that pdf\LaTeX\ and \texttt{hyperref} have some problems when used together. It is necessary to use pdf\LaTeX\ version 14d or later in order to minimize these issues. If your system has an earlier version (most te\TeX\ distributions have version 13d), then you can update your system by retrieving the latest version of pdf\LaTeX\ from \href{ftp://ftp.cstug.cz/pub/tex/local/cstug/thanh/pdftex/}{\tt ftp://ftp.cstug.cz/pub/tex/local/cstug/thanh/pdftex/}. Even if the recent versions are used, pdf\LaTeX\ has the same problems dealing with references embedded within the \texttt{frontmatter} section as described above for \LaTeX. But there is one aspect of pdf\LaTeX\ that creates problems. Many authors include EPS\footnote{EPS stands for \emph{encapsulated PostScript}, which affords a mechanism for including pre-prepared PostScript files within a \LaTeX\ document.} files within their papers. While this is fairly straightforward with \LaTeX, there are a couple of points to note when attempting this with pdf\LaTeX. To include a PostScript image in a \texttt{.pdf} file produced with pdf\LaTeX, you first have to convert the image to a \texttt{.pdf} file, and then it can be included using the same command sequence as above. The conversion can be accomplished most easily using Ghostscript; you can simply view the file in Ghostview and then print the image to a \texttt{.pdf} file using the \verb+pdfwriter+ option within Ghostview. The result for a standard chess board that is part of the Ghostview distribution is the following image: \centerline{\pdfximage width 4in height 4in {chess.pdf}\pdfrefximage\pdflastximage} Here as well is a copy of a color image. While pdf\LaTeX\ can handle image files in other formats, \LaTeX\ can only handle \texttt{.eps} images reliably. \centerline{\pdfximage height 3.5truein width 3truein {tigre.pdf}\pdfrefximage\pdflastximage}\ \medbreak It also should be noted that you need to have two separate source files -- one for \LaTeX\ and one for pdf\LaTeX\ -- \emph{only} in case you wish to insert graphics images in your final paper.
If your paper does not include such images, then the same source file can be formatted by both \LaTeX\ and by pdf\LaTeX.

\paragraph{Using ENTCS Macros with Mac OS X}
Of course, if your file doesn't require \texttt{.eps} or other PostScript files, then you can create the required \texttt{.pdf} file using any of the standard \TeX\ implementations for the Macintosh. If you need to include PostScript files, and if you are using \TeX Shop, then you can specify to use dvips and ghostview in processing your file, and then you can apply \texttt{ps2pdf} to create the needed \texttt{.pdf} file. Alternatively, the Mac OS X operating system is based on UNIX, so it supports the use of te\TeX\ as described above.

\section{Summary}
The ENTCS macro package is relatively easy to use and provides a uniform layout for all the papers that appear in ENTCS.
\begin{problem}
Finish your paper and get it to your Program Chairman on time!
\end{problem}
When you have finished preparing your paper, send a copy of the \emph{source file}, together with any macro files that are needed, to your Program Chairman. If the files are extensive, you can place copies in the \texttt{pub/incoming} sub-directory of the ftp directory on the machine indicated by your Program Chairman using anonymous ftp. If you do this, please send me email to alert me that the file(s) are here.

\paragraph{Assigning Volume / Issue Numbers}
One additional point worth mentioning is that ENTCS is moving to ScienceDirect, Elsevier's main platform for publishing electronic series. Because ScienceDirect must publish entire volumes at the same time, we have changed the procedure for preparing final versions so that volume numbers will not be assigned until the final versions are ready. Guest Editors will now have to receive the final version of all papers in their \emph{Proceedings} before a volume and issue number will be assigned for the \emph{Proceedings}. Even with the move to ScienceDirect, the reference scheme already used for publications in ENTCS -- \texttt{http://www.elsevier.nl/locate/entcs/} \texttt{NNnn.html} -- remains the valid way to cite papers published in ENTCS, where \texttt{NN} denotes the number of the volume, and \texttt{nn} denotes the issue number. Publications consisting of an entire volume should use \texttt{1} as the issue number.

\paragraph{Copyright Transfer Forms}
One result of the move to ScienceDirect is that the corresponding author of each paper published in ENTCS must submit a signed Copyright Transfer Form to Elsevier in order for their paper to be published. A copy of this form will be sent to each author by the Guest Editors of each volume. Details about this agreement specifying the rights of the authors and the rights of Elsevier are available at \href{http://authors.elsevier.com/PublisherInfoDetail.html?dc=AGI}{Elsevier's Author Gateway}.

\paragraph{Publication of Final Versions}
Because ScienceDirect cannot easily accommodate changes to published material, the Proceedings in its entirety must be ready before it can be published. This is one reason why the volume and issue number is not assigned until the final versions of all papers have been sent to the Guest Editors for final processing.

\section{Bibliographical references}\label{references}
ENTCS employs the \texttt{plain} style of bibliographic references in which references are listed in alphabetical order, according to the first author's last name, and are sequentially numbered. Please utilize this style.
We have a Bib\TeX\ style file, for those who wish to use it. It is the file \texttt{entcs.bst}, which is included in this package. The basic rules we have employed are the following:
\begin{itemize}
\item Authors' names should be listed in alphabetical order, with the first author's last name being the first listing, followed by the author's initials or first name, and with the other authors' names listed as \emph{first name, last name}.
\item Titles of articles in journals should be in \emph{emphasized} type.
\item Titles of books, monographs, etc.\ should be in quotations.
\item Journal names should be in plain roman type.
\item Journal volume numbers should be in boldface type, with the year of publication immediately following in roman type, and enclosed in parentheses.
\item References to URLs on the net should be ``active'' and the URL itself should be in typewriter font.
\item Articles should include page numbers.
\end{itemize}
The criteria are illustrated in the following.

\begin{thebibliography}{10}\label{bibliography}
\bibitem{cy} Civin, P., and B. Yood, \emph{Involutions on Banach algebras}, Pacific J. Math. \textbf{9} (1959), 415--436.
\bibitem{cp} Clifford, A. H., and G. B. Preston, ``The Algebraic Theory of Semigroups,'' Math. Surveys \textbf{7}, Amer. Math. Soc., Providence, R.I., 1961.
\bibitem{f} Freyd, Peter, Peter O'Hearn, John Power, Robert Tennent and Makoto Takeyama, \emph{Bireflectivity}, Electronic Notes in Theoretical Computer Science {\bf 1} (1995), URL: \href{http://www.elsevier.nl/locate/entcs/volume1.html}{\texttt{http://www.elsevier.nl/locate/entcs/volume1.html}}.
\bibitem{em2} Easdown, D., and W. D. Munn, \emph{Trace functions on inverse semigroup algebras}, U. of Glasgow, Dept. of Math., preprint 93/52.
\bibitem{r} Roscoe, A. W., ``The Theory and Practice of Concurrency,'' Prentice Hall Series in Computer Science, Prentice Hall Publishers, London, New York (1998), 565pp. With associated web site\\ \href{http://www.comlab.ox.ac.uk/oucl/publications/books/concurrency/}{\texttt{http://www.comlab.ox.ac.uk/oucl/publications/books/concurrency/}}.
\bibitem{s} Shehadah, A. A., ``Embedding theorems for semigroups with involution,'' Ph.D. thesis, Purdue University, Indiana, 1982.
\bibitem{w} Weyl, H., ``The Classical Groups,'' 2nd Ed., Princeton U. Press, Princeton, N.J., 1946.
\end{thebibliography}
\end{document}
{ "alphanum_fraction": 0.7543692142, "avg_line_length": 49.8737060041, "ext": "tex", "hexsha": "9ce23dd702e9e69157f3b416fce5ae61ddf05bc8", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2020-06-16T00:37:25.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-03T06:03:03.000Z", "max_forks_repo_head_hexsha": "c87163938857589153eb5225d0e4ac17597fd189", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "leithaus/pi4u", "max_forks_repo_path": "common/examplpdf.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "c87163938857589153eb5225d0e4ac17597fd189", "max_issues_repo_issues_event_max_datetime": "2019-08-19T22:39:58.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-06T19:01:06.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "leithaus/pi4u", "max_issues_repo_path": "common/examplpdf.tex", "max_line_length": 77, "max_stars_count": 13, "max_stars_repo_head_hexsha": "c87163938857589153eb5225d0e4ac17597fd189", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "leithaus/pi4u", "max_stars_repo_path": "common/examplpdf.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-16T00:37:17.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-12T20:35:01.000Z", "num_tokens": 6421, "size": 24089 }
\documentclass{ximera}
\input{../preamble}
\title{Centers of Mass and Centroids}
%%%%%\author{Philip T. Gressman}
\begin{document}
\begin{abstract}
We practice setting up calculations for centers of mass and centroids.
\end{abstract}
\maketitle

\section*{(Video) Calculus: Single Variable}
\textbf{Note: It is important to get as much of an intuitive sense as you can about what the double integrals (which appear around the 3:30 mark) mean, but do not worry about precisely what they represent.}
\youtube{OLeYrY4AZXk}

\section*{Online Texts}
\begin{itemize}
\item \link[OpenStax II 2.6: Centroids and Centers of Mass]{https://openstax.org/books/calculus-volume-2/pages/2-6-moments-and-centers-of-mass}
\item \link[Community Calculus 9.6: Center of Mass]{https://www.whitman.edu/mathematics/calculus_online/section09.06.html}
\end{itemize}

\section*{Examples}
\begin{example}%[CentroidQuad46]
Compute the centroid of the region bounded by the inequalities
$$0 \leq x \leq 2 \qquad \mbox{ and } \qquad {-\frac{9}{2}x^2+x} \leq y \leq {\frac{9}{2}x^2+x}.$$
\begin{itemize}
\item The term ``centroid'' refers to the geometric center of a region. Practically speaking, this means we may assume constant density (e.g., density $1$).
\item First we compute the ``mass'' of the region, which in this case is simply the area between curves:
\[ M = \int_{\answer{0}}^{\answer{2}} \left[ \left(\answer{\frac{9}{2}x^2+x}\right) - \left(\answer{-\frac{9}{2}x^2+x}\right) \right] dx = \answer{24}. \]
\item Next we compute the moments about the $y$ and $x$ axes. This always involves multiplying the integrand above by $\tilde x$ and $\tilde y$, respectively (note the reversal), where $(\tilde x,\tilde y)$ are the coordinates of the geometric center of a typical slice.
\item Using $x$ as the slicing variable, slices are \wordChoice{\choice{horizontal}\choice[correct]{vertical}} and consequently the $x$-coordinate of the geometric center of a slice is just $x$ (but note that this would be different if $y$ were the slicing variable). Thus
\[ M_y= \int_{\answer{0}}^{\answer{2}} x \left[ \answer{9 x^2} \right] dx = \answer{36}. \]
\item The $y$-coordinate of the geometric center of a slice will be the average of $y$-coordinates at the top and bottom of a slice. Therefore
\[ \tilde y = \answer{x}. \]
Thus
\[ M_x = \int_{\answer{0}}^{\answer{2}} \answer{9x^3} dx = \answer{36}. \]
\item Thus
\[ \begin{aligned} \overline{x} & = \frac{M_y}{M} = \answer{\frac{3}{2}}, \\ \overline{y} & = \frac{M_x}{M} = \answer{\frac{3}{2}}. \end{aligned} \]
\end{itemize}
\end{example}

\begin{example}%[CentroidQuad38]
Compute the centroid of the region given by ${-\frac{3}{2}y^2+2y} \leq x \leq {\frac{3}{2}y^2+2y}$ between $y = 0$ and $y = 2$.
\begin{itemize}
\item In this example, we should use $y$ as the slicing variable, so the roles of $x$ and $y$ are largely switched in comparison to the previous example.
\item First compute the mass:
\[ M = \int_{0}^{2} \left[ \left(\answer{\frac{3}{2}y^2+2y}\right) - \left(\answer{-\frac{3}{2}y^2+2y}\right) \right] dy = \answer{8}. \]
\item In this case, $\tilde y = y$ since slices are \wordChoice{\choice[correct]{horizontal}\choice{vertical}}, so
\[ M_x = \int_{0}^{2} y \left[ \answer{3 y^2} \right] dy = \answer{12}. \]
\item Likewise, $\tilde x$ is the average of $x$-coordinates of endpoints of a slice. Thus
\[ \tilde x = \answer{ 2y}.
\] Therefore \[ M_y = \int_{\answer{0}}^{\answer{2}} \left[ \answer{6y^3} \right] dy = \answer{24} \] \item To conclude, $$ \begin{aligned} \overline{x} & = \frac{M_y}{M} = \answer{3}, \\ \overline{y} & = \frac{M_x}{M} = \answer{\frac{3}{2}}. \end{aligned}$$ \end{itemize} \end{example} \end{document}
{ "alphanum_fraction": 0.6858454889, "avg_line_length": 52.1408450704, "ext": "tex", "hexsha": "04d01d231b5e2c109fb0e11742f30e624ffc4f96", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_path": "centroids/07centroidwarm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ptgressman/math104", "max_issues_repo_path": "centroids/07centroidwarm.tex", "max_line_length": 273, "max_stars_count": null, "max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ptgressman/math104", "max_stars_repo_path": "centroids/07centroidwarm.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1242, "size": 3702 }
\section*{Distributions}

\subsection{Bernoulli}
\begin{gather*}
P(X = 1) = p \\
P(X = 0) = 1 - p = q \\
E[X] = p \\
E \left[ X^2 \right] = P(X=1) \cdot 1^2 + P(X=0) \cdot 0^2 = p \cdot 1 + q \cdot 0 = p = E [X] \\
\mathcal{D}[X] = E[X^2] - (E[X])^2 = p - p^2 = p(1-p) = pq
\end{gather*}

\subsection{Binomial}
\begin{gather*}
P(X = k) = C_n^k p^k q^{n-k}, \quad k = 0,1,2,3,\ldots,n \quad p \in [0, 1], \quad q = 1-p \quad n \in \mathbb{N} \\
E[X] = np \\
\mathcal{D}[X] = np(1-p)
\end{gather*}

\subsection{Poisson}
\begin{gather*}
P(X=k) = \frac{\lambda^k e^{-\lambda}}{k!} \\
\lambda = E[X] = \mathcal{D}[X]
\end{gather*}

\subsection{Hypergeometric}
\begin{gather*}
P(X=k) = \frac{C_D^k \cdot C_{N-D}^{n-k}}{C_N^n} \\
E[X] = \frac{nD}{N} \\
\mathcal{D}[X] = \frac{n(D / N) (1 - D / N) (N - n)}{N - 1}
\end{gather*}

\subsection{Continuous Uniform}
\begin{gather*}
f_X(x) = \begin{cases} \frac{1}{b-a} & x \in [a, b] \\ 0 & x \notin [a, b] \end{cases} \\
P(X \leq x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & a \leq x \leq b \\ 1 & x \geq b \end{cases} \\
E[X] = \frac{a + b}{2} \\
\mathcal{D}[X] = \frac{(b-a)^2}{12}
\end{gather*}

\subsection{Normal}
\begin{gather*}
f(x) = \frac{1}{\sigma \sqrt{2\pi}} \cdot e^{-\frac{1}{2}\left( \frac{x-\mu}{\sigma} \right)^2} \\
E[X] = \mu \\
\mathcal{D}[X] = \sigma^2
\end{gather*}

\subsection{Exponential}
\begin{gather*}
f(x, \lambda) = \begin{cases} \lambda e^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases} \\
F(x, \lambda) = \begin{cases} 1 - e^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases} \\
E[X] = \frac{1}{\lambda} \\
\mathcal{D}[X] = \frac{1}{\lambda^2}
\end{gather*}

\subsection{Cauchy}
\begin{gather*}
F(x, x_0, \gamma) = \frac{1}{\pi} \arctan\left( \frac{x-x_0}{\gamma} \right) + \frac{1}{2}
\end{gather*}
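As a worked check of the variance stated above for the continuous uniform distribution, the derivation (added here for completeness; it follows directly from the density) is:
\begin{gather*}
E[X^2] = \int_a^b \frac{x^2}{b-a} \, dx = \frac{b^3 - a^3}{3(b-a)} = \frac{a^2 + ab + b^2}{3} \\
\mathcal{D}[X] = E[X^2] - (E[X])^2 = \frac{a^2 + ab + b^2}{3} - \frac{(a+b)^2}{4} = \frac{(b-a)^2}{12}
\end{gather*}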
{ "alphanum_fraction": 0.4849719817, "avg_line_length": 24.2345679012, "ext": "tex", "hexsha": "f2387913a02189e4062a89813bd5ecbd0642f102", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "59066a1110ea467b2aa43518da9295f23da89a32", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "aipyth/notes", "max_forks_repo_path": "cources/statistics/distributions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "59066a1110ea467b2aa43518da9295f23da89a32", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "aipyth/notes", "max_issues_repo_path": "cources/statistics/distributions.tex", "max_line_length": 102, "max_stars_count": null, "max_stars_repo_head_hexsha": "59066a1110ea467b2aa43518da9295f23da89a32", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aipyth/notes", "max_stars_repo_path": "cources/statistics/distributions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 871, "size": 1963 }
\subsection{tracemalloc -- Trace memory allocations} To be done .... %
{ "alphanum_fraction": 0.7083333333, "avg_line_length": 14.4, "ext": "tex", "hexsha": "72fa65026a9b75df2201ba2bd589ca1c5b0ca364", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "remigiusz-suwalski/programming-notes", "max_forks_repo_path": "src/python3/sections/tracemalloc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "remigiusz-suwalski/programming-notes", "max_issues_repo_path": "src/python3/sections/tracemalloc.tex", "max_line_length": 52, "max_stars_count": 1, "max_stars_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "remigiusz-suwalski/programming-notes", "max_stars_repo_path": "src/python3/sections/tracemalloc.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-28T05:03:18.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-28T05:03:18.000Z", "num_tokens": 17, "size": 72 }
\RequirePackage[l2tabu, orthodox]{nag}
\documentclass[10pt]{article}

% Useful LaTeX packages.
\usepackage{amsmath}
\usepackage[a4paper]{geometry}
%\usepackage[a4paper,left=2.8cm,right=2.8cm,top=2.5cm,bottom=2.5cm]{geometry}
\usepackage{graphicx}
\usepackage{microtype}
\usepackage{siunitx}
\usepackage{booktabs}
\usepackage[colorlinks=false, pdfborder={0 0 0}]{hyperref}
\usepackage{cleveref}
\usepackage[utf8]{inputenc} % cp1250, latin2
\usepackage[T1]{fontenc}
\usepackage{indentfirst}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{cite}
\usepackage{needspace}
\usepackage{color}
\usepackage{enumerate}
%\usepackage[MeX]{polski}
%\usepackage{tikz}
%\usetikzlibrary{arrows,shapes}

\newcommand\entry[1]{\needspace{5\baselineskip} \bigskip \bigskip \centerline{\bf #1} \bigskip \bigskip}

\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\newtheorem{claim}{Claim}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{problem}{Problem}

%\pagestyle{empty}

\title{}
\author{}
\date{}

\begin{document}
\maketitle

\section*{Outline of the (Revised) Simplex Algorithm.}

A standard form LP is:
\begin{align}
\textrm{min } & {\bf c}^T {\bf x} \label{lp:obj} \\
\textrm{st. } & {\bf A} {\bf x} = {\bf b}, \label{lp:Axeqb} \\
& {\bf x} \geq 0. \label{lp:nonneg}
\end{align}
Matrix ${\bf A}$ has $n$ columns and $m$ rows, and $n > m$. Usually the problem is originally defined with the use of both equality and inequality constraints, as well as special inequality constraints called {\em bounds}, i.e., instead of \eqref{lp:Axeqb} we have:
\begin{align}
{\bf A}^E {\bf x} = {\bf b}^E, \\
{\bf A}^L {\bf x} \leq {\bf b}^L, \label{lp:ineq}\\
{\bf l} \leq {\bf x} \leq {\bf u}. \label{lp:bounds}
\end{align}
Note that \eqref{lp:ineq}--\eqref{lp:bounds} can be transformed into constraints of type \eqref{lp:Axeqb} by extending the vector of variables ${\bf x}$ by {\em slack} variables and adding them to the lefthand side of \eqref{lp:ineq}. Also the inequalities of type ``greater than'' can be replaced by those of type ``less than'' by multiplying both sides by $-1$. Thus we assume that the problem is transformed into the standard form \eqref{lp:obj}--\eqref{lp:nonneg} before we run Simplex.

\medskip
{\bf Terminology}

A {\em basis} $B$ is an ordered set of $m$ linearly independent columns. By ${\bf A}_B$ we denote the matrix consisting of columns in $B$, and by ${\bf c}_B$ we denote the vector of coefficients restricted to the elements of ${\bf c}$ that correspond to the indices of columns in ${B}$. We write ${\bf A}_N$ to denote the matrix consisting of non-basic columns, and, similarly, ${\bf c}_N$ to denote the vector of non-basic coefficients.

A {\em bfs} (basic feasible solution) is a vector ${\bf x} = ({\bf x}_B, {\bf x}_N)$, where ${\bf x}_N$ is a zero vector and ${\bf x}_B$ is a solution of the system ${\bf A}_B {\bf x}_B = {\bf b}$ that satisfies ${\bf x}_B \geq 0$. Such a vector corresponds to a vertex of the polytope defined by the set of LP constraints. A bfs is {\em degenerate} if ${\bf x}_B$ contains zero. This means that the vertex corresponding to this bfs also can be obtained by taking another bfs (i.e., the same vertex is obtainable as an intersection of different choices of polytope facets).

\medskip
{\bf Phase I}

The purpose of Phase I is to determine an initial {\em bfs}.
There are two cases:
\begin{enumerate}
\item Initially, all constraints were inequalities of the form \eqref{lp:ineq}, with ${\bf b}^L \geq 0$. Then we have introduced $m$ slack variables when transforming them into equality constraints. Such slack variables also define a basis, which clearly is feasible. We proceed directly to Phase II from this point.
\item No initial basis is known, but we can add $m$ artificial variables that would form a basis.
\end{enumerate}
Assuming that the second case occurs, we add to each constraint an artificial variable $x^{a}_i$, $i = 1, \ldots, m$, and solve an auxiliary LP with an objective function:
$$ \sum_{i=1}^m x^{a}_i, $$
and the set of constraints:
$$ {\bf A} {\bf x} + {\bf I} {\bf x}^a = {\bf b}, \;\;\; {\bf x}, {\bf x}^a \geq 0. $$
The simplex method itself can be used for solving this LP with initial bfs ${\bf x}^a_B = {\bf b}$. If the auxiliary LP has optimal value greater than zero, then the original LP \eqref{lp:obj}--\eqref{lp:nonneg} is infeasible, and we stop at this point. If the original LP is feasible, then the auxiliary LP has optimal value $0$. The basis corresponding to the solution of the auxiliary LP should now consist entirely of non-artificial variables, and can be used as an initial basis in Phase II. If, otherwise, some artificial variable remains in the basis corresponding to the optimal solution of value $0$, then this variable can be moved out from the basis, and replaced by a variable corresponding to any nonzero element in the tableau row corresponding to the artificial variable. If there are no nonzero elements in that row, then the original matrix ${\bf A}$ does not have full rank, and the redundant row must be removed.

\medskip
{\bf Phase II}

In this phase we solve the original LP \eqref{lp:obj}--\eqref{lp:nonneg}, given an initial bfs, obtained in Phase I.

\medskip
{\bf The Algorithm}

\begin{enumerate}
\item Given a basis $B$, construct the $m$-by-$m$ matrix ${\bf A}_B$.
\item Compute ${\bf x} = {\bf A}_B^{-1} {\bf b}$.
\item Compute ${\bf y} = ({\bf A}_B^T)^{-1} {\bf c}_B$.
\item Compute ${\bf s} = {\bf c}_N - {\bf A}_N^T {\bf y}$.
\item If ${\bf s} \geq 0$ then STOP. Return the optimal solution $({\bf x}_B, {\bf x}_N = {\bf 0})$.
\item Select entering index $j$, such that ${\bf s}_j < 0$ and $j$ is the smallest.
\item Compute ${\bf d} = {\bf A}_B^{-1} {\bf A}_j$.
\item If ${\bf d} \leq 0$ then STOP. Problem is unbounded.
\item Select leaving index $i$, to be the smallest one in the set $\min \{ \frac{x_i}{d_i}, \; d_i > 0 \}$.
\item Let $B \leftarrow B \setminus \{ i \} \cup \{ j \}$. Go to step 1.
\end{enumerate}
The rules for selecting entering and leaving indices are called {\em Bland's rules}. (A small Python sketch of this iteration is given below, after the bounded-variable formulation.)

\section*{Simplex method for problems with bounded variables}

Let us assume that variables in the problem are bounded, i.e.,
$$ \forall_{j=1,\ldots,n} \;\; L_j \leq x_j \leq U_j. $$
By substituting $x_j' = x_j - L_j$ we can, without the loss of generality, consider the following formulation:
\begin{align}
\textrm{min } & {\bf c}^T {\bf x} \label{lp:obj_sub} \\
\textrm{st. } & {\bf A} {\bf x} = {\bf b}, \label{lp:Axeqb_sub} \\
& 0 \leq {\bf x} \leq {\bf u}. \label{lp:bounds_sub}
\end{align}
where ${\bf u} = [U_1 - L_1, U_2 - L_2, \ldots, U_n - L_n]^T$, and we use ${\bf x}$ instead of ${\bf x}'$ for brevity (note that we also omit the constant term from the objective function, as the optimal solution remains the same).
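To make the ten steps of the algorithm above concrete, here is a minimal Python sketch of a single iteration of the revised simplex method with Bland's rules (added purely as an illustration; it assumes dense \texttt{numpy} linear algebra and ignores numerical tolerances, so it is a sketch rather than a usable solver):
\begin{verbatim}
import numpy as np

def revised_simplex_iteration(A, b, c, B):
    """One iteration of the revised simplex method (Bland's rules).

    A, b, c: numpy data of a standard-form LP (min c'x, Ax = b, x >= 0).
    B: list of m basic column indices.
    Returns ("optimal", x_B), ("unbounded", None) or ("pivot", new_B).
    """
    n = A.shape[1]
    N = [j for j in range(n) if j not in B]    # non-basic indices, ascending
    A_B = A[:, B]
    x_B = np.linalg.solve(A_B, b)              # step 2
    y = np.linalg.solve(A_B.T, c[B])           # step 3
    s = c[N] - A[:, N].T @ y                   # step 4: reduced costs
    if np.all(s >= 0):                         # step 5: optimality test
        return "optimal", x_B
    j = next(N[k] for k in range(len(N)) if s[k] < 0)  # step 6: entering index
    d = np.linalg.solve(A_B, A[:, j])          # step 7
    if np.all(d <= 0):                         # step 8: unboundedness test
        return "unbounded", None
    ratios = [(x_B[k] / d[k], B[k]) for k in range(len(B)) if d[k] > 0]
    t = min(r for r, _ in ratios)              # step 9: ratio test ...
    i = min(idx for r, idx in ratios if r == t)  # ... smallest leaving index
    new_B = sorted(set(B) - {i} | {j})         # step 10: pivot
    return "pivot", new_B
\end{verbatim}
A complete Phase II solver would simply call this routine in a loop until it reports an optimal or unbounded status, starting from the basis produced by Phase I.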
Instead of adding slack variables to singleton upper-bound constraints and appending them to the matrix ${\bf A}$, we consider this special type of constraints separately. We use the notion of {\it extended basic feasible solution}. In such a solution, all basic variables assume values
$$0 < x_j < U_j,$$
while all nonbasic variables assume
$$ x_j = 0 \;\; \textrm{ or } \;\; x_j = U_j.$$

\medskip
{\bf Theorem.} If every nonbasic variable at its lower bound has a nonnegative objective coefficient, and every nonbasic variable at its upper bound has a nonpositive objective coefficient, then the extended basic feasible solution minimizes the objective function over the feasible region. \qed
\medskip

If the objective coefficient $\hat{c}_j$ of a nonbasic variable $x_j = 0$ is negative, we may increase $x_j$. If the objective coefficient $\hat{c}_j$ of a nonbasic variable $x_j = U_j$ is positive, we may decrease $x_j$. In either case, the value of the solution improves. To simplify the computations, whenever a nonbasic variable assumes $x_j=U_j$, we apply the {\it upper-bound substitution}:
$$ x_j = U_j - x_j'. $$
This substitution affects the objective function coefficient $c_j$, as well as $a_{ij}$ and $b_i$ of each constraint that involves $x_j$. After applying it, all nonbasic variables have $x_j = 0$ or $x_j' = 0$. Thus we may use the same condition for selecting the entering variable $x_s$ as in the original simplex method:
\begin{quote}
Select entering index $j$, such that ${\bf s}_j < 0$ and $j$ is the smallest.
\end{quote}
To determine the leaving variable we need to take into consideration 3 cases that may happen when the entering variable $x_s$ is increased from 0. Either $x_s$ reaches its upper bound $U_s$, some basic variable $x_i$ decreases to 0, or some basic variable $x_i$ increases to its upper bound $U_i$. Thus we compute the following:
$$ t_1 = \min_i \{ \frac{x_i}{d_i} : \; d_i > 0 \}, $$
$$ t_2 = \min_i \{ \frac{U_i - x_i}{-d_i} : \; d_i < 0 \}, $$
$$ \theta = \min \left\{ U_s, \; t_1, \; t_2 \right\}. $$
\begin{enumerate}[i)]
\item If $\theta = \infty$ then the problem is unbounded.
\item If $\theta = U_s$ then we apply the upper-bounding substitution to the entering variable $x_s$ (the basis remains unchanged).
\item If $\theta = t_1$ then we perform the usual simplex pivot to introduce $x_s$ into the basis.
\item If $\theta = t_2$ then we first apply the upper-bounding substitution to the basic variable attaining the minimum in $t_2$, and then perform a simplex pivot to introduce $x_s$ into the basis.
\end{enumerate}
When the optimal solution is found (no increase in any variable can lead to an improvement of the objective function) we must reverse all upper-bounding substitutions to retrieve the actual solution.
\end{document}
{ "alphanum_fraction": 0.7008495146, "avg_line_length": 51.2331606218, "ext": "tex", "hexsha": "f979d151dd05939b30cadb2408f17fe9a2861419", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "da6130aea2405b9b211f7bacbd82af9001716e66", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "maciejdrwal/simplex", "max_forks_repo_path": "doc/doc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "da6130aea2405b9b211f7bacbd82af9001716e66", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "maciejdrwal/simplex", "max_issues_repo_path": "doc/doc.tex", "max_line_length": 674, "max_stars_count": 3, "max_stars_repo_head_hexsha": "da6130aea2405b9b211f7bacbd82af9001716e66", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "maciejdrwal/simplex", "max_stars_repo_path": "doc/doc.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-16T22:26:25.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-20T22:27:47.000Z", "num_tokens": 2969, "size": 9888 }
%*******************************************************************************
%****************************** Fifth Chapter *********************************
%*******************************************************************************
\chapter{Assessing LISP interworking performance through RIPE Atlas}
\label{cha:pxtr}

% **************************** Define Graphics Path **************************
\ifpdf
\graphicspath{{Chapter6/Pics/Raster/}{Chapter6/Pics/PDF/}{Chapter6/}}
\else
\graphicspath{{Chapter6/Pics/Vector/}{Chapter6/}}
\fi

%-< ABSTRACT >--------------------------------------------------------------------
% LISP is currently under standardization in IETF and is deployed in the wild at the same time, thanks to two testbeds: the LISP Beta Network and the LISP-Lab project.
Thanks to two testbeds, the LISP Beta Network and the LISP-Lab project, LISP is gradually being deployed in the wild. The interworking mechanism has been proposed to ensure the communication between LISP-speaking sites and the legacy Internet.
% The performance of LISP interworking with legacy Internet is of paramount importance to promote the adoption of LISP in future Internet.
Although LISP has been evaluated in terms of scalability~\cite{lispCCR}, stability~\cite{yue2016stability}, LISP evolution~\cite{li2017lisp}, and the delay of resolving the bindings between EIDs and RLOCs (\cite{lispCCR}, \cite{coras2014performance}), LISP has never been analyzed from the point of view of its interworking with the legacy Internet at large scale. This dissertation fills this gap by providing a latency evaluation and a routing path measurement of the LISP interworking mechanism. The work is based on two experiments (one conducted in 2015, the other in 2016) using RIPE Atlas, which is the largest existing Internet measurement infrastructure for both IPv4 and IPv6.
% Experimental results show that LISP introduces additional latency, especially for close destinations, but negligible for intercontinental long-distance destinations. The additional latency highly depends on the position of the new network element being in charge of the communication between LISP-sites and the legacy Internet.
% Although LISP introduces some stretch, its performance is generally stable and its latency is not very high compared to natively forwarding without using LISP.
The experimental results confirm that the use of proxies to connect LISP and non-LISP sites introduces negative effects, which are significant for nearby destinations but can be ignored for intercontinental long-distance destinations. They also show that the selection of the proxy location is very important, since having proxies close either to the sources or to the destinations can greatly reduce the negative stretch. Although LISP introduces some overhead, its performance is quite stable for IPv4. However, the same conclusion does not hold for IPv6. We also observed that the interworking performance of the LISP-Lab platform is more reliable than that of the LISP Beta Network.

In the remainder, Sec.~\ref{subsec:atlas} and Sec.~\ref{subsec:alexa} introduce the resources on which our experiments rely. Sec.~\ref{sec:pxtr_methodology} describes the methodology we used to conduct the experiments. Sec.~\ref{sec:pxtr_ping_v4_2015} and Sec.~\ref{sec:pxtr_ping_v4_2016} respectively present the IPv4 ping results obtained in 2015 and 2016. Sec.~\ref{sec:pxtr_ping_v6} shows the IPv6 ping results and Sec.~\ref{sec:pxtr_traceroute} presents the traceroute results for both IPv4 and IPv6.
% Finally, Sec.~\ref{sec:pxtr_conclusion} concludes the chapter.
%-< ABSTRACT >--------------------------------------------------------------------

%\section{Introduction}
%\label{sec:pxtr_intro}
%To improve LISP and promote the deployment of LISP, implementations and large scale flexible experimental platforms are indispensable. At the moment of this writing, two LISP platforms are deployed world-wide. One is the experimental LISP Beta Network testbed~\cite{lispbeta} deployed in 2008, and the other one is LISP-Lab platform~\cite{lisplab} open to external experimenters since 2015. Both of them have all LISP-required network entities and have been inter-connected. Although LISP has been evaluated in terms of scalability~\cite{lispCCR}, stability~\cite{yue2016stability}, LISP evolution~\cite{li2017lisp}, and delay resolving the bindings between EIDs and RLOCs (\cite{lispCCR}, \cite{coras2014performance}), LISP is never analyzed from the aspect of the interworking with the legacy Internet at large scale. The performance of such interoperation is very important since it determines the deployment speed of LISP networks~\cite{feng2017locator}. Hence, it is necessary to assess how such platforms integrate with the legacy Internet, evaluate the performance, and offer realistic experience to improve the testbeds themselves and provide hints to move the LISP technology forward.
%
%We conduct two experiments: one spans 6 hours in 2015 and the other one lasts 15 days in 2016. The results confirm that the use of proxies to connect LISP and non-LISP sites introduce negative effects, which are important for the nearby destinations but can be ignored for the intercontinental long-distance destinations. It also shows that the selection of the proxy location is very important, since having them close either to the sources or to the destinations can decrease a lot the negative stretch. Although LISP introduces some overhead, the performance is quite stable for IPv4. However, the same conclusion does not hold for IPv6. Furthermore, we observe that the interworking performance of LISP-Lab is more reliable than LISP Beta Network.

%-< SECTION >--------------------------------------------------------------------
\section{Experiment resources}
\subsection{RIPE Atlas}
\label{subsec:atlas}

% \begin{itemize}[noitemsep,topsep=0pt]
% \item RIPE Atlas~\cite{atlas} is the largest Internet measurement infrastructure.
% \item Consists of probes and anchors.
% \item Measure the state of Internet in real time through a set of tools.
% \item Probes and anchors have built-in measurements.
% \item User can define the measurements through its provided API.
% \end{itemize}

%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/Atlas_probes_deployment.eps}
\caption{Deployment of probes on RIPE Atlas in 2017}
\label{Atlas_probes_deployment}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------

RIPE Atlas~\cite{atlas} is the largest Internet measurement infrastructure, consisting of a global network of more than 9000 probes all over the world that measure Internet connectivity and reachability, and provide an understanding of the state of the Internet in real time. The deployment of the worldwide probes is shown in Fig.~\ref{Atlas_probes_deployment}.
Since early 2013, the probe hardware has been a modified TP-Link wireless router (model TL-MR 3020) with a small USB thumb drive in it; this probe does not support WiFi. Atlas also has more than 200 worldwide anchors, which are enhanced probes offering more processing capacity and sufficient bandwidth to support a larger number of measurements. Thus, the anchors are normally more stable and can be used as a reference. Whenever probes and anchors are set up, they automatically start executing a set of pre-defined measurements, called built-in measurements. The set of built-in measurements contains \emph{ping}, \emph{traceroute}, \emph{DNS}, \emph{SSL} and some \emph{HTTP} requests, mostly towards well-known targets such as DNS root servers, but also towards some of the RIPE Atlas infrastructure components. Besides the built-in measurements, the probes and anchors also support user-defined active measurements of the same measurement types. Volunteers all over the world host these small hardware devices to actively measure Internet performance. To avoid malicious attacks, credits are required to launch experiments, and the amount of required credits varies according to the experiment type. In addition, the maximum allowed number of measurements towards the same target is 10.

RIPE Atlas provides a set of RESTful APIs with which experiment campaign parameters (e.g., type of measurement, query source and destination, duration and interval of the experiment, etc.) are passed to probes and measurement traces can be retrieved. Based on the available API, one can schedule experiment campaigns in an automatic manner.

\subsection{Alexa}
\label{subsec:alexa}

Alexa~\cite{alexa}~\cite{alexatop} is a website created by Amazon which provides commercial web traffic data, global rankings, and other information on 30 million websites. Analytics such as website rankings are based on the data collected by a toolbar developed by Alexa and installed within users' web browsers. The toolbar provides functions such as a popup blocker, a search engine, etc. In early 2015, Alexa stated that there had been 10 million downloads of the toolbar. Alexa ranks sites primarily by tracking a sample set of Internet traffic -- users of its toolbar for the Internet Explorer, Firefox and Google Chrome web browsers. Due to its huge sampling space, the website ranking that it publishes is widely used to evaluate the popularity of websites.
%-< SECTION >--------------------------------------------------------------------
\subsection{Dataset 2015}
\label{sec:pxtr_meth_2015}

% \begin{itemize}[noitemsep,topsep=0pt]
% \item 4 probes ping to 50 IPv4 destinations
% \item Interval: 10 minutes
% \item Duration: 6 hours (on November 6\textsuperscript{th} 2015)
% \end{itemize}

As the purpose of our experiment campaign is to gain a full understanding of the LISP interworking performance with the legacy Internet, we deployed a probe (RIPE Atlas probe $\#22341$) with both an IPv4 and an IPv6 address on the LISP-Lab platform, inside an academic institute in Paris, France, to conduct the LISP-enabled active measurements. For IPv4, it uses both the PETR and the PITR of LISP-Lab to communicate with the legacy Internet, and the connection between the probe's xTR and the PxTR is via an MPLS VPN. However, the MPLS tunnel did not support IPv6 at that time. Thus, we configured the ITR of the LISP-Lab probe to natively forward packets towards the Internet core for IPv6 targets (i.e., the PETR is not used for IPv6). As a result, the IPv6 packets outgoing from this probe are not encapsulated into LISP packets and are natively forwarded in the traditional way, but the return packets still pass through the PITR of the LISP-Lab platform.

%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Different configurations of probes in 2015}
\label{Probes_config_2015}{
\resizebox{0.6\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c@{}}
\hline\hline
Name & using LISP & network type & probe/anchor \\
\hline
LISP-Lab & yes & academic & probe\\
\hline
mPlane & no & academic & probe \\
\hline
rmd & no & industrial & probe \\
\hline
FranceIX & no & industrial & anchor \\
\hline \hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------

The mPlane probe (\#13842) resides in an academic network and uses conventional routing; it allows a comparison with non-LISP academic networks. The rmd probe (\#16958) resides in an industrial network and also uses conventional routing; it is chosen in order to compare with non-LISP and non-academic networks. Further, a stable probe with much more measurement capacity is necessary as a reference for all other probes. To this end, the only anchor in Paris, named FranceIX (\#6118), is selected. It resides in an IXP (Internet Exchange Point) network and does not use LISP. Thus, in this experiment, there are in total 4 probes used as sources to conduct the active measurements. Tab.~\ref{Probes_config_2015} summarizes the different probes.

We are now in a setup phase, where we start with a reduced number of destinations so as to first set up the automated experiments. In this first experiment, the selected 4 probes ping the top 50 Alexa sites every 10 minutes during 6 hours. However, from the experiment we find that 14 websites resolve to the same IPv4 addresses. We filtered them out, so the assessment presented in Sec.~\ref{sec:pxtr_ping_v4_2015} is based on the results of the 36 remaining top popular websites. With the collected dataset, we evaluate the various performance aspects based on the \acrshort{rtt}.
%-< SECTION >--------------------------------------------------------------------
\subsection{Dataset 2016}
\label{sec:pxtr_meth_2016}

% \begin{itemize}[noitemsep,topsep=0pt]
% \item 5 probes ping/traceroute to 500 IPv4 and 122 IPv6 destinations
% \item Interval: 30 minutes for ping, 60 minutes for traceroute
% \item Duration: 15 days (from December 15\textsuperscript{th} to 29\textsuperscript{th} 2016)
% \end{itemize}

The new experiment still leverages the LISP-Lab probe and the RIPE Atlas infrastructure, but provides the following new contributions:
\begin{itemize}[noitemsep,topsep=0pt]
\item Add another LISP probe, connected to the LISP Beta Network, as an experimental source, in order to compare the LISP Beta Network and LISP-Lab.
\item Enlarge the number of destinations to the 500 most popular websites of the Alexa ranking~\cite{alexa}.
\item Consider the performance of IPv6 besides IPv4.
\item Use \emph{traceroute}, in complement to latency measurements, in order to have a deeper understanding of the observed behavior.
\item Extend the experimental span to 15 days so as to study the potential periodicity of traffic.
\end{itemize}

%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/Probes_loc.eps}
\caption{Locations of probes and anchor in 2016}
\label{Probes_location_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------

%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Location of LISP Beta Network PxTRs in 2016}
\label{PxTR_loc_2016}{
\resizebox{0.35\textwidth}{!}{%
\begin{tabular}{@{}c|c|c@{}}
\hline\hline
Number & Continent & Country \\
\hline
1 & Europe & Netherlands \\
\hline
2 & Europe & Denmark \\
\hline
3 & Europe & Norway \\
\hline
4 & America & US \\
\hline
5 & America & US \\
\hline
6 & Asia & Japan \\
\hline \hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------

%
In order to ensure comprehensiveness and accuracy, we need some other probes, geographically close to each other and having both an IPv4 and an IPv6 address, as comparative items. Ideally, they are connected through both (non-)academic network providers and (non-)LISP platforms. Thus, we selected 4 other probes located in Paris; their locations are shown in Fig.~\ref{Probes_location_2016}. The newly added LIP6 probe ($\#2403$) also connects to an academic network behind both the LISP Beta Network and the LISP-Lab platform. When this probe communicates with hosts in the legacy Internet, it uses the PETR of LISP-Lab by configuration, while the return traffic goes through one of the 6 PITRs of the LISP Beta Network according to the BGP behavior. Depending on the BGP announcement seen by the legacy host, a different PITR is used. The precise locations of the PITRs of the LISP Beta Network are shown in Tab.~\ref{PxTR_loc_2016}: 3 are in Europe (Netherlands, Denmark and Norway), 2 are in the US, and 1 is in Asia (Japan). Both LISP probes have an MPLS tunnel from their ITRs to the PETR at Lyon for IPv4, while for IPv6, the ITR of the LIP6 probe is configured to use normal BGP routing to reach the PETR in Lyon (i.e., the PETR is used but without the MPLS VPN). Due to the use of the MPLS tunnel, IPv4 packets experience a shorter path compared to the IPv6 BGP-based one. Thus, theoretically, the IPv4 packets sent from the LIP6 probe should arrive at the PETR faster than the IPv6 ones.
The configurations of the two LISP probes are listed in Tab.~\ref{LISP_config}.

%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Configuration of two LISP probes in 2016}
\label{LISP_config}{
\resizebox{0.6\textwidth}{!}{%
\begin{tabular}{@{}c|c|c@{}}
\hline\hline
Probe & PETR in Lyon & PITR \\
\hline
LISP-Lab (IPv4) & Via MPLS Tunnel & Lyon \\
 &  & (Via MPLS Tunnel) \\
\hline
LIP6 (IPv4) & Via MPLS Tunnel & LISP Beta Network \\
\hline
LISP-Lab (IPv6) & not used & Lyon \\
\hline
LIP6 (IPv6) & Via BGP Routing & LISP Beta Network \\
\hline \hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------

Since the rmd probe is not configured with an IPv6 address, we replaced it with the Gandi probe (\#3141), which also resides in an industrial network and uses conventional routing for both IPv4 and IPv6.
% Although it is a little further to the LISP-Lab probe than the rmd probe, it is the nearest probe having both IPv4 and IPv6.
As the mPlane probe and the FranceIX anchor satisfy all the requirements of this experiment, we keep them as references. The locations of all the probes and of the anchor are shown in Fig.~\ref{Probes_location_2016}. Tab.~\ref{Probes_config_2016} shows the different configurations of the probes.

%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Different configurations of probes in 2016}
\label{Probes_config_2016}{
\resizebox{0.6\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c@{}}
\hline\hline
Name & using LISP & network type & probe/anchor \\
\hline
LISP-Lab & yes & academic & probe\\
\hline
LIP6 & yes & academic & probe \\
\hline
mPlane & no & academic & probe \\
\hline
Gandi & no & industrial & probe \\
\hline
FranceIX & no & industrial & anchor \\
\hline \hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------

What we care about are the delay performance and the path of LISP interworking with the legacy Internet. Thus, we rely on the ping and traceroute tools provided by RIPE Atlas. As destinations of our measurements, we selected the 500 most popular websites (according to the worldwide website ranking provided by Alexa~\cite{alexa}) that reply to IPv4 ping and traceroute. Among them, 122 websites are configured with an IPv6 address. Thus, in our experiment, 500 IPv4 and 122 IPv6 addresses are the destinations of ping and traceroute.

We developed a Python script to schedule the experiment campaign by using the API provided by RIPE Atlas. The parameters of the experiment campaign are as follows: the campaign lasts 15 days (from December 15\textsuperscript{th} to 29\textsuperscript{th} 2016); the 5 chosen probes ping and traceroute the 500 IPv4 and 122 IPv6 addresses; the intervals of the ping and traceroute measurements are respectively 30 minutes and 60 minutes. We want to evaluate the LISP interworking performance as frequently as possible, but each probe sequentially launches 622 traceroute measurements, which last more than 30 minutes. To avoid a heavy traffic burden and to guarantee that all the measurements in the same experimental round can be finished before the next round, we set the sampling interval of traceroute to 60 minutes.
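As an illustration of how such scheduling can be automated, the following minimal Python sketch creates one recurring ping measurement through the RIPE Atlas REST interface (the endpoint and payload follow the public v2 API as we understand it; the API key, target and probe identifier are placeholders, and the exact field names should be double-checked against the current RIPE Atlas documentation):
\begin{verbatim}
import requests

ATLAS_API_KEY = "YOUR-API-KEY"          # placeholder key created on atlas.ripe.net
ATLAS_URL = "https://atlas.ripe.net/api/v2/measurements/"

payload = {
    "definitions": [{
        "type": "ping",
        "af": 4,
        "target": "www.example.com",    # one of the selected destinations
        "description": "LISP interworking ping",
        "interval": 1800,               # 30-minute sampling interval
    }],
    "probes": [{
        "type": "probes",
        "value": "22341",               # e.g. the LISP-Lab probe
        "requested": 1,
    }],
    "is_oneoff": False,
}

response = requests.post(ATLAS_URL, params={"key": ATLAS_API_KEY}, json=payload)
print(response.status_code, response.json())
\end{verbatim}
Traceroute campaigns can be scheduled in the same way, by using a \texttt{traceroute} definition and a 3600-second interval.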
In summary, the results presented in this chapter come from the following 4 experiment campaigns:
\begin{itemize}[noitemsep,topsep=0pt]
\item 5 probes ping to 500 destinations during 15 days with an interval of 30 minutes (IPv4).
\item 5 probes ping to 122 destinations during 15 days with an interval of 30 minutes (IPv6).
\item 5 probes traceroute to 500 destinations during 15 days with an interval of 60 minutes (IPv4).
\item 5 probes traceroute to 122 destinations during 15 days with an interval of 60 minutes (IPv6).
\end{itemize}

From the experiment, we find that some websites actually resolve to the same IPv4 (34 out of 500) or IPv6 (47 out of 122) addresses. Further, for some destinations, at least one probe (mainly the LIP6 probe) does not get any \emph{ping} response during the whole experiment (42 for IPv4 and 10 for IPv6). In this chapter, we filtered out the anycast destinations and only consider the responding destinations. Thus, after cleaning the traces for the above reasons, all the results of the \emph{ping} measurements used in the following subsections rely on the 5 chosen probes as experimental sources and on 424 IPv4 and 65 IPv6 responding addresses as destinations. Differently from the ping measurements, for the \emph{traceroute} dataset all the successful \emph{traceroute} responses from each probe are kept for further analysis.

%-< SECTION >--------------------------------------------------------------------
\section{IPv4 Ping results from Dataset 2015}
\label{sec:pxtr_ping_v4_2015}

% \begin{itemize}[noitemsep,topsep=0pt]
% \item CDF of average RTT between different probes
% \item Correlation coefficient to FranceIX
% \item Smallest mean and median RTT grouped by continent
% \item Relative mean and median RTT clustered by continent
% \end{itemize}

%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/2015/CDF_RTT_avg.eps}
\caption{CDF of average RTT between different probes from Dataset 2015}
\label{CDF_RTT_avg_v4_2015}
\end{figure}
%-< END FIGURE >-----------------------------------------------------------------

Fig.~\ref{CDF_RTT_avg_v4_2015} shows the cumulative distribution function (CDF) of the average RTTs towards the selected 50 targets. Since the experimental destinations are located all over the world, the range of observed RTTs varies from a few milliseconds (ms) to nearly five hundred ms. When the RTT values are in the small range, especially when the RTT is less than 50 ms, the latency from the LISP-Lab probe is always higher than that of the three others and the difference is mainly around 10 ms. This latency difference is caused by the stretch introduced by the proxy technology (cf. Sec.~\ref{sec:background_Interworking}), since every packet sent between the LISP-Lab probe and the legacy Internet has to pass through the LISP-Lab PxTR, which is located in Lyon (approximately $400$~km away from the probe). As the RTT increases, the difference (surprisingly) decreases. When RTTs around 200 ms are reached, all probes show basically the same RTT. Around such RTT values, the destinations are concentrated in North America. Going further into the high range of RTT values, i.e., more than 350 ms, when destinations are mainly located in Asia, the probe in the LISP-Lab domain actually shows the lowest RTT.
Thus, the network connection from the LISP-Lab PxTR to the (Asian) intercontinental destinations has better performance, and the stretch delay from the LISP-Lab probe to the PxTR can be ignored. We wanted to use traceroute to further investigate the reasons, but it could not be natively used at that time, since the LISP encapsulation prevents it from working correctly. We explored a new way to find out what happens for Dataset 2016, which is described in Sec.~\ref{sec:pxtr_traceroute}.

We then quantified the percentage of times that one probe's RTT is the smallest compared to the three others, since the destinations that the different probes contact are exactly the same. The result shows that in $52.8\%$ of the cases, FranceIX shows the smallest RTT to reach the destinations. It is normal that its RTTs are the smallest, since FranceIX is an Internet Exchange Point (IXP), hence well connected, and also acts as one of the anchors of RIPE Atlas, thus with more powerful hardware. In contrast, only in $5.6\%$ of the cases does the LISP-Lab probe have the smallest RTT. In this small percentage the main contribution comes from intercontinental destinations, i.e., when the RTT values belong to the high range.

\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=.9\linewidth]{Pics/2015/4_probes_to_alexa_top50_proportion_mean_bar_geo.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=.9\linewidth]{Pics/2015/4_probes_to_alexa_top50_proportion_median_bar_geo.eps}
\end{center}
\end{minipage}
\caption{Smallest mean RTT (left) and smallest median RTT (right) grouped by continent from Dataset 2015.}
\label{4_probes_to_alexa_top50_proportion_bar_geo}
\end{figure}

Fig.~\ref{4_probes_to_alexa_top50_proportion_bar_geo} depicts the percentage of times that one probe's RTT is the smallest compared to the three other probes, grouped by the continents where the selected targets are located. When the destinations are in Europe and America, FranceIX is the fastest most of the time.
% This is reasonable, since FranceIX is an Internet Exchange Point (IXP), acting as Atlas' anchor, thus it is well connected with a more powerful hardware.
In contrast, the percentage for LISP-Lab is always 0. Its higher RTT is caused by the proxy stretch.
%, since traffic between LISP-Lab probe and the legacy Internet has to pass through the LISP-Lab PxTR, which is in Lyon (approx. $400km$ away from the probe).
When the targets are in Asia, LISP-Lab becomes the fastest with a percentage of $20\%$ (mean RTT) and $10\%$ (median RTT). This indicates that such a connection from the LISP-Lab PxTR is faster, so that the stretch can be ignored. The performance of FranceIX is not very stable towards the Asian destinations, being the fastest in $0\%$ of the cases on average, but in $10\%$ when looking at the median RTT. This shows that FranceIX sometimes has extremely high RTT values to Asian destinations.
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=.9\linewidth]{Pics/2015/4_probes_to_alexa_top50_diff_rtt_LISP-Lab_FranceIX_mean_geo.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=.9\linewidth]{Pics/2015/4_probes_to_alexa_top50_diff_rtt_LISP-Lab_FranceIX_median_geo.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{Relative mean (left) and median (right) RTT clustered by different continents from Dataset 2015.}
\label{4_probes_to_alexa_top50_diff_rtt_LISP-Lab_FranceIX_geo}
\end{figure}

We also evaluate whether the performance of LISP-Lab is as stable as that of FranceIX.
% We define the relative performance for each destination as: $RTT_{LISP-Lab}$ - $RTT_{FranceIX}$, and the results clustered by continents are shown in Fig.~\ref{4_probes_to_alexa_top50_diff_rtt_LISP-Lab_FranceIX_geo}.
We define a metric called \emph{Relative RTT (rRTT)} for each destination as:
\begin{equation}
\label{rRTT_ll_2015}
rRTT_{LL}(d)=RTT_{LL}(d) - RTT_{F}(d)
\end{equation}
where $d$ is the destination and the subscripts $LL$ and $F$ respectively refer to LISP-Lab and FranceIX. The results clustered by continent are shown in Fig.~\ref{4_probes_to_alexa_top50_diff_rtt_LISP-Lab_FranceIX_geo}. The left-hand plot shows the mean RTT, while the right-hand one shows the median RTT. For the European and American targets, LISP-Lab is a little slower than FranceIX but with a stable behavior. On the contrary, for half of the Asian destinations, LISP-Lab is significantly faster than FranceIX. This shows that the network connection between the LISP-Lab PxTR and the Asian destinations has better performance. Comparing the two subfigures, there are no negative values at all for Europe and America in the left-hand figure, but there are some in the right-hand one. This indicates that the LISP-Lab RTTs to these destinations are very unstable and that their variance is quite high.

%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Correlation coefficient to FranceIX from Dataset 2015}
\label{correlation_v4_2015}{
\resizebox{0.55\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c@{}}
\hline\hline
 & LISP-Lab & mPlane & rmd & FranceIX \\
\hline
Coefficient & 0.9733 & 0.9784 & 0.9646 & 1.0 \\
\hline\hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------

Since FranceIX shows the best RTT most of the time, we evaluated the correlation between the RTTs of the other 3 probes and those of FranceIX. The purpose is to see whether the RTT measurements are correlated or totally independent. We compute the correlation coefficient between the RTT series of each probe towards all destinations and the corresponding series of FranceIX. Tab.~\ref{correlation_v4_2015} shows the results. The absolute value of the coefficient is 1 in the case of a perfect direct linear relationship, and 0 in the case of no correlation at all. The correlation coefficient of LISP-Lab is 0.9733 ($> 0.8$), showing a very high correlation with FranceIX, like the other 2 probes. This means that LISP, while certainly introducing some overhead, has quite stable performance that does not deviate from normal network operation.
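For reproducibility, both the relative RTT of Eq.~(\ref{rRTT_ll_2015}) and the correlation coefficients of Tab.~\ref{correlation_v4_2015} can be computed with a few lines of Python. The sketch below is illustrative only: it assumes that the per-destination RTT series of each probe have already been extracted from the RIPE Atlas JSON results into \texttt{numpy} arrays, and it aggregates each series by its median, which is one possible reading of the aggregation used above.
\begin{verbatim}
import numpy as np

def relative_rtt(rtt_lisplab, rtt_franceix):
    """rRTT_LL(d) = RTT_LL(d) - RTT_F(d), per destination d.

    Both arguments map a destination to an array of RTT samples."""
    return {d: np.median(rtt_lisplab[d]) - np.median(rtt_franceix[d])
            for d in rtt_lisplab}

def correlation_with_franceix(rtt_probe, rtt_franceix, destinations):
    """Pearson correlation between a probe's and FranceIX's RTTs."""
    a = np.array([np.median(rtt_probe[d]) for d in destinations])
    b = np.array([np.median(rtt_franceix[d]) for d in destinations])
    return np.corrcoef(a, b)[0, 1]
\end{verbatim}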
%-< SECTION >--------------------------------------------------------------------
\section{IPv4 Ping results from Dataset 2016}
\label{sec:pxtr_ping_v4_2016}
% \begin{itemize}[noitemsep,topsep=0pt]
% \item CDF of median RTT between different probes
% \item Smallest median RTT grouped by continent
% \item Correlation coefficient to FranceIX
% \item Relative median RTT clustered by continent for LISP-Lab and LIP6
% \item Reliability of each probe
% \item The periodicity check
% \end{itemize}
%
With the dataset collected from the IPv4 ping experiment, we assess the performance of LISP interworking with the legacy Internet in terms of round-trip time (RTT).
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/v4/CDF_avg(RTT)_median_4_20.eps}
\caption{CDF of median RTT between different probes (IPv4) from Dataset 2016}
\label{CDF_of_median_RTT_between_different_probes_v4_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
Fig.~\ref{CDF_of_median_RTT_between_different_probes_v4_2016} shows the CDF of the average RTT toward the selected IPv4 destinations for Dataset 2016.
%
Because the experimental targets are spread worldwide, the range of observed RTTs varies from a few milliseconds (ms) to nearly 500~ms. As an anchor, the FranceIX probe outperforms the other probes in most cases, hence showing the smallest RTT. Within the RTT range $[0, 200]$~ms, the latencies of the two LISP probes, LISP-Lab and LIP6, are respectively 25~ms and 60~ms higher than those of the three other non-LISP probes. Such an RTT degradation is caused by the path-length stretch introduced by the proxy technology (as mentioned in Sec.~\ref{sec:background_lisp}), since every packet exchanged between a LISP probe and the legacy Internet has to pass through a PxTR. Although both LISP probes suffer from the stretch, the LIP6 probe is a further 50~ms slower than the LISP-Lab probe. Given that both the LISP-Lab and LIP6 probes pass through the same PETR located in Lyon, the additional degradation of the LIP6 probe is due to the PITR selection for the traffic going from the legacy Internet back to the LISP network. The PITRs used by the LIP6 probe belong to the LISP Beta Network, which provides 6 PITRs in total. The PITRs announce the LISP Beta prefixes to their AS peers using BGP, so different destinations select different PITRs depending on where they are located, and the replies therefore follow different paths. For example, an Asian destination normally uses the PITR in Japan instead of selecting one in Europe. As listed in Tab.~\ref{PxTR_loc_2016}, the LISP Beta Network has not deployed a PITR in France, while the PITR of LISP-Lab is in the same country as its probe. Thus, the return path that we measured for the LIP6 probe is longer than for the LISP-Lab probe. As a result, the closer the destination is to the probes, the bigger the RTT difference between the two LISP probes. Within the RTT range $[200, 500]$~ms, all probes except LIP6 show basically the same RTT. This indicates that LISP-Lab performs better in the long-distance scenario, because the stretch delay from the LISP-Lab probe to its PxTR (in Lyon) can be ignored. Fig.~\ref{CDF_of_median_RTT_between_different_probes_v4_2016} confirms this: within the RTT range $[200, 500]$~ms, the difference between LISP-Lab and the non-LISP probes becomes significantly smaller than in the range $[0, 200]$~ms.
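For reference, the empirical CDF curves of Fig.~\ref{CDF_of_median_RTT_between_different_probes_v4_2016} can be derived directly from the per-destination RTT aggregates. The short Python sketch below illustrates the computation; the input layout (one aggregated RTT per destination for a given probe) is an assumption made for the example.
\begin{verbatim}
# Sketch: empirical CDF of per-destination RTTs for one probe.
import numpy as np

def empirical_cdf(rtts):
    """rtts: iterable of aggregated RTT values (ms), one per
    destination. Returns (sorted values, cumulative fractions)."""
    values = np.sort(np.asarray(list(rtts), dtype=float))
    cdf = np.arange(1, len(values) + 1) / len(values)
    return values, cdf

# Example: share of destinations reached in less than 200 ms.
# values, cdf = empirical_cdf(rtt_per_destination["LISP-Lab"])
# share_below_200 = float(np.mean(values < 200.0))
\end{verbatim}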
The aforementioned results and analysis are consistent with Fig.~\ref{CDF_RTT_avg_v4_2015} obtained from the 2015 dataset, which only covers the comparison involving the LISP-Lab probe. This indicates that the LISP PxTR performance is stable: the measurement results do not change with the destinations or over time.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/v4/Smallest_median_avg(RTT)_proporation.eps}
\caption{Smallest median RTT grouped by continent (IPv4) from Dataset 2016.}
\label{Smallest_median_avg(RTT)_proporation_v4_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
Even with the stretch brought by the PxTR, the LISP probes still outperform the non-LISP probes in some cases according to the measurement results. To identify these cases, the worldwide destinations are classified by continent. Fig.~\ref{Smallest_median_avg(RTT)_proporation_v4_2016} depicts the percentage of times that one probe has the smallest RTT compared to the other probes, grouped by the continent where the selected targets are located. The experimental destinations are located in 4 continents, i.e., Europe, North America, South America, and Asia. The \emph{ping} measurement is repeated 720 times, since the experiment lasts 15 days with an interval of 30 minutes. The RTTs of the FranceIX anchor and the mPlane probe are the smallest most of the time, especially for the European and South American destinations. Only for the Asian destinations does the LISP-Lab probe sometimes experience the smallest latency, namely for $8.82\%$ of the targets, while the LIP6 probe experiences the smallest latency in $0.98\%$ of the cases. Among the North American destinations, there is just one destination for which the RTT of the LISP-Lab probe is the smallest, while there is no destination for which LIP6 experiences the smallest latency. Fig.~\ref{CDF_of_median_RTT_between_different_probes_v4_2016} shows that for RTTs above 200~ms LISP-Lab performs like the other probes, so one can ask whether it actually has the best performance in some cases, and for which destinations. Fig.~\ref{Smallest_median_avg(RTT)_proporation_v4_2016} shows the geographic location of the destinations for which LISP-Lab actually is the best performing probe. In particular, the LISP-Lab probe has no additional delay compared to the other non-LISP probes for the intercontinental transmission toward the Asian destinations. This means that the negative effect of the LISP stretch delay is evident for communications with European and American destinations, but can be neglected for Asian intercontinental destinations. This phenomenon is similar to the result shown in~\cite{li2016performance}, where the LISP-Lab probe experiences the smallest latency in only $5.6\%$ of the cases and all the corresponding destinations are in Asia. This confirms again that the negative effect of the LISP PxTR can be ignored for the intercontinental transmission toward Asian destinations (when considering the European vantage point).
%-< TABLE >-----------------------------------------------------------------
%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%
\begin{table}[!tb]
\centering
\caption{Correlation coefficient to FranceIX (IPv4) from Dataset 2016}
\label{correlation_v4_2016}{
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c|c@{}}
\hline\hline
 & Gandi & mPlane & LISP-Lab & LIP6 & FranceIX \\
\hline
Coefficient & 0.9647 & 0.9707 & 0.9766 & 0.8547 & 1.0 \\
\hline\hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------
% As FranceIX is an enhanced probe with more measurement capacities presenting more stable performance,
Also for Dataset 2016, we evaluate the correlation in terms of RTT between FranceIX and the other probes.
% The purpose is to see if RTT measurements are correlated or totally independent.
The method is to compute the correlation coefficient between the RTT series of each probe toward all destinations and the corresponding series of FranceIX. The results are shown in Tab.~\ref{correlation_v4_2016}. The absolute value of the coefficient is 1 in the case of a perfect direct linear relationship, and 0 in the case of no correlation at all, i.e., independence. The correlation coefficient of LISP-Lab is 0.9766 ($> 0.8$), showing a very high correlation with FranceIX, even higher than the correlation coefficients of all the other probes. The correlation coefficient of LIP6 is 0.8547, which also indicates a high correlation with FranceIX. The fact that both LISP probes are highly correlated with FranceIX means that LISP, while certainly introducing some overhead, has quite stable performance that does not deviate from normal network operation. This result cuts both ways for LISP: good, because LISP shows stability; bad, because it means that LISP experiences the same performance variations and failures as the normal Internet.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_median_avg(RTT)_LISP-Lab-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_median_avg(RTT)_LIP6-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{IPv4 Relative median RTT clustered by different continents for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{Relative_median_avg(RTT)_v4_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
Another interesting point is to assess whether the two LISP probes have a performance as stable as FranceIX depending on the destinations. To this end, we leverage the same \emph{Relative RTT (rRTT)} metric (formula~\ref{rRTT_ll_2015}) defined in Sec.~\ref{sec:pxtr_ping_v4_2015}, computed for each destination as:
\begin{equation}
\label{rRTT_ll_2016}
rRTT_{LL}(d)=RTT_{LL}(d) - RTT_{F}(d)
\end{equation}
\begin{equation}
\label{rRTT_l6_2016}
rRTT_{L6}(d) = RTT_{L6}(d) - RTT_{F}(d)
\end{equation}
where $d$ is the destination and the subscripts $LL$, $L6$ and $F$ refer, respectively, to LISP-Lab, LIP6 and FranceIX. The results clustered by continent are shown in Fig.~\ref{Relative_median_avg(RTT)_v4_2016}, with the relative RTT of the LISP-Lab probe on the left and that of the LIP6 probe on the right.
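Computing these relative RTTs is a one-line operation per destination once the per-probe aggregates are available. The Python sketch below is a minimal illustration; it assumes missing measurements have already been filtered out and that the aggregated RTTs are stored in plain dictionaries, which is a hypothetical layout chosen for the example.
\begin{verbatim}
# Sketch: relative RTT of a LISP probe with respect to FranceIX,
# computed per destination from aggregated (e.g. median) RTTs.
def relative_rtt(rtt_probe, rtt_franceix):
    """Both arguments: dict destination -> aggregated RTT (ms).
    Positive results mean the probe is slower than FranceIX."""
    common = rtt_probe.keys() & rtt_franceix.keys()
    return {d: rtt_probe[d] - rtt_franceix[d] for d in common}

# rrtt_ll = relative_rtt(median_rtt["LISP-Lab"], median_rtt["FranceIX"])
# rrtt_l6 = relative_rtt(median_rtt["LIP6"], median_rtt["FranceIX"])
\end{verbatim}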
A positive relative RTT indicates that FranceIX is faster, while negative values indicate that LISP-Lab or LIP6 is faster. In the left part of Fig.~\ref{Relative_median_avg(RTT)_v4_2016}, for the European and American targets, the values lie between 10~ms and 20~ms for the largest share of the cases ($31.37\%$), showing that LISP-Lab is a little slower than FranceIX but with a stable behavior. On the contrary, for some Asian destinations, LISP-Lab is significantly faster than FranceIX (in $20.6\%$ of the cases) or significantly slower, and the largest difference reaches 150~ms. This also shows that for the European and American targets the LISP-Lab probe is stable compared to FranceIX, since the difference is mainly caused by the transmission delay between Paris (the location of the probe) and Lyon (the location of the PxTR). When considering the Asian destinations, however, LISP-Lab does not show a very stable behavior and the difference can be very large, since the path diversity for long-distance intercontinental transmission is higher, with every path having very different performance.

The right part of Fig.~\ref{Relative_median_avg(RTT)_v4_2016} shows that for the LIP6 probe the difference is almost always positive. For the European destinations, the average relative RTT is around 130~ms, much higher than for the other 3 continents: all European targets show that the LIP6 probe is significantly slower than the FranceIX anchor. For the American destinations, most relative RTTs decrease to around 50~ms, but some remain around 150~ms. For the Asian targets, the relative RTTs vary widely: some stay around 150~ms, some drop below 50~ms, and 5 destinations even show negative relative RTTs. The reasons why the relative RTTs of the LIP6 probe vary so much are similar to those for LISP-Lab, but not identical. What the two probes have in common is that the longest transmissions in our experiment are those toward the destinations located in Asia; since the stretch can be ignored for intercontinental transmission, neither LISP probe is always slower than FranceIX toward Asian targets. The performance difference between the LISP-Lab and LIP6 probes is instead caused by the location of their PxTRs. The PxTR of LISP-Lab is not close to its probe, but at least they are in the same country, whereas the LIP6 probe uses the same PETR as the LISP-Lab probe but its PITR is not in France, and is in fact quite far away. As a result, for the shorter transmissions, i.e., toward the European destinations, the LIP6 probe shows much higher RTTs than the non-LISP probes and even higher than the LISP-Lab probe. Thus, the location of the PxTR is very important. A PxTR close to either the sources or the destinations clearly decreases the stretch; at the very least, a probe whose PxTR is in the same country performs better than one whose PxTR is merely in the same continent.
%-< TABLE >-----------------------------------------------------------------
%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%
\begin{table}[!tb]
\centering
\caption{Reliability of each probe (IPv4) from Dataset 2016}
\label{reliability_v4_2016}{
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c|c@{}}
\hline\hline
 & Gandi & mPlane & LISP-Lab & LIP6 & FranceIX \\
\hline
Reliability (\%) & 99.66 & 99.65 & 99.43 & 77.78 & 99.62 \\
\hline\hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------
Finally, we also evaluate the \emph{Reliability} of each probe, i.e., how often a ping measurement actually obtains a response.
We measure reliability, in our experiment, simply as the percentage of replies over the total number of requests. If every ping measurement is successful, i.e., there is a response with an RTT value for the probe at every experiment round, its reliability is $100\%$. As shown in Tab.~\ref{reliability_v4_2016}, except for the LIP6 probe with $77.78\%$, all the other probes are above $99\%$. The lower reliability of the LIP6 probe indicates that its ping measurements sometimes fail. Since its reliability is much lower than that of the others, and to make sure that the LIP6 probe itself works well and that there is no congestion or misconfiguration on the probe or on any of its connected routers, we conduct another experiment in which the LIP6 probe uses a normal public IPv4 address to ping the same 500 destinations. In this case, the LIP6 probe is not LISP-speaking. The results show that the reliability of the LIP6 probe is then $98.91\%$, very close to that of the other probes. Thus, we can exclude a problem on the LIP6 probe itself. As the LIP6 probe uses the same PETR as the LISP-Lab probe, while the reliability of LISP-Lab is very high, the difference must be caused by the PITRs: the LISP-Lab probe uses the PITR of the LISP-Lab platform, but the LIP6 probe leverages one of the LISP Beta Network. On the one hand, the LIP6 probe has a longer transmission path, which increases the risk of losing packets and thus leads to more losses. On the other hand, we can conclude that the LISP-Lab PxTR is more reliable than those of the LISP Beta Network.

Since the 2015 experiment lasts only 6 hours and shows no periodicity, this experiment campaign is extended to a span of 2 weeks to assess whether the RTT measurements exhibit any periodicity. Two methods are used for this check: the Fast Fourier Transform (FFT) and the auto-correlation. The analysis shows that none of the RTT series, from any probe to any destination, exhibits periodicity, meaning that the traffic does not fluctuate periodically with any particular interval.
\section{IPv6 Ping results}
\label{sec:pxtr_ping_v6}
% \begin{itemize}[noitemsep,topsep=0pt]
% \item CDF of median RTT between different probes
% \item Smallest median RTT grouped by continent
% \item Correlation coefficient to FranceIX
% \item Relative median RTT clustered by continent for LISP-Lab and LIP6
% \item Reliability of each probe
% \item The periodicity check
% \end{itemize}
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{Pics/v6/CDF_avg(RTT)_median_4_20.eps}
\caption{CDF of median RTT between different probes (IPv6) from Dataset 2016}
\label{CDF_of_median_RTT_between_different_probes_v6_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
In this section, the LISP interworking performance over IPv6 is evaluated with the same metrics as those used in Sec.~\ref{sec:pxtr_ping_v4_2016}. The CDF of the average RTTs toward the selected IPv6 destinations is shown in Fig.~\ref{CDF_of_median_RTT_between_different_probes_v6_2016}; it is generally similar to Fig.~\ref{CDF_of_median_RTT_between_different_probes_v4_2016} for IPv4. The FranceIX anchor still shows the best performance, with the smallest delay for almost all destinations. The RTT of the LISP-Lab probe is always higher than that of the non-LISP probes within the RTT range $[0, 30]$~ms, with a difference of around 7~ms. The RTTs are almost the same when the values exceed 30~ms.
Hence, the IPv6 performance is similar to the IPv4 one: the stretch introduced by the PxTR is significant when the RTT values are small, but can be ignored when they become large. However, in the experiment with IPv6 targets, the traffic produced by the LISP-Lab probe is natively forwarded to the destinations instead of first going to the PETR in Lyon, so the RTT difference with respect to the other non-LISP probes becomes smaller than for IPv4. Some extra latency remains, however, caused by the returning traffic, which still needs to pass through a PITR. Since this latency is smaller, it becomes negligible sooner for IPv6: the stretch can be ignored for RTT values above just 30~ms for IPv6, whereas it needs to exceed 200~ms for IPv4. The LIP6 probe, on the other hand, is always the slowest probe; the difference reaches 160~ms with respect to the non-LISP probes and 150~ms with respect to the LISP-Lab probe. Such a big difference is caused not only by the fact that the traffic has to pass through the PxTR in both directions, but also by the route between the xTR of the LIP6 probe and the PETR in Lyon, which has no tunnel, so the routing path used is longer than in the case with an MPLS tunnel. Similarly to IPv4, when the RTT values exceed 250~ms, the difference decreases to 40~ms in most cases.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_median_avg(RTT)_LISP-Lab-FranceIX.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_median_avg(RTT)_LIP6-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{IPv6 Relative median RTT clustered by different continents for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{Relative_median_avg(RTT)_v6_2016}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
To further explore why the two LISP probes are slower, and especially to understand why the LIP6 probe has an extremely large latency compared to the others, we leverage the Relative RTT between each LISP probe and the FranceIX anchor, grouped by the four continents, as introduced in Sec.~\ref{sec:pxtr_ping_v4_2016}. In the left part of Fig.~\ref{Relative_median_avg(RTT)_v6_2016}, which shows the relative RTT between LISP-Lab and FranceIX, all the values are positive, indicating that the LISP-Lab probe is slower than the FranceIX anchor regardless of the destination. For the European and North American targets, the relative RTTs are mostly between 5~ms and 17~ms, but the average relative RTT decreases to 7~ms for the 25 Asian destinations. This confirms that for nearby transmissions the delay introduced by the PxTR is significant, while it affects the intercontinental transmissions toward the Asian destinations much less. Moreover, the absolute relative RTT values for IPv6 are smaller than those for IPv4, because for IPv6 only the returning traffic passes through a PITR, which reduces the latency. Thus, native forwarding is faster than using the PETR, but not by much. From Fig.~\ref{Relative_median_avg(RTT)_v6_2016} we can also see that 32 destinations are located in Europe and just 7 in North America.
This explains why the CDF curve of LISP-Lab rises sharply in the small-RTT range and merges with the non-LISP curves so quickly: about half of the destinations are not far from the probes. Thus, a high percentage (around $50\%$) of the measurements consists of small RTT values, for which LISP-Lab shows a consistent extra delay compared to the other non-LISP probes. The merged part is mainly produced by the Asian destinations, for which the RTT values are large and the delay introduced by the PITR of LISP-Lab alone is less significant.

The relative RTT for the LIP6 probe is shown on the right-hand side of Fig.~\ref{Relative_median_avg(RTT)_v6_2016}. The relative RTTs are all positive and much higher, indicating that the LIP6 probe is always considerably slower than the FranceIX anchor, whatever the target. For the European destinations in particular, the values are around 170~ms, while toward the North American and Asian targets the relative RTTs are clearly smaller. As mentioned in Sec.~\ref{sec:pxtr_meth_2016}, the outgoing packets from the LIP6 probe still need to go to the PETR first, and the path between its xTR and the PETR has no MPLS VPN tunnel. Moreover, the main reason why the relative RTT values are higher for IPv6 than for IPv4 is that the LIP6 probe uses the PITRs of the LISP Beta Network. Unlike for IPv4~\cite{bgpv4}, there are only two ASes hosting IPv6 PITRs and both of them are located in the US~\cite{bgpv6}. As a result, regardless of the destination, all returning packets first need to pass through the PITRs in the US and are then forwarded back to the LIP6 probe located in Paris. Consequently, the relative RTTs toward the North American destinations are much smaller than those toward the European destinations. Although the traffic returning from the Asian destinations also passes through the PITRs in the US, the distance between source and destination is already large, so this detour does not affect the relative RTTs for the Asian targets much. Furthermore, in $84.61\%$ of the cases the FranceIX anchor has the smallest RTT in the IPv6 experiment. The LISP-Lab and LIP6 probes are not the fastest toward any destination, not even the Asian targets, which shows that the LISP performance over IPv6 is slightly worse than over IPv4.

Since FranceIX shows the smallest RTTs in most situations and, being an anchor, its greater measurement capacity gives it higher stability, we again use the correlation between the RTTs of the other 4 probes and those of FranceIX to see whether the RTT measurements of each probe are as stable as those of FranceIX. As shown in Tab.~\ref{correlation_v6_2016}, the correlation coefficient of LISP-Lab is 0.967, almost one, indicating that although LISP over IPv6 has a higher latency than FranceIX due to the introduction of the PxTR, its performance is still stable. However, the correlation coefficient of the LIP6 probe is just 0.3734: higher than 0.2 but much lower than 0.8, showing that the LIP6 probe is not totally independent of FranceIX, but only weakly correlated with it. In fact, the IPv6 RTT series of LIP6 fluctuate a lot over time during the experiment and are not stable.
%%-< FIGURE >--------------------------------------------------------------------
%\begin{figure}[!t]
% \centering
% \includegraphics[width=0.5\textwidth]{Pics/v6/Smallest_median_avg(RTT)_proporation.eps}
% \caption{Smallest median RTT grouped by continent (IPv6)}
% \label{Smallest_median_avg(RTT)_proporation_v6}
%\end{figure}
%%-< END FIGURE >--------------------------------------------------------------------
%
%%-< TABLE >-----------------------------------------------------------------
%-< TABLE >-----------------------------------------------------------------
%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%
\begin{table}[!tb]
\centering
\caption{Correlation coefficient to FranceIX (IPv6) from Dataset 2016}
\label{correlation_v6_2016}{
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c|c@{}}
\hline\hline
 & Gandi & mPlane & LISP-Lab & LIP6 & FranceIX \\
\hline
Coefficient & 0.9565 & 0.968 & 0.967 & 0.3734 & 1.0 \\
\hline\hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------
%-< TABLE >-----------------------------------------------------------------
%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%
\begin{table}[!tb]
\centering
\caption{Reliability of each probe (IPv6) from Dataset 2016}
\label{reliability_v6_2016}{
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c|c@{}}
\hline\hline
 & Gandi & mPlane & LISP-Lab & LIP6 & FranceIX \\
\hline
Reliability (\%) & 98.43 & 99.97 & 99.98 & 82.24 & 99.99 \\
\hline\hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------
The reliability of every probe for IPv6 is almost the same as for IPv4, as shown in Tab.~\ref{reliability_v6_2016}, except for the LIP6 probe. The higher reliability of the LISP-Lab probe compared to the LIP6 probe confirms again that, even for IPv6, the PxTR of LISP-Lab is more reliable than those of the LISP Beta Network, although the latter has 6 worldwide PxTRs in total, of which only 2 can be used for IPv6. The reliability of the LISP-Lab probe for IPv6 is slightly higher than for IPv4, showing that using the PxTR may decrease the reliability, but the effect is very small. The reliability of the LIP6 probe for IPv6 is also higher than for IPv4, indicating that the IPv6 performance of the LISP Beta Network PxTRs is better than the IPv4 one, mainly thanks to the use of the PxTRs in the US. From this comparison we can conclude that the two PxTRs in the US are more reliable than the other 4 PxTRs of the LISP Beta Network.

The periodicity check is also conducted for IPv6, but again no probe shows periodicity in its RTT measurements toward any destination. This indicates that the latency of the traffic has no relationship with the specific time of the experiment, i.e., the RTT measurements generally yield the same results regardless of the experiment time and duration. Thus, the results of these experiments are not occasional phenomena with special outcomes; they are reproducible.
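The periodicity check mentioned above can be implemented compactly. The sketch below flags an RTT series as periodic when a single frequency dominates its spectrum or when the autocorrelation shows a strong peak at a non-zero lag; the 0.5 threshold and the mean-removal step are illustrative assumptions rather than the exact procedure applied to the datasets.
\begin{verbatim}
# Sketch: naive periodicity check of one RTT series using the FFT
# and the autocorrelation. Thresholds are arbitrary illustrative
# choices; a real analysis would also handle missing samples.
import numpy as np

def looks_periodic(rtt_series, threshold=0.5):
    x = np.asarray(rtt_series, dtype=float)
    if len(x) < 3 or not (x - x.mean()).any():
        return False
    x = x - x.mean()                       # remove the DC component
    # FFT check: does one frequency dominate the spectrum?
    spectrum = np.abs(np.fft.rfft(x))[1:]  # drop the zero frequency
    fft_peak = spectrum.max() / spectrum.sum()
    # Autocorrelation check: strong peak at some non-zero lag?
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                        # lag 0 normalized to 1
    ac_peak = ac[1:].max()
    return fft_peak > threshold or ac_peak > threshold
\end{verbatim}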
%--------------------------------------------  <Subsection>  --------------------------------------------
\section{Traceroute-related results}
\label{sec:pxtr_traceroute}
% \begin{itemize}[noitemsep,topsep=0pt]
% \item Distribution of IPv4 relative hops number for LISP-Lab and LIP6
% \item IPv4 relative hops number clustered by different continents for LISP-Lab and LIP6
% \item Distribution of IPv6 relative hops number for LISP-Lab and LIP6
% \item IPv6 relative hops number clustered by different continents for LISP-Lab and LIP6
% \end{itemize}
%%--------------------------------------------  <Subsection>  --------------------------------------------
%\subsection{Traceroute-related results}
%\label{sec:results_traceroute}
In this section, we use the number of hops obtained via traceroute from the sources (i.e., the probes) to the destinations as the metric to further evaluate the LISP performance and to further investigate the reasons behind the results of the ping-related experiments. Since traceroute can only reveal the outgoing path, i.e., the routing path from the probes to the destinations, we focus on the use of the PETR and on the existence of a VPN between the xTR and the PETR. When the IPv4 targets are in Europe and America, there are very few cases in which the hop count of LISP-Lab or LIP6 is the smallest. More precisely, LISP-Lab has the shortest path (in terms of hop count) toward the Asian destinations in $7.65\%$ of the cases, and LIP6 in $6.63\%$ of the cases. This result is consistent with the percentages of smallest RTTs shown in Fig.~\ref{Smallest_median_avg(RTT)_proporation_v4_2016}. The same consistency between smallest RTT and shortest path also holds for the IPv6 experiment, where there is no destination at all for which the hop count or the RTT of the LISP probes is the smallest (for both LISP-Lab and LIP6). For IPv6, it is always FranceIX or Gandi that has the shortest path to the targets.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_hops_num_LISP-Lab-FranceIX_hist_changed_60.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_hops_num_LIP6-FranceIX_hist_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{Distribution of IPv4 Relative Hops Number for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{Distribution_v4_relative_hops_num_proporation_LISP-Lab_LIP6}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
As FranceIX is always the most stable probe and has the shortest path in most cases, we define the \emph{Relative Hops Number (rHN)}, clustered by continent, to examine the difference between the hop counts of the LISP probes and those of FranceIX. It is defined as:
% $Hops Num_{LISP-Lab}$ - $Hops Num_{FranceIX}$ and $Hops Num_{LIP6}$ - $Hops Num_{FranceIX}$.
\begin{equation}
\label{rHN_ll_2016}
rHN_{LL}(d)=HN_{LL}(d) - HN_{F}(d)
\end{equation}
\begin{equation}
\label{rHN_l6_2016}
rHN_{L6}(d)=HN_{L6}(d) - HN_{F}(d)
\end{equation}
where $d$ is the destination and the subscripts $LL$, $L6$ and $F$ refer, respectively, to LISP-Lab, LIP6 and FranceIX. A positive value indicates that the hop count of FranceIX is smaller than that of the LISP probe.
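As a companion to the relative RTT computation, the sketch below derives the relative hop counts and their empirical distribution, assuming the per-destination hop counts have already been extracted from the traceroute runs (the dictionary layout is again an assumption made for the example).
\begin{verbatim}
# Sketch: relative hop count with respect to FranceIX and its
# empirical distribution over destinations.
from collections import Counter

def relative_hops(hops_probe, hops_franceix):
    """Both arguments: dict destination -> hop count."""
    common = hops_probe.keys() & hops_franceix.keys()
    return {d: hops_probe[d] - hops_franceix[d] for d in common}

def rhn_distribution(rhn):
    """Fraction of destinations for each relative hop count value."""
    counts = Counter(rhn.values())
    total = sum(counts.values())
    return {value: count / total
            for value, count in sorted(counts.items())}
\end{verbatim}
The second helper corresponds to the histograms discussed next (Fig.~\ref{Distribution_v4_relative_hops_num_proporation_LISP-Lab_LIP6}).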
Fig.~\ref{Distribution_v4_relative_hops_num_proporation_LISP-Lab_LIP6} shows the distribution of the IPv4 relative hops number, for LISP-Lab on the left and for LIP6 on the right, i.e., the probability of each relative hops number value. For the LISP-Lab probe, the most common relative hops number is 12, with a percentage of $21.77\%$; the relative hop count hence remains limited in most cases. From the ping-related experiments in Sec.~\ref{sec:pxtr_ping_v4_2016} and Sec.~\ref{sec:pxtr_ping_v6} we know that using the PxTR introduces some overhead, so it is reasonable that the Relative Hops Number is generally positive. Very large relative hops numbers, namely 19 and 25, appear only once each, so we regard them as outliers. Similarly to the LISP-Lab case, for the LIP6 probe the most frequent relative hops number is 13, in $19.53\%$ of the cases.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_hops_num_LISP-Lab-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v4/Relative_hops_num_LIP6-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{IPv4 Relative hops number clustered by different continents for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{v4_Relative_hops_num}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
In order to understand where the most common relative hops numbers come from, we plot in Fig.~\ref{v4_Relative_hops_num} the relative hops number for each destination, clustered by continent, to complement Fig.~\ref{Distribution_v4_relative_hops_num_proporation_LISP-Lab_LIP6}. The left part refers to the LISP-Lab probe; it indicates that the relative hops number is almost the same for the European and North American destinations, and that these destinations account for the high-probability relative hops numbers in Fig.~\ref{Distribution_v4_relative_hops_num_proporation_LISP-Lab_LIP6}. The small relative hops numbers are mostly produced by the Asian targets, whereas the outliers all come from European destinations. The right part of Fig.~\ref{v4_Relative_hops_num} refers to the LIP6 probe and largely shows the same pattern as for the LISP-Lab probe, except that the latter rarely has a relative hops number higher than 16, while LIP6 does more often.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_hops_num_LISP-Lab-FranceIX_hist_changed_60.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_hops_num_LIP6-FranceIX_hist_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{Distribution of IPv6 Relative Hops Number for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{Distribution_v6_relative_hops_num_proporation_LISP-Lab_LIP6}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
Using the same evaluation metrics as for IPv4, the distribution of the Relative Hops Number and the Relative Hops Number clustered by continent for IPv6 are shown in Fig.~\ref{Distribution_v6_relative_hops_num_proporation_LISP-Lab_LIP6} and Fig.~\ref{v6_Relative_hops_num}, respectively.
The left part of Fig.~\ref{Distribution_v6_relative_hops_num_proporation_LISP-Lab_LIP6} shows that the most common relative hops number between the FranceIX anchor and the LISP-Lab probe is 7, in $56.72\%$ of the cases, i.e., more than half and much higher than the probability of any other relative hops number. The range of relative hops numbers is smaller than for IPv4, and the most common relative hops number is also smaller, since the traffic does not need to pass through the PETR but is natively forwarded, so the overhead is reduced. The left part of Fig.~\ref{v6_Relative_hops_num} reveals that a relative hops number of 7 is produced by the majority of targets, regardless of where the destinations are located, unlike the IPv4 case, where the relative hops numbers for the European and American destinations are larger than those for Asia. As the relative hops number is only 3 when passing through the PETR, but 7 when natively forwarding, it is likely that the PxTR has shorter paths to most destinations, which decreases the hop count for the LISP-Lab probe. The relative RTTs being higher for the European and American destinations than for Asia, as shown in the left part of Fig.~\ref{Relative_median_avg(RTT)_v6_2016}, is thus mainly caused by the return path, since the outgoing traffic is natively forwarded and shows no difference across continents.

For IPv6, the LIP6 probe has extremely high relative hops numbers, ranging from 12 to 24. The most common relative hops number is 23, in $19.57\%$ of the cases, followed by 13, with a percentage of $17.39\%$. The higher relative hops number is caused by the absence of the VPN between the xTR and the PETR, compared to the IPv4 case. The right part of Fig.~\ref{v6_Relative_hops_num} shows that the relative hops number of 23 comes from the European destinations, while 13 comes from the Asian targets. This shape matches the relative RTTs for IPv6 shown in the right-hand part of Fig.~\ref{Relative_median_avg(RTT)_v6_2016}, indicating that the high relative RTTs are caused by the high relative hops numbers.

Concerning the RTT performance of the LISP-Lab probe toward Asian destinations, which is sometimes even better than that of FranceIX, we analyze the AS-path (Autonomous System path) that the packets traverse from FranceIX and LISP-Lab to the Asian destinations. Since all the IPv4 packets from the LISP-Lab probe are first sent to the PETR in Lyon, the AS-path discussed in the following is actually the one from the PETR to the Asian destinations. By comparing the AS-paths, we find that FranceIX and LISP-Lab very often take different paths, with only the last 1 or 2 hops in common. Going further, by looking at the geographic location of each \emph{traceroute} hop, it can be observed that FranceIX and LISP-Lab even take paths in different directions in most of the cases. In particular, there are 11 destinations out of 117 in total (i.e., $9.4\%$) for which the packets sent by LISP-Lab pass through the US, while the packets sent by FranceIX pass through Eastern Europe. There are 68 destinations (i.e., $58.1\%$) showing exactly the opposite situation, i.e., packets sent by FranceIX pass through the US, while packets sent by LISP-Lab pass through Eastern Europe. This last case is the one in which LISP-Lab has the smaller RTT (compared to FranceIX) almost all of the time, with only a few exceptions.
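The AS-path comparison described above essentially checks how many trailing ASes two paths share. A minimal helper is sketched below; mapping each traceroute hop to its origin AS (and to a geographic location) is a separate, non-trivial step that is not shown and is assumed to have been done beforehand.
\begin{verbatim}
# Sketch: length of the common AS-level suffix of two paths toward
# the same destination, e.g. ["AS2200", "AS1299", "AS4134"].
def common_suffix_length(path_a, path_b):
    n = 0
    for a, b in zip(reversed(path_a), reversed(path_b)):
        if a != b:
            break
        n += 1
    return n

# Paths that share only the last 1 or 2 ASes, as observed for most
# Asian destinations, yield a result of 1 or 2.
\end{verbatim}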
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_hops_num_LISP-Lab-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.49\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{Pics/v6/Relative_hops_num_LIP6-FranceIX_changed_60.eps}
\end{center}
\end{minipage}
\vspace{-0.5mm}
\caption{IPv6 Relative hops number clustered by different continents for LISP-Lab (left) and LIP6 (right) from Dataset 2016}
\label{v6_Relative_hops_num}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
%-< SECTION >--------------------------------------------------------------------
%\section{LISP-related Discussion and Conclusion}
\section{Summary}
\label{sec:pxtr_conclusion}
% \begin{itemize}[noitemsep,topsep=0pt]
% \item PxTR indeed introduces the negative effects for the near destinations, but can be ignored for the intercontinental long-distance transmission.
% \item Position of PxTR is very important.
% \item LISP is generally stable, except for the IPv6 performance of LISP Beta Network.
% \item LISP-Lab PxTR is more reliable than the ones of LISP Beta Network.
% \item Encapsulating into LISP packets has only 4 hops more compared to the packets being natively forward by xTR.
% \end{itemize}
%
In the last years, the Locator/ID Separation Protocol (LISP) has gained attention as a promising solution to several scaling issues the Internet is facing. Various interoperable implementations and deployments exist; however, to promote the growth of this relatively novel technology and to improve its performance, large scale measurements in real deployments are required. In this chapter, we conduct both a six-hour and a two-week real network measurement campaign involving the LISP Beta Network, the LISP-Lab platform and RIPE Atlas to provide a comprehensive performance evaluation of LISP interworking. Concretely, we provide a first thorough look at the performance of the LISP PxTR. Since the results of the 2016 experiment are consistent with those of 2015 and are more comprehensive, we conclude this chapter by summarizing the observations from Dataset 2016.

In our large scale measurement campaign, we consider 5 probes as sources and 500 IPv4 and 122 IPv6 addresses as destinations, conducting ping and traceroute experiments. We find that the PxTR indeed introduces a negative effect for the destinations located in Europe and America, but that this negative impact can be ignored for the intercontinental long-distance transmission toward Asian destinations. The results also show that the position of the PxTR is very important: a PxTR close to either the sources or the destinations can decrease the latency considerably. Generally speaking, LISP is stable compared with the reference anchor FranceIX, except for the IPv6 performance of the LISP Beta Network. Further, the LISP-Lab PxTR is more reliable than those of the LISP Beta Network, although the latter has 6 worldwide PxTRs used for IPv4 and 2 located in the US used for IPv6, whereas LISP-Lab has only 1 PxTR for both IPv4 and IPv6. Compared to going through the PxTR, native forwarding without LISP decreases the latency, but not by much.
The traceroute experiment shows that introducing a PxTR naturally adds hops; however, if the PETR is well configured, so that it always has peers toward the destinations, packets take only 4 more hops than packets natively forwarded by the xTR without LISP encapsulation.
\documentclass[paper=a4, fontsize=11pt]{scrartcl} \usepackage[bottom=1.22in, left=1.22in, right=1.22in, top=1.22in]{geometry} \usepackage{layouts} \usepackage[usenames,dvipsnames,x11names]{xcolor} \usepackage[T1]{fontenc} \usepackage{fourier} \usepackage[english]{babel} \usepackage{amsmath,amsfonts,amsthm} \usepackage{sectsty} \allsectionsfont{\centering \normalfont\scshape} \usepackage{tikz} \usepackage{pgfplots} \usetikzlibrary{plotmarks} \usepackage{booktabs} \usepackage{longtable} \usepackage{tabularx} \usepackage{ragged2e} \newcolumntype{Y}{>{\RaggedRight\arraybackslash}X} \usepackage{paralist} \usepackage{acronym} \usepackage[inline]{enumitem} \usepackage{fancyhdr} \usepackage{graphicx} \usepackage{lastpage} \usepackage{listings} \usepackage{multirow} \pagestyle{fancyplain} \fancyhead{} \fancyfoot[L]{} \fancyfoot[C]{} \fancyfoot[C]{\thepage~of~\pageref{LastPage}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} \setlength{\headheight}{13.6pt} \newcommand{\horrule}[1]{\rule{\linewidth}{#1}} \usepackage{float} \usepackage{pgfplotstable} \usepgfplotslibrary{fillbetween} \pgfplotsset{compat=1.5} \pgfplotsset{ every axis/.append style={ scale only axis, width=0.40\textwidth,height=0.3\textwidth, }, /tikz/every picture/.append style={ trim axis left, trim axis right, baseline } } % http://tex.stackexchange.com/questions/67895/is-there-an-easy-way-of-using-line-thickness-as-error-indicator-in-a-plot % Takes six arguments: data table name, x column, y column, error column, % color and error bar opacity. % --- % Creates invisible plots for the upper and lower boundaries of the error, % and names them. Then uses fill between to fill between the named upper and % lower error boundaries. All these plots are forgotten so that they are not % included in the legend. Finally, plots the y column above the error band. \newcommand{\errorband}[6]{ \pgfplotstableread{#1}\datatable \addplot [name path=pluserror,draw=none,no markers,forget plot] table [x={#2},y expr=\thisrow{#3}+\thisrow{#4}] {\datatable}; \addplot [name path=minuserror,draw=none,no markers,forget plot] table [x={#2},y expr=\thisrow{#3}-\thisrow{#4}] {\datatable}; \addplot [forget plot,fill=#5,opacity=#6] fill between[on layer={},of=pluserror and minuserror]; \addplot [#5,thick,no markers] table [x={#2},y={#3}] {\datatable}; } \title{ \normalfont \normalsize \textsc{Norwegian University of Science and Technology\\IT3708 -- Subsymbolic Methods in AI} \horrule{0.5pt} \\[0.4cm] \huge Project 3:\\ Evolving Neural Networks for a Flatland Agent\\ \horrule{2pt} \\[0.5cm] } \author{Per Magnus Veierland\\[email protected]} \date{\normalsize\today} \newacro{ANN}{Artificial Neural Network} \newacro{EANN}{Evolutionary Artificial Neural Network} \newacro{EA}{Evolutionary Algorithm} \begin{document} \fancyfoot[C]{} \maketitle \newpage \fancyfoot[C]{\thepage~of~\pageref{LastPage}} % Page numbering for right footer \setcounter{page}{1} \section{\ac{EA} parameters} Table~\ref{table:ea_parameters} shows the main \ac{EA} parameters which were used for all evaluations described in this document. All parameters except the adult selection mechanism and crossover has been experimented with. By employing elitism with full generational replacement, it can be ensured that the best solutions are not lost while at the same time being able to tune some of the selection pressure in the system. Using an elitism count of 5~individuals~(5\% of the population size) was found to be helpful in helping guide exploitation, while allowing for exploration. 
Using fitness-proportionate or rank-based selection was found to limit the population diversity greatly, and although sigma selection worked, tournament selection was found to yield the best results and allows for more selection pressure tuning. A tournament group size of 10 and a random selection probability of 0.1 were found to adequately balance exploitation and exploration. A single uniformly random crossover point was used for all experiments. The mutation rate was not tuned much, and values in the range of $0.005$ to $0.01 \frac{\text{mutations}}{\text{bit}}$ provided enough exploration for successful results. Increasing the population size did not prove notably beneficial, and a size of 100 was found to be sufficient to sustain the necessary diversity without too much cost. A good solution is usually found comfortably within an evolutionary run of 1000~generations, which takes about 30~seconds using a single Javascript webworker in a Firefox browser.

\begin{table}
{\scriptsize
\centering
\begin{tabular}{ll}
\toprule
Parameter & Value \\
\midrule
Population size & 100 \\
Generation count & 1000 \\
Adult selection & Full replacement \\
Elitism count & 5 \\
Parent selection & Tournament \\
Tournament group size & 10 \\
Tournament random probability & 0.1 \\
Mutation rate & $0.01 \frac{\text{mutations}}{\text{bit}}$ \\
Crossover points & 1 \\
\bottomrule
\end{tabular}
\caption{Main \ac{EA} parameters}
\label{table:ea_parameters}
}
\end{table}

\section{Fitness Function}

The fitness function is input by the user as a text string and compiled to a function by the \texttt{math.js} library. When evaluating the function, the values for the number of \begin{enumerate*}[label={\alph*)}] \item food eaten \item total food \item poison eaten \item total poison \end{enumerate*} are exposed in its scope, such that various fitness strategies can easily be experimented with.

It was found that a simple fitness function (Equation~\ref{eq:fitness_function}), which subtracts the amount of poison eaten from the amount of food eaten, summed over the scenarios the individual is exposed to, works well in this context and is able to produce objectively fit individuals. A constant $k_\mathit{pe}$ is used to tune the severity of eating poison. If eating poison is punished too harshly, exploration will be deterred and no constructive evolution will take place. If it is too low, evolution will only value food eaten, no matter how much poison is also eaten. The consequences of eating poison are not described in the assignment, but it is assumed that it should be avoided.

\begin{equation}
\label{eq:fitness_function}
\textsc{Fitness}(i) = \sum_{s\:\in\:\mathit{scenarios}} \Big(\: \textsc{FoodEaten}(i, s) - k_{\mathit{pe}} \cdot \textsc{PoisonEaten}(i, s)\Big)
\end{equation}

A constant of $k_{\mathit{pe}} = 2$ was shown to work well in trials. To make comparisons between trials easier, all fitness values are normalized by the number of scenarios and the number of time steps. Given a $10 \times 10$ grid, $\frac{1}{3}$ food coverage, and 60 time steps, the normalized upper fitness boundary with this fitness function is $0.55$.

\section{\ac{ANN} implementation}

For analysis, each hidden node can be treated as an \textsc{AND}-gate which matches against a conjunction of the binary inputs. With 6~binary inputs, there is a total of $2^6 = 64$ possible input cases.
With one \textsc{AND}-gate to recognize each case, a maximum of 64~hidden nodes would be needed to match every input case, and each output node could be viewed as an \textsc{OR}-gate which matches a disjunction of the activated hidden nodes. Attempting to match every distinct input case is described in \textit{Intelligence Emerging} as an \textit{extensional} strategy, and although it can represent all possible mappings, it requires a large representation and makes it hard to discover \textit{intensional}, or \textit{general}, behavioral mappings. Based on this analysis, a better intensional approach was designed.

The genotype follows a \textit{fixed-length, direct} encoding. Each weight is represented using 1~bit describing a value of $-1$ or $1$, together with one bit per weight which turns the weight on or off. This switch bit allows both \textit{neutral complexification} and intensional mappings to be represented. As both agent inputs and outputs are binary, both the hidden and output neurons use the \textit{Heaviside} activation function, which activates the neuron whenever the binary inputs are matched. Given that there are 6~binary inputs, the hidden layer bias has a range of $-6$ to $+6$, using a scaled gray code representation of $\lceil \log_2 (6 + 6 + 1) \rceil = 4$~bits. Through trials, it was found that 6~hidden nodes were able to represent successful agents, so the output layer bias also has a range of $-6$ to $+6$, represented as a scaled gray code using 4~bits. Gray coding is used to improve the correlation between genotypes and phenotypes and to make the effect of mutations more gradual. Scaled integers are used so that values outside the effective ranges are not encountered. If a single output neuron is activated, the corresponding action is chosen; otherwise no action is made by the agent.

The agent input describes the contents of the forward, left, and right locations relative to the agent. As each location can either be empty, contain food, or contain poison, there are 3 possible configurations for each location. Since there are three locations described by the input, there are a total of $3^3 = 27$ possible input combinations. For each of these 27 cases, an agent must decide on one of three responses (discounting the possibility of doing nothing), which yields a total of $3^{3^3} \approx 7.6 \cdot 10^{12}$ possible functionally distinct agents for the Flatland environment. With 6~inputs, 6~hidden nodes, and 3~output nodes, there are 54~weight values (108~bits) and 9~bias values (36~bits), for a total genome length of 144~bits, which has $2.2 \cdot 10^{43}$ possible permutations. Even discounting bloat in the input representation and the 0.5~bit overhead per weight, this shows that the genome can represent a significant fraction of the number of possible agents.
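To make the encoding and the activation scheme concrete, the following sketch decodes values of the kind described above and runs the Heaviside-activated network on one binary input vector. It is written in Python purely for illustration (the actual project runs as JavaScript in the browser), and details such as the bit ordering, the exact gray-code scaling of the bias, and the activation threshold are assumptions rather than the implementation itself.
\begin{verbatim}
# Sketch of the described encoding: sign/switch bits per weight,
# 4-bit scaled gray-coded biases in [-6, 6], Heaviside activations.
def gray_to_int(bits):
    # Standard Gray-to-binary conversion; bits is a list of 0/1,
    # most significant bit first.
    value, out = bits[0], [bits[0]]
    for b in bits[1:]:
        value ^= b
        out.append(value)
    return int("".join(map(str, out)), 2)

def decode_weight(sign_bit, switch_bit):
    # The switch bit disables the connection entirely.
    return 0 if switch_bit == 0 else (1 if sign_bit else -1)

def decode_bias(bits4):
    # Scale the 4-bit code (0..15) onto the integer range [-6, 6].
    return round(-6 + 12 * gray_to_int(bits4) / 15)

def heaviside(x):
    return 1 if x >= 0 else 0

def forward(w_hidden, b_hidden, w_out, b_out, inputs):
    # w_hidden: 6x6 decoded weights, w_out: 3x6, inputs: 6 binary values.
    hidden = [heaviside(sum(w * i for w, i in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return [heaviside(sum(w * h for w, h in zip(ws, hidden)) + b)
            for ws, b in zip(w_out, b_out)]
\end{verbatim}
An agent then acts on the single activated output, if any, as described above.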
\section{Performance of the \ac{EA}} \begin{figure}[H] \centering \begin{tabularx}{\textwidth}{XcXc} ~ & \begin{tikzpicture} \begin{axis}[xlabel={Generations},ylabel={Fitness / time step}] \errorband{../data/performance-scenario-1-static.txt}{0}{1}{2}{Cyan}{0.4} \addplot +[mark=none, color=Magenta,very thick] table[x index=0,y index=3,col sep=space] {../data/performance-scenario-1-static.txt}; \end{axis} \end{tikzpicture} & ~ & \begin{tikzpicture} \begin{axis}[xlabel={Generations},ylabel={Fitness / time step}] \errorband{../data/performance-scenario-5-static.txt}{0}{1}{2}{Cyan}{0.4} \addplot +[mark=none, color=Magenta,very thick] table[x index=0,y index=3,col sep=space] {../data/performance-scenario-5-static.txt}; \end{axis} \end{tikzpicture} \\ \end{tabularx} \caption{\ac{EANN} performance when trained on 1~static~scenario~(left), and on 5~static~scenarios~(right). Mean population fitness is shown in blue with standard deviation shown in light blue, and the fitness of the best individual is shown in red.} \label{fig:performance_static} \end{figure} \begin{figure}[H] \centering \begin{tabularx}{\textwidth}{XcXc} ~ & \begin{tikzpicture} \begin{axis}[xlabel={Generations},ylabel={Fitness / time step}] \errorband{../data/performance-scenario-1-dynamic.txt}{0}{1}{2}{Cyan}{0.4} \addplot +[mark=none, color=Magenta,very thick] table[x index=0,y index=3,col sep=space] {../data/performance-scenario-1-dynamic.txt}; \end{axis} \end{tikzpicture} & ~ & \begin{tikzpicture} \begin{axis}[xlabel={Generations},ylabel={Fitness / time step}] \errorband{../data/performance-scenario-5-dynamic.txt}{0}{1}{2}{Cyan}{0.4} \addplot +[mark=none, color=Magenta,very thick] table[x index=0,y index=3,col sep=space] {../data/performance-scenario-5-dynamic.txt}; \end{axis} \end{tikzpicture} \\ \end{tabularx} \caption{\ac{EANN} performance when trained on 1~dynamic~scenario~(left), and on 5~dynamic~scenarios~(right). Mean population fitness is shown in blue with standard deviation shown in light blue, and the fitness of the best individual is shown in red.} \label{fig:performance_dynamic} \end{figure} \begin{enumerate} \item \textbf{Agent evolved using 1 static scenario:} The best agent evolved achieved a fitness score of $0.35$. When observing the scenario it was evolved with it performs quite well, consuming 25~(76~\%) of the food and 2~(9\%) of the poison over 60~time~steps. Observing the behavior, it is clear that the agent has developed very basic behavior, where it mostly moves forward until some poison is met, then turning right and continuing. Due to the static environment and little amount of testing, the agent makes bad decisions such as moving to the right when there is food on the left and poison in front and on the right, however since it performs acceptably overall in the scenario, such behavior is sustained in the population. Testing the agent in a random scenario reveals significant deficiencies in the agent's strategy. It ended up getting stuck moving forwards after consuming 7~food and 1~poison, despite there being more food available on both sides of its path, resulting in a fitness score of $0.083$. \item \textbf{Agent evolved using 5 static scenarios:} The best agent evolved achieved a fitness of $\approx{}0.46$, consuming 29~food~(88\%) and 1~poison~(5\%), 29~food~(88\%) and 1~poison~(5\%), 32~food~(97\%) and 0~poison~(0\%), 27~food~(82\%) and 1~poison~(5\%), 27~food~(82\%) and 0~poison~(0\%), respectively in the 5~static scenarios. 
After observing the behavior of the agent in the five static scenarios, it can be seen that all the instances where poison was consumed were due to the agent being surrounded by poison in all directions. There were no examples of bad behavior found, such as consuming poison when other options existed. The agent uses a strategy which involves a lot of agility, and instead of moving in a straight line for as long as possible, it chooses more rapid snake-like motions. This results in good coverage and the ability to consume most of the food available. Testing the agent in a random scenario shows that it has been able to generalize effective behavior, and it was able to consume 28~food~(85\%) and 0~poison~(0\%) while exploring a large section of the grid.

\item \textbf{Agent evolved using 1 dynamic scenario:}

The best agent evolved achieved a fitness of $0.53$, which is close to the theoretical maximum of $0.55$. Since the agent was only tested on a single scenario, this high fitness is likely to be based on some luck rather than just a good strategy. When tested with a random scenario it performs fairly well, consuming 28~food~(85\%) and 3~poison~(14\%). It follows a strategy which involves agile movement without moving only in straight lines, and covers a large area in the grid without repeating locations. It does, however, make one clearly bad move, consuming a poison in a situation where it would have been possible to move to an empty cell.

\item \textbf{Agent evolved using 5 dynamic scenarios:}

The best agent evolved achieved a fitness of $0.45$. When evaluating across multiple scenarios, the fitness value becomes less dependent on luck and is more meaningful, since the agent performs well in several environment configurations. Testing the best agent on a random scenario shows exploring behavior where the agent is able to cover a large part of the grid while consuming 26~food~(79\%) and 1~poison~(5\%). The single poison consumed occurred in a situation where the agent was surrounded by poison.

\end{enumerate}

When comparing the four performance cases, it is clear that using multiple scenarios is necessary to achieve generalized behavior which will work in new scenarios. Using dynamic scenarios instead of static scenarios offers a much greater opportunity to expose and test the population through evolution on a variety of edge cases. The behavior of the agent evolved using one dynamic scenario clearly shows more general behavior compared to the agent evolved using one static scenario. The fitness plots (see Figure~\ref{fig:performance_static} and Figure~\ref{fig:performance_dynamic}) show that the fitness development is more gradual in the dynamic cases compared to the static cases, indicating that learning is more gradual and that new individuals in the population gradually perform better as the population adapts to different scenarios. The standard deviation of the mean population fitness is also visibly lower in the dynamic cases, with the best individual performing more than one standard deviation better than the mean, indicating that the population fitness is more stable than in the static cases.

\textit{NB: Due to an error in testing, all results described in this document use 3~bits to represent the hidden- and output layer bias values, instead of the 4~bits determined by the analysis. This shows that a value of 3~bits per bias value is sufficient to achieve good results.}

\end{document}
%%% PREAMBLE - Do not touch %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage[ansinew]{inputenc}
\usepackage[portuges,brazil,english]{babel}
\usepackage{model}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{color}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\input{pics/abaco}
\cvprfinalcopy % *** Uncomment this line for the final submission
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
\ifcvprfinal\pagestyle{empty}\fi
\newcommand{\TODO}[1]{TODO: #1}
\newcommand{\CITEONE}[2]{\mbox{#1 \cite{#2}}}
\newcommand{\CITETWO}[3]{\mbox{#1 and #2 \cite{#3}}}
\newcommand{\CITEN}[2]{\mbox{#1 et al. \cite{#2}}}

%%% Report beginning %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}

%%% Title and authors %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{Here goes the report title}
\author{Fulano da Silva\thanks{Is with the Institute of Computing, University of Campinas (Unicamp). \textbf{Contact}: \tt\small{[email protected]}}\\
Anderson Rocha\thanks{Is with the Institute of Computing, University of Campinas (Unicamp). \textbf{Contact}: \tt\small{[email protected]}}
}

%%% Abstract %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\maketitle
\begin{abstract}
Here goes the abstract.
\end{abstract}

%%% Introduction %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Here goes the introduction and motivation of the work. Some directions for the paper:
\begin{itemize}
\item Diagrams and figures are encouraged for making the paper richer
\item The sections proposed here are not hard-constrained. It means, you can propose other sections as well as change the existing ones.
\end{itemize}

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Activities}
Here goes the state-of-the-art research (talk about prior work for solving the same problem).

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proposed Solutions}
Talk about the proposed solution for the selected problem.

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiments and Discussion}
Talk about the experiments carried out and the obtained results. Examples of citations~\cite{Ni_2008, Ni_2009}. For direct citations use something like: \CITEONE{Silva}{Silva_2010} for papers with one author. \CITETWO{Silva}{Souza}{Silva_2010b} for papers with two authors. \CITEN{Silva}{Silva_2010c} for papers with three or more authors.

Example of a figure of one column.

\begin{figure}
\begin{center}
\includegraphics[width=0.99\columnwidth]{pics/example-figure}
\caption{A figure example spanning one column only.\label{fig:label}}
\end{center}
\end{figure}

Example of a figure spanning two columns.
\begin{figure*}
\begin{center}
\includegraphics[width=0.99\textwidth]{pics/example-figure-spanned}
\caption{A figure example spanning two columns.\label{fig:label2}}
\end{center}
\end{figure*}

Example of a table spanning only one column:

\begin{table}
\begin{center}
\begin{tabular}{l*{6}{c}r}
Team & P & W & D & L & F & A & Pts \\
\hline
Manchester United & 6 & 4 & 0 & 2 & 10 & 5 & 12 \\
Celtic & 6 & 3 & 0 & 3 & 8 & 9 & 9 \\
Benfica & 6 & 2 & 1 & 3 & 7 & 8 & 7 \\
FC Copenhagen & 6 & 2 & 1 & 2 & 5 & 8 & 7 \\
\end{tabular}
\end{center}
\end{table}

Example of a table spanning two columns:

\begin{table*}
\begin{center}
\begin{tabular}{ | l | l | l | p{8cm} |}
\hline
Day & Min Temp & Max Temp & Summary \\ \hline
Monday & 11C & 22C & A clear day with lots of sunshine. However, the strong breeze will bring down the temperatures. \\ \hline
Tuesday & 9C & 19C & Cloudy with rain, across many northern regions. Clear spells across most of Scotland and Northern Ireland, but rain reaching the far northwest. \\ \hline
Wednesday & 10C & 21C & Rain will still linger for the morning. Conditions will improve by early afternoon and continue throughout the evening. \\ \hline
\end{tabular}
\end{center}
\end{table*}

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions and Future Work}
Present the main conclusions of the work as well as some future directions for other people interested in continuing this work.

%%% References %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\small
\bibliographystyle{unsrt}
\bibliography{refs}
}

\end{document}
\chapter{The Coq Proof Assistant}
\label{ch:coq}

The \TMmodelName{} verification is formalized using the Coq proof assistant. This chapter begins by motivating Coq as a good candidate for a verification system. It then presents a brief overview of the term language of Coq with a focus on some of the features used in \TMmodelName{}.

\section{Why Coq}

This section discusses the reasons Coq was chosen as the mechanized verifier for \TMmodelName{}. It describes how higher-order logics can simplify first-order problems by permitting developers to generalize proofs. It argues that intuitionistic logics are natural choices for automated verification, as truth values are represented as programs inhabiting a type. It discusses how Coq implements both of these features and articulates the useful features of the Coq standard library for the \TMmodelName{} verification.

The high degree of automation available in first-order systems makes them appear very desirable. However, the ability to generalize theorems in higher-order proof systems makes them more practical than first-order systems. Any finite problem can be expressed in a first-order logic. However, without the ability to generalize theorems, developers quickly become burdened with a multitude of specialized proof obligations. Many first-order proof systems offer the ability to write functions to generalize these proofs. However, these functions cannot be verified, increasing the unverified surface of the proof and consequently undermining confidence. Although the safety problem and confinement policy for capability-based systems can be expressed as first-order problems, this verification effort uses Coq because it is a higher-order proof assistant.

Most higher-order logics available fit into two categories: classical higher-order predicate logic and intuitionistic type theory. For proofs that can be automatically verified, intuitionistic and classical logics have the same expressive power. They differ in how they view the concept of truth. Classical logics reason about truth and falsehood directly, relying upon meta-analysis like term rewriting to prove a theorem. Intuitionistic logic reasons about construction and refutation directly, requiring a witness for every truth. These witnesses make statements in intuitionistic logic stronger than their classical counterparts.

The concept of how truth is constructed directly informs computational verification. Proofs as constructions and proofs as meta-inferences necessarily operate in different ways. Constructions and theorems have a direct relationship with software terms and types according to the Curry-Howard isomorphism. \msdnote{Not citing} In Coq, proofs as programs can be directly written by the developer and can be directly manipulated by other programs without resorting to meta-theory; simple type checking may suffice. This reduces the amount of meta-logic necessary to express properties and forms very natural proofs.

Verifying a theorem in Coq is the act of constructing a program satisfying a type representing the theorem. To any developer acquainted with parametric polymorphism and (co)inductive data types, reading and manipulating these programs will be familiar. Base definitions and functions are executable programs that can be readily understood by developers unfamiliar with proofs, reducing mental overhead. While specifying a program precisely can give the developer a great deal of power, Coq also includes a wide array of tools to help construct programs automatically.
Coq includes a tactic meta-language and pattern-matching system to assist the developer when searching for a program. These are combined into a hint system that can produce a high degree of automation for domains that are highly syntactically driven.

The \TMmodelName{} verification utilizes a number of features of the Coq standard library. Boolean decidability is used to place most propositions into Boolean logic, making them computationally decidable. Decidability is a critical problem because undecidability hides the unknowable; a theorem with an unknowable assumption can still be verified. By ensuring that all propositional hypotheses are isomorphic to Boolean functions, \TMmodelName{} ensures that the results are always known. The Coq standard library also provides meta-theory support for equivalence relations. The rewrite tactic in Coq can perform automated rewriting of any equivalence relation, not just built-in equality. It requires only that relevant terms respect the equivalence relation for relevant types. This permits the model to reason about potentially different, but semantically identical, types and terms. The last major features of the Coq standard library utilized by \TMmodelName{} are the axiom-free finite set and map libraries \cite{CoqFSet}. These are very large productions modeled after the OCaml Set and Map libraries. They include a collection of supplemental libraries containing useful theorems pertinent to their interface as well as implementations including a fully concrete definition using lists.

\section{The Coq Term Language}

\Cref{ch:embed,ch:access,ch:safety,ch:flow,ch:confinement} contain a detailed theorem walk-through of the confinement verification in Coq. As previously stated in \Cref{sect:intro:confidence}, one goal of this dissertation is to produce confidence in the confinement verification result. This dissertation presumes that reviewers have confidence in the proof assistant and it therefore does not discuss the mechanics of the proof construction. As such, this section focuses exclusively on the useful portions of the Gallina term language of Coq.

Gallina is a higher-order functional language and constructive dependent type system that implements higher-order intuitionistic logic. The syntax and semantics of Gallina are based on OCaml functions and data-types, but with dependently typed parametric polymorphism. It contains the usual functions, anonymous functions, and fixpoints along with cofixpoints (sometimes called lazy fixpoints). (Co)Inductive types are a generalized notion of (co)data-type with (co)constructor definitions. Gallina also includes a highly type-generalized pattern matching system for (co)constructors.

All terms in Coq belong to one of three sorts: \coqkw{Set}, \coqkw{Prop}, and \coqkw{Type}. \coqkw{Set} is the sort of ``specifications'' or programs, and \coqkw{Prop} is the sort of ``propositions'' or theorems. The difference between the two is how they handle the type \mbox{\coqvar{*} \(\rightarrow\) \coqvar{*}}. The type \mbox{\coqkw{Set} \(\rightarrow\) \coqkw{Set}} is not in the sort \coqkw{Set}, but the type \coqkw{Prop} \(\rightarrow\) \coqkw{Prop} is in the sort \coqkw{Prop}. This makes \coqkw{Prop} impredicative, in that its terms may be self-defining, and \coqkw{Set} predicative, in that it is not. The sort \coqkw{Type} is stratified and somewhat complex. \coqkw{Type} is used very little in this effort and may be considered parametric for either \coqkw{Prop} or \coqkw{Set}.
Unlike general programming languages, all functions in Coq are obliged to terminate. Definitions and anonymous functions do not permit recursion and simply terminate. Fixpoints permit structural recursion that can be automatically inferred. General functions permit the developer to specify both the function and the termination measure, though it may be possible for Coq to infer this as well. Because all functions must terminate, (co)inductive data types are often used to express general propositions as dependent type families.

The connection between programs and proofs can be summarized in relationships with functions and inductive types. Implication and universal quantification are expressed by dependent function types, respectively with or without a named parameter. \coqvar{A} \(\rightarrow\) \coqvar{B} is syntactically the same as \(\forall\) (\coqkw{\_}:\coqvar{A}), \coqvar{B} where \coqkw{\_} is an unused variable. \coqvar{True} is the universally inhabited type and \coqvar{False} is the universally uninhabited type. Negation, written \(\neg\) \coqvar{A}, is syntactically \coqvar{A} \(\rightarrow\) \coqvar{False}.

Most other constructions are inductive types or involve pattern matching. Conjunction, disjunction, and existential quantification are all inductive types. The type (\coqvar{and} \coqvar{A} \coqvar{B}), written \coqvar{A} \(\wedge\) \coqvar{B}, has one constructor \coqvar{conj} requiring terms of types \coqvar{A} and \coqvar{B}. The type (\coqvar{or} \coqvar{A} \coqvar{B}), written \coqvar{A} \(\vee\) \coqvar{B}, has two constructors \coqvar{or\_introl} and \coqvar{or\_intror} requiring only one term of type \coqvar{A} or \coqvar{B}, respectively. Existential quantification over a predicate has one introduction constructor, \coqvar{ex\_intro}, that can only be constructed by a witness satisfying the quantified proposition.

\section{Model Abstraction}
\label{sect:coq:modelAbstraction}

\TMmodelName{} is constructed as an abstract implementation to allow it to be used as a framework for future system verifications. Operating systems are not the only capability-based systems; capability-based systems also include virtual machines, language runtimes, and distributed systems. As an abstract model, \TMmodelName{} focuses on the heart of the confinement problem, producing a result applicable across all domains.

\TMmodelName{} utilizes the Coq module type system as an abstraction mechanism. This decision was motivated by the use of module type abstraction in the axiom-free finite set library. Because abstractions and axioms are the same structure in Coq, \TMmodelName{} also provides a trivial implementation to produce an axiom-free result. It is often the case that the abstract module types are verbatim software from implementation modules. However, it is not possible to produce them by type inference in Coq version 8.3pl2. As a work-around, the project includes a simple tool to syntactically create a module type based on each trivial implementation through very simple annotations.

This verification includes the very primitive Perl script ``typeify.pl'' to automatically produce precise module types from module functors. It does not process the full language of Coq, but processes the commands line-by-line using a very small state transition routine over a very strict module format. The module must contain only one internal module functor, declared on a single line, and the functor parameter list must match the module type parameter list.
Theorems are abstracted by replacing the keyword ``Theorem'' with ``Parameter'' and removing all lines between the commands ``Proof.'' and ``Qed.'' Although Coq allows theorems to nest and to elide the ``Proof'' command, we do not handle these cases. Two commands are supported as comments to provide better abstraction in the generated types. The ``(* ABSTRACT *)'' command processes a ``Definition'' into an appropriate ``Parameter'', allowing other modules to override these definitions with other implementations. The ``(* TYPE\_REMOVE *)'' command eliminates the subsequent line in the generated type altogether and is often used to eliminate helper theorems and lemmas. Any potential errors introduced by this transformation will be caught when the original module is checked against the generated signature.

It is necessary that each module functor and signature be pure, in that all dependencies are completely captured by parametricity. Existentially declaring a module type loses type information in Coq 8.3 that would be available in Coq 8.2. Therefore, updated versions of the \COQFMap{} finite map libraries have been constructed to produce appropriate types. The pattern of constructing pure functors from the trivial implementation is prevalent throughout \TMmodelName{} and produces an axiom-free proof with a type signature that can be satisfied in future efforts. While this syntactic type construction is used wherever possible, there are certain portions of the proof which are encoded manually.

The following conventions regarding module names are used throughout this dissertation. Modules that are also files have the \coqfilemodule{FileModule} font face, whereas inner modules have the \coqvar{InnerModule} font face. The locations of each inner module should be clear from the surrounding context. File modules containing functor implementations are suffixed with -\coqfilemodule{Impl}, while modules constructing a fully complete implementation by functor application are suffixed with -\coqfilemodule{Appl}. File modules containing type signatures, or those which have no abstraction, are given no suffix. Convenience file modules with supplemental libraries are suffixed with -\coqfilemodule{\_Conv} and are further suffixed as above. All abstract module types passed as parameters and convenience modules share the same naming convention throughout the proof, which is summarized in \cref{table:coq:moduleConvention}.
\begin{table} \centering \begin{tabular}{| l | l |} \hline Instance & Declaration and location \\ \hline \hline \COQARSet{} & \COQFSet{} of \COQAccessRight{} \\ \COQRef{} & \COQReferenceType{} module of \COQReferences{}.v\\ \COQRefS{} & \COQRefSetType{} module of \COQRefSets{}.v\\ \COQRefSet{} & \COQReference{} \COQFSet{} of \COQRefS{} \\ \COQEdges{} & \COQAccessEdgeType{} module of \COQAccessEdges{}.v \\ \COQAccessGraph{} & \COQAccessGraphType{} module of \COQAccessGraphs{}.v\\ \COQAG{} & \COQFSet{} of \COQAccessGraphType{} \\ \COQSeq{} & \COQSeqAccType{} module of \COQSequentialAccess{}.v\\ \COQCap{} & \COQCapabilityType{} of \COQCapabilities{}.v\\ \COQCC{} & \COQCapabilityConv{} of \COQCapabilitiesUConv{}.v \\ \COQCapS{} & \COQCapSetType{} \\ \COQCapSet{} & \COQCapability{} \COQFSet{} of \COQCapS{} \\ \COQInd{} & \COQIndexType{} of \COQIndicies{}.v\\ \COQObj{} & \COQObjectType{} of \COQObjects{}.v \\ \COQOC{} & \COQObjectConv{} of \COQObjectsUConv{}.v \\ \COQSys{} & \COQSystemStateType{} of \COQSystemState{}.v \\ \COQSC{} & \COQSystemStateConv{} of \COQSystemStateUConv{}.v \\ \COQSemDefns{} & \COQSemanticsDefinitionsType{} of \COQSemanticsDefinitions{}.v \\ \COQSem{} & \COQSemanticsType{} of \COQSemantics{}.v \\ \COQSemConv{} & \COQSemanticsConv{} of \COQSemanticsUConv{}.v \\ \COQExe{} & \COQExecutionType{} of \COQExecution{}.v\\ \COQMut{} & \COQMutationType{} of \COQMutation{}.v \\ \COQSub{} & \COQSubsystemType{} of \COQSubsystem{}.v \\ \hline \end{tabular} \caption{Declaration and location of common module names.\label{table:coq:moduleConvention}} \end{table}
\section{The comparison}

Once compiled to JavaScript, it is possible to run both algorithms with \textit{NodeJs} in the terminal. We have therefore exploited this possibility to launch the algorithms on some chosen \textit{Regular Expressions} to test the Learners' performances. A comparison between the two algorithms is proposed in the paper \cite{NLPaper}: Bollig \textit{et al.} count the number of states of the final automata given by the Learners and accepted by the Teacher, along with the number of membership and equivalence queries. We added some personal criteria of comparison in order to also see how many times the \OT{} is found not closed or not consistent (we will call these closedness and consistency problems) and the number of transitions of the automata.

All of these statistics can be obtained by executing the files in the \textit{test\_nodejs} folder. The script creates a CSV file in which all the comparison values are stored, and a \textit{Python} file can then parse the CSV and transform it into plots\footnote{To display plots we used the \textit{matplotlib} library \cite{Matplotlib}}.

The statistics aim to show the strengths and weaknesses of the two Learners; we have therefore tested them on specific \textit{Regular Expressions (RegEx)}. These \textit{RegEx} take into account the size of the \textit{cRFSA}, since it \textit{can} be exponentially smaller than the size of the \textit{mDFA}.

\subsection{The cRFSA is exponentially smaller than the mDFA}

The languages recognized by the \textit{RegEx} $\U = (a+b)^*a(a+b)^n$, for a fixed $n$, are known to have \textit{mDFA} whose number of states increases exponentially as $n$ increases. It is also known that these same \textit{RegEx} can be represented by \textit{non-deterministic} automata (\textit{NFA}) whose number of states grows linearly with $n$. If we calculate the \textit{prime} residuals of $\U$, we can see that their number equals $n+1$. We can intuitively understand that $\U$ can be represented by a \textit{cRFSA} with the same number of states as the \textit{NFA}.
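To make the size argument concrete, the short Python sketch below (purely illustrative and independent of the JavaScript implementation of the Learners) tests membership in $\U$ and counts the reachable states of the DFA that remembers the last $n+1$ letters read; for this language that DFA is minimal, so the count is $2^{n+1}$.

\begin{verbatim}
# Illustrative Python sketch: membership in U = (a+b)^* a (a+b)^n and the
# size of its minimal DFA (which must remember the last n+1 letters read).

def in_U(word, n):
    """A word belongs to U iff its (n+1)-th letter from the end is 'a'."""
    return len(word) > n and word[-(n + 1)] == 'a'

def mdfa_size(n):
    """Count the reachable states of the sliding-window DFA."""
    start = 'b' * (n + 1)          # an all-'b' window behaves like "too short"
    seen, todo = {start}, [start]
    while todo:
        state = todo.pop()
        for letter in 'ab':
            nxt = (state + letter)[1:]   # slide the window by one letter
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return len(seen)

for n in range(5):
    print(n, mdfa_size(n))         # prints 2, 4, 8, 16, 32
\end{verbatim}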
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/State nb in A.png}
\caption{Comparing State Number}
\label{fig:StateWrostDFACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/Membership queries.png}
\caption{Membership queries Number}
\label{fig:MemberWrostDFACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/Equivalence queries.png}
\caption{Equivalence queries Number}
\label{fig:EquivWrostDFACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/Transition nb in A.png}
\caption{Transition Number}
\label{fig:TransitionWrostDFACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/Closedness.png}
\caption{Closedness Problem Number}
\label{fig:ClosednessWrostDFACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostDFA/Consistence.png}
\caption{Consistency Problem Number}
\label{fig:ConsistenceWrostDFACompare}
\end{subfigure}
\caption{mDFA vs cRFSA on $\U = (a+b)^*a(a+b)^n$}
\label{fig:wrostDFA}
\end{figure}

As expected, the curve of the number of states of the \textit{mDFA} grows exponentially with $n$, and this also impacts the number of transitions, membership queries and closedness problems. In \cref{fig:ConsistenceWrostDFACompare}, we see that the number of Consistency Problems grows linearly for \textit{L*} and is close to zero for \textit{NL*}.

Let us first analyze L*. Let $\U = (a+b)^*a(a+b)^n$. \textit{L*} sends the three membership queries for $\E, a$ and $b$. After that the table is closed and consistent, and the first conjecture is an automaton $\A$ such that $\LA = \varnothing$. This is not the correct automaton, and the Teacher gives back a counter-example $\omega = a^{n+1}$ of the shortest length\footnote{The Teacher is supposed to be the \textit{Minimal Adequate Teacher}}. All prefixes $p$ of $\omega$ are added to $S$, and $p \notin \U$ for every proper prefix $p$. We have that the table is not consistent: $row(\E) = row(a^n)$ but $row(\E \cdot a) \neq row(a^n \cdot a)$. So \textit{L*} adds the new column $a$. This new column will make $row(\E) = row(a^{n-1})$ but $row(\E \cdot a) \neq row(a^{n-1} \cdot a)$. We continue in this way $n$ times until $E = \{\E, a, \dots, a^n\}$. At this point the table can no longer be inconsistent, since there are no more equal rows in $S$. Since the automaton associated with $\U$ has precisely $2^{n+1}$ states, $S$ will have to contain $2^{n+1}$ different rows. We know that after the first counter-example $\omega$, $|S| = n + 2$\footnote{$\E$ plus the $n+1$ prefixes of $\omega$}, so \textit{L*} will have to encounter $2^{n+1}-(n+1)$ closedness problems to promote the necessary number of rows from $SA$ to $S$.

Let us now analyze the NL* behavior. This algorithm, as said in \cref{section:NL}, tries to find all prime residuals of $\U$. After the first conjecture, which, as for \textit{L*}, recognizes the empty language, the Learner receives the counter-example $\omega = a^n$. Since \textit{NL*} adds the counter-example together with all its suffixes to $E$, it will directly make $E = \{\E, a, \dots, a^n\}$. This coincides with the number of residuals of $\U$.
In this case the algorithm only has to promote $n$ rows, one for each residual; this is possible since the promotion of a row creates new rows in $SA$ to keep the \OT{} complete.

We can finally note that the number of equivalence queries is the same for the two algorithms since:
\begin{itemize}
\item if $n = 0$, then $\U = (a+b)^*a$ and the first equivalence query is immediately accepted, without any consistency problem;
\item otherwise, the two Learners only need two equivalence queries to learn $\U$\footnote{Note that in \cref{fig:EquivWrostDFACompare} the curves of NL* and L* are superposed}. After the first equivalence query, L* and NL* receive the first counter-example and, thanks to the consistency and closedness checks, both algorithms are able to send a second conjecture $\A$ where $\LA = \U$.
\end{itemize}

We can conclude that when the \textit{cRFSA} is exponentially smaller than the \textit{mDFA}, it is better to apply the \textit{NL*} algorithm.

\subsection{The cRFSA has the same size as the mDFA}
\label{sec:worstRFSA}

In this section we show the behavior of the two algorithms on a particular class of regular languages, depending on a fixed parameter $n$, where the \textit{cRFSA} has exactly the same size as the \textit{mDFA} but where the corresponding minimal \textit{NFA} is exponentially smaller. This automaton is proposed in Section 6 of \cite{RFSA}. The construction is done for an automaton $A_n = \langle \Sigma, Q, Q_I, F, \delta \rangle$ where:
\begin{itemize}
\item $\Sigma = \{a, b\}$,
\item $Q = \{q_i \mid 0 \leq i < n \}$,
\item $Q_I = \{q_i \mid 0 \leq i < n/2\}$,
\item $F = \{q_0\}$,
\item $\delta(q_i,a) = q_{i+1}$ for $0 \leq i < n - 1$, $\delta(q_{n-1},a) = q_0$, $\delta(q_0,b)=q_0$, $\delta(q_i,b) = q_{i-1}$ for $1<i<n$ and $\delta(q_1,b)=q_{n-1}$.
\end{itemize}
As proved in that paper, the number of states of a minimal \textit{NFA} equals $n$, but the number of states of the \textit{cRFSA} is exponential with respect to $n$. The statistics of the execution of the two algorithms are shown in \cref{fig:wrostRFSA}.
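For concreteness, the sketch below (in Python, for illustration only; the Learners and Teachers of this project are implemented in JavaScript) transcribes the definition of $A_n$ into a transition table.

\begin{verbatim}
# Illustrative Python sketch of the automaton A_n defined above;
# states are the integers 0..n-1 standing for q_0..q_{n-1}.

def build_A_n(n):
    states = list(range(n))
    initial = [q for q in states if q < n / 2]  # Q_I = { q_i | 0 <= i < n/2 }
    final = [0]                                 # F = { q_0 }
    delta = {}
    for q in states:
        delta[(q, 'a')] = (q + 1) % n           # a: step forward, wrap at q_{n-1}
    delta[(0, 'b')] = 0                         # b: loop on q_0 ...
    for q in range(2, n):
        delta[(q, 'b')] = q - 1                 # ... step back for 1 < i < n ...
    if n > 1:
        delta[(1, 'b')] = n - 1                 # ... and q_1 goes to q_{n-1}
    return states, initial, final, delta
\end{verbatim}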
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/State nb in A.png}
\caption{Comparing State Number}
\label{fig:StateWrostRFSACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/Membership queries.png}
\caption{Membership queries Number}
\label{fig:MemberWrostRFSACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/Equivalence queries.png}
\caption{Equivalence queries Number}
\label{fig:EquivWrostRFSACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/Transition nb in A.png}
\caption{Transition Number}
\label{fig:TransitionWrostRFSACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/Closedness.png}
\caption{Closedness Problem Number}
\label{fig:ClosednessWrostRFSACompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/wrostRFSA/Consistence.png}
\caption{Consistency Problem Number}
\label{fig:ConsistenceWrostRFSACompare}
\end{subfigure}
\caption{mDFA vs cRFSA on $\U = L(A_n)$}
\label{fig:wrostRFSA}
\end{figure}

We can see that the number of states of the two automata generated by \textit{NL*} and \textit{L*} is the same, and that the numbers of closedness problems and membership queries are similar. What changes is the number of equivalence queries, which is notably smaller for the \textit{NL*} algorithm, as is the number of times the \OT{} is found not consistent. This is because adding counter-examples to $E$ instead of $S$ is in general a good means of distinguishing new residuals faster. We are therefore able to reduce the number of Consistency problems and hence the number of Equivalence queries.

\begin{remark}
It is better to have a Learner that poses as few Equivalence queries as possible, because an Equivalence query is more expensive to handle than a Membership query.
\end{remark}

We would also point out that the curve of the Membership queries of NL* in \cref{fig:MemberWrostRFSACompare} is slightly higher than that of L*. In fact, we can see that the number $n$ of states of the \textit{mDFA} equals the number of states of the \textit{cRFSA}, so every state of the \textit{mDFA} corresponds to a \textit{Prime} residual. Let $n$ be the size of the \textit{mDFA} ($=$ size of the \textit{cRFSA}). The Learners have to put at least $n$ different rows in $S$ and use at least $\log_2(n)$ columns to find the correct conjecture. \textit{NL*} has to make more membership queries because every time it receives a counter-example from the Teacher, it has to add it together with all its suffixes to $E$, and since $|S|$ can be big, it may be necessary to make a lot of membership queries to fill every cell of the newly added columns. That is also why in \cref{fig:ClosednessWrostRFSACompare} the curve of \textit{NL*} is slightly worse than the \textit{L*} curve.

Finally, it is interesting to see that the number of transitions of the \textit{cRFSA} is exactly the same as the number of transitions in the \textit{mDFA}. In general, however, and we will try to show it in the next section, the number of transitions in a \textit{cRFSA} is bigger than the number of transitions in the \textit{mDFA}.
\subsection{Results over random Teachers}

The third type of comparison has been performed over a large number of \textit{Teachers} in the form of \textit{mDFA} with sizes varying from $1$ to $100$ states. In this way we are able to see, on average, the performance of \textit{L*} versus \textit{NL*}. This benchmark has been done by taking the \textit{Automata} from the \textit{GitHub} repository \url{https://github.com/parof/buchi-automata-benchmark}, which contains around $2000$ examples. Every automaton is supposed to represent a \textit{Büchi Automaton} over a binary alphabet, and we have reused them to create \textit{mDFA}. We have then transformed every \textit{Automaton} in the list into a \textit{Teacher} and, one by one, every \textit{Teacher} has been submitted to \textit{L*} and \textit{NL*} to obtain statistics that we could compare with those proposed by Bollig \textit{et al.} in \cite{NLPaper}.

\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/State nb in A.png}
\caption{Comparing State Number}
\label{fig:StateBenchMarkCompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/Membership queries.png}
\caption{Membership queries Number}
\label{fig:MemberBenchMarkCompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/Equivalence queries.png}
\caption{Equivalence queries Number}
\label{fig:EquivBenchMarkCompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/Transition nb in A.png}
\caption{Transition Number}
\label{fig:TransitionBenchMarkCompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/Closedness.png}
\caption{Closedness Problem Number}
\label{fig:ClosednessBenchMarkCompare}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{../statistics/plots/BenchMark/Consistence.png}
\caption{Consistency Problem Number}
\label{fig:ConsistenceBenchMarkCompare}
\end{subfigure}
\caption{mDFA vs cRFSA on random Teachers}
\label{fig:benchmark}
\end{figure}

We see in \cref{fig:benchmark} that the number of states of the \textit{cRFSA} is in general exponentially smaller than that of the \textit{mDFA}, and that the comparisons on the number of equivalence and membership queries are in general won by \textit{NL*}. As expected, the comparison on the number of Consistency problems is won by \textit{NL*}, but curiously the Closedness comparison is won by \textit{NL*} only after about $50$ states. Looking at \cref{fig:MemberBenchMarkCompare} and \cref{fig:EquivBenchMarkCompare}, we can see that the gap between the \textit{NL*} and \textit{L*} curves is not very large. Again, as in the previous section, when dealing with Teachers which demand a similar number of Membership and Equivalence queries from both \textit{NL*} and \textit{L*}, it is \textit{NL*} which will have more Closedness problems. This is due to the fact that \textit{NL*} has to find all the \textit{Prime} residuals and so it will have to promote more rows, something that is done more efficiently by \textit{L*}, which adds the counter-example to $S$.
In \cref{fig:TransitionBenchMarkCompare} we see that the number of transitions of the \textit{mDFA} grows linearly with respect to the number of its states\footnote{Number of transitions $= |\Sigma| \times$ number of states of the Automaton}, but the number of transitions in the \textit{cRFSA} is often bigger. This is due to the fact that the \textit{cRFSA} tends to create a lot of transitions, sometimes redundant, from a state $q_i$ to every state $q_i'$ where $L(q_i') \subseteq L(q_i)$.
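As a closing note on tooling, the plots in this section are produced from the generated CSV statistics by a small Python/matplotlib script. A minimal sketch of such a script is shown below; the file name and column names are illustrative assumptions and do not necessarily match the actual CSV layout produced by the \textit{test\_nodejs} scripts.

\begin{verbatim}
# Minimal CSV-to-plot sketch (assumed file name and column names).
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

def plot_metric(csv_path, metric, out_png):
    curves = defaultdict(list)          # one curve per algorithm (L* / NL*)
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            curves[row['algorithm']].append(
                (int(row['mdfa_states']), float(row[metric])))
    for algo, points in sorted(curves.items()):
        points.sort()
        plt.plot([x for x, _ in points], [y for _, y in points], label=algo)
    plt.xlabel('mDFA states')
    plt.ylabel(metric)
    plt.legend()
    plt.savefig(out_png)
    plt.close()

# example: plot_metric('comparison.csv', 'equivalence_queries', 'equiv.png')
\end{verbatim}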
\vspace{-1.5em}
\section{Overview}

\p{Linguistic Technology Systems (LTS) is developing a new database engine called \lConceptsDB{}, which is based on hypergraph data modeling. This new database engine features a multi-paradigm data representation strategy, informed by contemporary research into \AI{} and information semantics and permitting theoretical frameworks such as Conceptual Spaces, Conceptual Role Semantics, and Petri Nets to be applied toward the computational modeling of information spaces. In addition, \ConceptsDB{} prioritizes application development --- particularly native/desktop-style applications. Data about application/session/user state can be directly stored in the database, so that \ConceptsDB{} can provide a convenient scaffolding for implementing desktop software. Hypergraph-based data modeling ensures that information needed by the application can be readily marshaled between runtime formats and whichever persistent/serial/\GUI{} representations are necessary for application-level capabilities.}

\p{LTS is currently working on a prototype of \ConceptsDB{} which is part of our \q{\lConceptsDB{} Application Framework} (\CsAF{}). This framework is oriented to scientific computing and scientific software. To be specific, \CsAF{} database instances can be employed at several sites within a scientific-computing and/or data-sharing platform: as an embedded component of published data sets; as a cloud service managing data and/or publication repositories; or as a tool for individual or institutional users to track data sets and publications. Our \q{\MOSAIC{}} code libraries (\q{Multi-Paradigm Ontologies for Scientific and Technical Publications}) encapsulate functionality applicable to using \ConceptsDB{} in publishing contexts. A \MOSAIC{} Portal is a data/publication repository where users can query and download resources following a protocol associated with \ConceptsDB{}. The \q{\lMOSAIC{} Dataset Explorer} (\MdsX{}) libraries are designed for applications which interoperate with data/publication archives (including but not limited to \MOSAIC{} Portals). Individual data sets may also embed a version of \ConceptsDB{} --- specifically, \q{DigammaDB} (\DgDb{}), which is a light-weight, self-contained \ConceptsDB{}-compatible engine suitable to be distributed as source code within a data set/code repository.}

\p{To help researchers deploy data sets with customized desktop software, LTS has developed a \q{Dataset Creator} (\dsC{}) designed to be used with \DgDb{}. This database engine and data-modeling technology (\MdsX{}, \DgDb{}, and \dsC{}) form a trio of libraries, which are helpful for the implementation of published data sets via special-purpose applications that are customized for viewing and examining information specific to the research methods and paradigms from which a data set originates; and for the implementation of tools to acquire and keep track of data sets and their associated publications.}

\p{Collectively, this trio of libraries forms the basis of a \CsAF{} implementation --- i.e., an Application Framework centered on \ConceptsDB{} --- which prioritizes deploying and accessing scientific data sets.
The information and assets included within a data set, and modeled via \CsAF{} components, can span several different requirements, including: \begin{enumerate}[leftmargin=12pt] \item{} Data sets themselves --- these may include raw data files and/or code for accessing this data as well as demonstrations/implementations of algorithms for analyzing or otherwise processing the data (or data having a similar format/structure or scientific background); \item{} Full-text publications --- potentially including both machine-readable and human-readable (e.g. \PDF{}) formats, annotated so as to link parts of a text document with corresponding parts/elements within its associated data set(s); \item{} Systematic descriptions of research methods, protocols, and (if applicable) equipment, such as may be formally expressed by standards like \MIBBI{} (Minimum Information for Biological and Biomedical Investigations), \BioCoder{} (see Footnote~\hyperref[fnt:bioc]{\ref{fnt:bioc}}), or \Pandore{} (a bioimaging library that contains an \q{Image Processing Objectives} Ontology); \item{} Applications for viewing data sets --- applications which may be in the form of pre-existing software, or custom-built for an individual data set. \end{enumerate}} \p{The \MOSAIC{} Dataset Explorer (\MdsX{}) libraries include code for interoperating with data structures documenting each of these aspects of scientific resources, which may involve querying data sets themselves or querying \API{}s of scientific corpora where data sets and publications are hosted. \lMdsX{} is built around what we call a \q{Scientific Data Repository Model} (\SDRM{}) which is concretized separately for each corpus or repository connected with a given \MdsX{} application. \lSDRM{} is divided into distinct concretizations, or \q{modules,} organized around the \API{}s and data profiles of individual scientific portals/repositories. For this reason, LTS seeks to collaborate with organizations who maintain scientific portals so that we can make open-access \SDRM{} modules available to the general public, targeting those specific scientific resources.} \p{The following sections will describe \MdsX{}, \dsC{}, and \DgDb{} in greater detail as well as provide more information about \SDRM{} modules.} %\p{}
\documentclass{article}

% if you need to pass options to natbib, use, e.g.:
% \PassOptionsToPackage{numbers, compress}{natbib}
% before loading neurips_2021

% ready for submission
\usepackage[final]{_report}

% to compile a preprint version, e.g., for submission to arXiv, add the
% [preprint] option:
% \usepackage[preprint]{_report}

% to compile a camera-ready version, add the [final] option, e.g.:
% \usepackage[final]{_report}

% to avoid loading the natbib package, add option nonatbib:
% \usepackage[nonatbib]{_report}

\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc}    % use 8-bit T1 fonts
\usepackage{hyperref}       % hyperlinks
\usepackage{url}            % simple URL typesetting
\usepackage{booktabs}       % professional-quality tables
\usepackage{amsfonts}       % blackboard math symbols
\usepackage{nicefrac}       % compact symbols for 1/2, etc.
\usepackage{microtype}      % microtypography
\usepackage{xcolor}         % colors
\usepackage[pdftex]{graphicx}

\title{
Kaggle - GettingStarted prediction Competition \\
Digit Recognizer (MNIST)
}

% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to LaTeX to determine where to break the
% lines. Using \AND forces a line break at that point. So, if LaTeX puts 3 of 4
% authors names on the first line, and the last on the second line, try using
% \AND instead of \And before the third author name.

\author{%
Steve Levesque \\
\url{https://stevelevesque.dev} \\
\texttt{[email protected]}
}

\begin{document}

\maketitle

\begin{abstract}
The digit recognition task is the ``hello world'' of machine learning and data science, and is a good place to start out in the field. This article describes the steps used to achieve ``acceptable'' results (by which we mean a score around 99\%, and lower than 99.6\%), possible ways to increase the score fairly, i.e. without bias or cheating, and reasons to discard unreasonably high scores such as 99.957\% or, worse, 100\%. The report is written to be light and accessible to a general audience; the explanations of results and advances are therefore deliberately kept at a high level.
\end{abstract}

\section{Introduction}
For this type of task, it is easily possible (especially as of today, 2021-22) to build a model that reaches above 90\% in a matter of minutes, or around 97-99\% in less than an hour. The tricky part about MNIST digit recognition is mainly the last hundred digits or so, both for the model AND for humans. Some of these digits are hard even for us to classify, since there are very badly written numbers, such as a four that strongly resembles a nine, and others of the sort.

\section{MNIST Dataset}
\subsection{Data Description}
The MNIST dataset consists of handwritten digits between 0 and 9. The traditional task with these digits is to classify each picture of 784 pixels, i.e. of dimension $28 \times 28$ with 1 color channel (black and white), into its rightful category (a number between 0 and 9, respectively).

\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.5]{../plots/mnist_introduction_numbers.png}
\caption{First 9 digits of the Kaggle MNIST test set}
\end{figure}

\subsection{Data Augmentation}
To help generalization in machine learning, it is a good solution to have more data that is not the test set nor a subset/superset of it
(to avoid bias or, worse, cheating). It is possible to modify the images of our train set with small equivariant variations such as translations, rotations and zooming (an invariant variation, as the word suggests, would just copy the image and double the size of our training set, which is useless; likewise, rotations must stay small rather than a full 180 degrees, else a 6 and a 9 would look the same).

With these augmentations, digits have a better chance of being rightly classified if someone writes with a more slanted style or with an unsteady hand. We can see in the first 9 digits of the test set in Figure 1 that there are four zeros written with different sizes, shapes, stroke widths, etc.

\section{Algorithms}
\subsection{XGBoost}
``XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable.'' This first sentence from their website describes XGBoost well in a few words. It can train and predict with quite an acceptable score given the little amount of time it takes, and it scales to large and diverse problems. The score is 93.78\% for about 10 minutes of training.

\subsection{KNN}
Another simple and easy algorithm to use is K Nearest Neighbors. It works well on many problems where distances are meaningful. We can set the number of neighbors to one to get a reasonably decent result. It won't be the best, but since MNIST digits of the same class are normally close to each other in distance, it classifies similar-looking numbers together without hassle. The score won't be 99\%, since there will be errors when a seven looks like a one, and other similar irregularities. The score is 97.01\% for about 30 minutes of training.

\subsection{CNN}
However, a CNN is the method of choice if the intent is to perform well on a task that requires computer vision, such as this one. We start by defining a conv-net for our model. Keras is used for the network, data augmentation, model saving/loading and finally for prediction. Next, we choose the right number of epochs and batch size for the training. When done, the final step is plotting and comparing the results. It is important to see what is gained and at what cost (i.e. time spent for every 0.1\% gained).

\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.3]{../plots/mnist-supervised-classification-image-cnn_keras-model.png}
\caption{Conv-net with Keras used for the model.}
\end{figure}

\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.25]{../plots/mnist-supervised-classification-image-cnn_2epochs-128batch_curves.png}
\includegraphics[scale=0.25]{../plots/mnist-supervised-classification-image-cnn_50epochs-64batch_curves.png}
\caption{Loss and accuracy curves for 2~epochs with batch size 128 (left) and 50~epochs with batch size 64 (right).}
\end{figure}

\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.25]{../plots/mnist-supervised-classification-image-cnn_2epochs-128batch_confusion-matrix.png}
\hspace{\fill}
\includegraphics[scale=0.25]{../plots/mnist-supervised-classification-image-cnn_50epochs-64batch_confusion-matrix.png}
\caption{Confusion matrices for 2~epochs with batch size 128 (left) and 50~epochs with batch size 64 (right).}
\end{figure}

The results are obtained with data augmentation applied beforehand. With only 2 epochs and a large batch size of 128, we can reach as high as 98.69\%. This is good taking into account that it takes around 10 minutes to train. With 50 epochs and a batch size of 64, we can reach up to 99.55\%. With double the amount of epochs (100), we only get a 0.021\% increase, i.e. around double the time for only that much more. In summary, the first score (98.69\%) takes about 10 minutes of training, the second (99.55\%) about 30 minutes, and the third (99.57\%) about 1 hour.
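A minimal sketch of this kind of Keras setup is shown below. The layer sizes, augmentation ranges and training parameters here are illustrative assumptions; they are not the exact architecture of Figure 2 nor the exact settings behind the reported scores.

\begin{verbatim}
# Illustrative Keras sketch (assumed layers and augmentation ranges).
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation='relu'),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.4),
        layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Small equivariant variations: shifts, rotations, zoom (no flips).
augment = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)

# x_train: (N, 28, 28, 1) floats in [0, 1]; y_train: one-hot labels.
# model = build_model()
# model.fit(augment.flow(x_train, y_train, batch_size=64),
#           epochs=50, validation_data=(x_val, y_val))
\end{verbatim}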
\section{Results}
The table below contains results obtained from the algorithms above and from some other, less legitimate, techniques. Scores above 99.57\% are discussed in the following sections. Above this threshold, where the remaining digits are much harder to classify, higher accuracy is always possible, but at a high cost in time and/or resources. NB: the public score also constitutes the private score, since the public leaderboard uses 100\% of the test data, this being a competition for learning purposes.

\begin{table}[!htbp]
\caption{Leaderboard Results by Algorithm}
\label{results-table}
\centering
\begin{tabular}{lll}
\toprule
Algorithms & Public \\
\midrule
XGBoost & $\sim$93.78\% \\
KNN (neighbours = 1) & $\sim$97.01\% \\
CNN 2 epochs and 128 batch-size & $\sim$98.69\% \\
CNN 50 epochs and 64 batch-size & $\sim$99.55\% \\
CNN 100 epochs and 64 batch-size & \textbf{$\sim$99.57\%} \\
CNN 50 epochs and 64 batch-size with QMNIST & $\sim$99.96\% \\
1 for-loop on MNIST original dataset & 100\% \\
\bottomrule
\end{tabular}
\end{table}

\section{Biased Methods}
There are lots of attractive results, like the 99.96\% with QMNIST or the infamous 100\%, that are nothing but a scam, especially for a problem where even humans cannot reach 100\% unless lucky, because the last 1\% of the digits are so badly written that we have a hard time figuring them out. ``The QMNIST dataset was generated from the original data found in the NIST Special Database 19 with the goal to match the MNIST preprocessing as closely as possible.''\footnote{https://github.com/facebookresearch/qmnist} How is it possible to get such results? The answer revolves around a simple fact: the labels are given, because the test set is a subset, directly or indirectly, of the train set.

A result of 100\% can be achieved by brute force on the original MNIST dataset, which contains every element of the Kaggle competition's test set. The same principle applies to QMNIST: even though it is not a perfect match, it is one indirectly, since its generation had the goal of mimicking MNIST. With 120k digits coming from a reconstruction heavily based on MNIST, where around 10 digits out of 60k of them are invariant counterparts of the test set, we can assume that every other equivariant digit is very similar to MNIST and that, in consequence, it is possible that the test set is a subset of QMNIST. A score of 99.96\% could be a good sign that it is indeed.

\section{Possible Derivations}
Is it the end? No; it is possible to find articles and works that perform better than 99.57\% without any use of the test set or super/subsets of anything related.

\subsection{Ensemble on Trained Models}
As done in this paper\footnote{https://arxiv.org/pdf/2008.10400v2.pdf}, it is possible to combine multiple different models and achieve a result between 99.8\% and 99.9\% (the paper reports an accuracy of 99.91\% on MNIST). There is also an ensemble method used on Kaggle to obtain 99.76\%\footnote{https://www.kaggle.com/cdeotte/25-million-images-0-99757-mnist/notebook} with a similar technique.

\subsection{Ensemble on Models and Algorithms}
There could be a way to achieve better performance than humans if we ensemble multiple algorithms, each with multiple models.
This could reach a point where the machine learns the pattern used for ``generating'' the MNIST digits and correctly classifies the last hundred nearly incomprehensible digits. In addition, it could exploit aspects not directly captured by a CNN with ensemble models, such as distances, to gain possible additional insights. However, this would cost an exponential amount of resources and time for only about a 0.01\% increase.

\section{Conclusion}
As of the time of writing, there is a threshold up to which a single CNN model can go, and beyond which ensemble methods with heavy data augmentation are necessary to scrape off an additional fraction of a percent or two. Everything reaching 99.9\% or 100\% should be discarded, as a human would have difficulty reaching such precision.

%\section*{Acknowledgments}

\begin{thebibliography}{99}

\bibitem{ref1} Kaggle Competition Digit Recognizer \url{https://www.kaggle.com/c/digit-recognizer/overview}

\bibitem{ref2} Hyperopt \url{http://hyperopt.github.io/hyperopt/}

\bibitem{ref3} scikit-learn \url{https://scikit-learn.org/stable/}

\bibitem{ref4} XGBoost \url{https://xgboost.readthedocs.io/en/stable/}

\bibitem{ref5} QMNIST \url{https://github.com/facebookresearch/qmnist}

\bibitem{ref6} Confusion Matrix Plot \url{https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html}

\bibitem{ref7} Matplotlib.pyplot \url{https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.html}

\bibitem{ref8} Python \url{https://www.python.org/}

\bibitem{ref9} Keras \url{https://keras.io/}

\bibitem{ref10} Tensorflow \url{https://www.tensorflow.org/}

\bibitem{ref11} High Scoring MNIST Solutions \url{https://paperswithcode.com/sota/image-classification-on-mnist}

\bibitem{ref12} Paper - An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit Recognition \url{https://arxiv.org/pdf/2008.10400v2.pdf}

\bibitem{ref13} Kaggle MNIST 0.99757 \url{https://www.kaggle.com/cdeotte/25-million-images-0-99757-mnist/notebook}

\end{thebibliography}

\end{document}